ITFMA Journal

Understanding Your Financial Data: Tips and Tricks for Achieving 360º Visibility

Howard Hastings, Senior Director, Blazent

These are edited excerpts from Howard Hastings' presentation at the ITFMA Financial World of Information Technology Conference, August 2010.

When it comes to producing IT financial reports, the chief complaint heard is that too many data values are either missing or inconsistently represented. This creates a lack of organizational support because the output of IT financial management often must be reworked by each individual recipient to meet their own needs. Further, that assumes the person in question can obtain the missing information and is skilled enough to manipulate the data to deliver the desired analysis. This article will present a bottom-up approach to developing a data model that uncovers the available data sources within each organization.

Why is IT-produced information so bad?
Much of the data from technology products was NOT created for YOUR use; it primarily supports vendor problem diagnosis. Metrics are proprietary, with few industry standards to govern them, so there is little to no quality control. Customer-oriented output is not always well architected: it is designed by vendors, too often without adequate customer input.

What is the solution?

It is a data-sourcing approach, using a data model that is designed around your desired outcomes.

  • Avoid starting out with end-all, be-all committee-based efforts. The big-bang approach always goes bust.
  • Concentrate on the highest-value, highest-priority deliverables. These are highly visible (or “political”) business initiatives, e.g., mergers and acquisitions, new ventures, budget slashing. They also include deadline-sensitive inquiries such as audits and regulatory requirements.
  • Strive for the minimum data requirements.
  • Research what was used before for the same purpose. Don’t forget, your organization has been operating for a long time.
  • Establish your desired outcomes. Determine the need for high-performance “fat-client” vs. lower-performance “thin-client” delivery.
  • Establish your required inputs, such as a list of all users, users’ computer specifications (CPU type and speed, RAM amount), and total application usage by user (number of accesses, duration of access). A minimal data model sketch for inputs like these follows this list.
  • Establish the constraints that are discovered during data sourcing.
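
To make the list of inputs above concrete, the following is a minimal sketch of an input data model, written here in Python. The class and field names (ComputerSpec, ApplicationUsage, etc.) are illustrative assumptions, not a prescribed schema; adapt them to your own sources.

    # A minimal, illustrative data model for the example inputs above
    # (users, computer specifications, application usage). Field names
    # are assumptions for illustration, not a prescribed schema.
    from dataclasses import dataclass

    @dataclass
    class ComputerSpec:
        cpu_type: str
        cpu_speed_ghz: float
        ram_gb: int

    @dataclass
    class User:
        user_id: str
        name: str
        computer: ComputerSpec

    @dataclass
    class ApplicationUsage:
        user_id: str
        application: str
        access_count: int              # number of accesses
        total_duration_hours: float    # duration of access

    # Example records
    alice = User("u001", "Alice", ComputerSpec("x86-64", 2.4, 8))
    usage = ApplicationUsage("u001", "General Ledger", access_count=42,
                             total_duration_hours=17.5)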

It is crucial to understand your data.

  • Determine the quantity of all the data records involved. You must know the totals at the beginning, in case you have to exclude “un-processable” records.
  • Assess the quality of each and every data set. This involves statistical analysis of what’s complete and clean vs. everything else (a profiling sketch follows this list). Document the context and vocabulary of all the data elements. If there is more than one source application, a hierarchy of “trust” must be established. Consider the methods used to create data, e.g., collected and reported “raw” vs. normalized vs. calculated data. In the data descriptions, consider what the data element represents and how it is used.
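
As an illustration of the quantity and quality assessment described above, here is a minimal profiling sketch in Python that counts every record first and then separates complete, clean records from everything else. The required-field list and the cleanliness rule are assumptions for illustration only.

    # Count every record before excluding anything, then split
    # "complete and clean" records from everything else.
    REQUIRED_FIELDS = ["user_id", "application", "access_count"]

    def profile(records):
        total = len(records)  # know the total at the beginning
        clean, problems = [], []
        for rec in records:
            missing = [f for f in REQUIRED_FIELDS if not rec.get(f)]
            if missing:
                problems.append((rec, missing))
            else:
                clean.append(rec)
        return {
            "total_records": total,
            "clean_records": len(clean),
            "unprocessable_records": len(problems),
            "percent_clean": round(100 * len(clean) / total, 1) if total else 0.0,
        }

    sample = [
        {"user_id": "u001", "application": "General Ledger", "access_count": 42},
        {"user_id": "u002", "application": "", "access_count": 7},   # incomplete
    ]
    print(profile(sample))
    # {'total_records': 2, 'clean_records': 1, 'unprocessable_records': 1, 'percent_clean': 50.0}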

Establish the automation requirements.

  • Identify your available existing tools. Determine their capabilities, e.g., extract-transform-load (ETL) vs. CSV import/export, tabular reporting vs. graphical reporting, pre-defined transforms vs. keyword operators vs. “raw” SQL. Assess their functionality, e.g., graphical drag-and-drop UI vs. cryptic command statements, manual configuration vs. wizards/templates vs. self-learning. (A minimal CSV-based sketch follows this list.)
  • Assess the technology gaps between your existing tools and any needed new tools, then do a cost-benefit analysis (build vs. buy) for the new tools.
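
For the simplest capability named above, CSV import/export with a pre-defined transform, the following is a minimal sketch using only the Python standard library. The file names, column name, and transform rule are assumptions for illustration.

    # A minimal CSV import/export pass with one pre-defined transform:
    # trim whitespace and standardize the "application" column's case.
    import csv

    def normalize_row(row):
        return {k: v.strip().upper() if k == "application" else v.strip()
                for k, v in row.items()}

    def etl(in_path, out_path):
        with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
            writer.writeheader()
            for row in reader:
                writer.writerow(normalize_row(row))

    # Hypothetical file names for illustration:
    # etl("usage_raw.csv", "usage_clean.csv")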

Next, build the “knowledgebase” required for all data “processing.” Acknowledge up front that it will require a “community” effort, so identify and gain buy-in from all affected stakeholders. Then conduct a “proof-of-concept” and adjust based on input from the stakeholders. Don’t forget data categorization and metadata. Often data about the data involved sheds new light on the analysis or decision being addressed.
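
As one example of “data about the data,” the following sketch shows a simple data dictionary that records each element’s source, creation method (raw, normalized, or calculated), and place in the hierarchy of trust. All entries are illustrative assumptions.

    # A simple data dictionary: metadata about each data element,
    # including its source, how it was created, and its trust rank.
    DATA_DICTIONARY = {
        "access_count": {
            "source": "application logs",
            "creation_method": "raw",
            "trust_rank": 1,   # 1 = most trusted
            "represents": "number of times a user opened the application",
        },
        "total_duration_hours": {
            "source": "monitoring agent",
            "creation_method": "calculated",
            "trust_rank": 2,
            "represents": "summed session length per user per application",
        },
    }

    def most_trusted(element_names):
        return min(element_names, key=lambda n: DATA_DICTIONARY[n]["trust_rank"])

    print(most_trusted(["access_count", "total_duration_hours"]))  # access_count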

When a complete set of empirical data is unavailable, you have to replace the missing data elements with something. When “good” data exists, the most acceptable technique for filling in missing data is the “80/20 rule”: base the estimate on the 80% of the input you do have. If the total data used as input represents only a sample of what should or could be available, you MUST know all of the parameters. Remember, your organization has been operating for a long time, so don’t be afraid of the power of educated estimates.
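
The following is a minimal sketch of such an educated estimate: extrapolating a total from the known portion of the data while explicitly recording the population size, which is one of the parameters you MUST know. The figures are illustrative assumptions.

    # Extrapolate a total from the known records; the population size
    # is the parameter that must be known. Numbers are illustrative.
    def estimate_total(known_values, population_size):
        """Scale the observed values up to the full population."""
        coverage = len(known_values) / population_size
        observed = sum(known_values)
        return observed / coverage, coverage

    # Software cost is known for 800 of 1,000 users (the "good" 80%).
    known_costs = [120.0] * 800
    total, coverage = estimate_total(known_costs, population_size=1000)
    print(f"coverage={coverage:.0%}, estimated total=${total:,.0f}")
    # coverage=80%, estimated total=$120,000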

Sometimes you have no alternative but to base your estimates on other data sources.

  • Vendor customer records are often a reliable source. Be aware of purchasing and maintenance practices within your organization, as vendor data may not represent 100% of the actual data. ERP and similar automated service management system data is usually the most trustworthy. Sales and support data can provide “list prices” against which customary discounting rates can be applied (a brief worked example follows this list). Contract data may be used, but with caution, since valid-date terms and amendments can create significant gaps and overlaps in the data.
  • Benchmark information can be used when required data is completely missing, so replacing many metrics with data from similar organizations may be acceptable. Trade associations and industry analysts are excellent sources; consultants are usually a less preferred choice. Be aware that senior management may be “nervous” about seeking some data from external sources due to competitive or legal concerns.
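
As a brief worked example of the list-price approach mentioned above, the following sketch applies a customary discount rate to vendor list prices to estimate spend. The prices, quantities, and discount rate are illustrative assumptions.

    # Estimate spend from vendor list prices with a customary discount
    # applied. All figures are illustrative assumptions.
    list_prices = {"Database license": 4500.00, "Monitoring agent": 150.00}
    quantities  = {"Database license": 12,      "Monitoring agent": 300}
    customary_discount = 0.35   # 35% off list, assumed from past purchases

    estimated_spend = sum(
        list_prices[item] * quantities[item] * (1 - customary_discount)
        for item in list_prices
    )
    print(f"Estimated spend: ${estimated_spend:,.2f}")  # Estimated spend: $64,350.00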

The following are various sources of standards that may be useful in preparing IT financial reports:

  • International Accounting Standards Board (IASB) provides a framework for the preparation and presentation of financial statements. The International Financial Reporting Standards (IFRS) contain standards issued after 2001, while International Accounting Standards (IAS) contain standards issued before 2001.
  • Financial Accounting Standards Board (FASB) publishes standards for financial accounting and reporting that foster financial reporting by non-governmental entities and provide decision-useful information to investors and other users of financial reports.
  • Governmental Accounting Standards Board (GASB) is the government equivalent of FASB.
  • ISO/IEC 19770-0 “Overview and Vocabulary” provides an overview of the SAM standards as well as the vocabulary used throughout the 19770 series, including identification of how the various standards work together to support an overall SAM ecosystem.
  • ISO/IEC 19770-2:2009 “Software Identification Tags” was published in November 2009 and contains data standards for software identification tags that provide authoritative identifying information for installed software or other licensable items, such as fonts or copyrighted papers. It specifies normalized mandatory and optional data elements in a platform-independent XML file structure to support unique software title references.
  • ISO/IEC 19770-3 “Software Entitlement Tags” is currently under development and will contain data standards for software licensing entitlement tags that provide authoritative identifying information about software licensing rights. It specifies metrics to calculate entitlement consumption that are provided through the purchasing process and designed to work in concert with 19770-2.
  • Security Content Automation Protocol (SCAP) is a method for using specific standards to enable automated vulnerability management, measurement, and policy compliance evaluation. The National Vulnerability Database (NVD) is the U.S. government content repository for SCAP.
  • Common Platform Enumeration (CPE) is a structured naming scheme for information technology systems, platforms, and packages. Based upon the generic syntax for Uniform Resource Identifiers (URI), CPE includes a formal name format, a language for describing complex platforms, a method for checking names against a system, and a description format for binding text and tests to a name.
  • Desktop Management Interface (DMI) was originally designed as the first desktop management standard, and it evolved to become the standard framework for identifying, managing and tracking hardware components in desktop, notebook or server computers.
  • Common Information Model (CIM) is a conceptual schema that defines how the managed elements in an IT environment are represented as a common set of objects and relationships between them.
  • Virtualization Management Initiative (VMAN) is a set of specifications that address the management lifecycle of a virtual environment. VMAN’s Open Virtualization Format (OVF) specification provides a standard format for packaging and describing virtual machines and applications for deployment across heterogeneous virtualization platforms, while VMAN’s profiles standardize many aspects of the operational management of a heterogeneous virtualized environment.


You will remain “on your own” in resolving your problems with missing and inconsistently represented data for IT financial reporting purposes unless you force vendors to conform to industry standards through request for proposal (RFP) requirements and by influencing “early stage” product development. Get involved with standards efforts, contribute your expertise, and share your requirements with industry analysts.


Copyright © 2011 by the IT Financial Management Association.
