The Year of Ampersand Part II: The Never-Ending Story of Completeness and Accuracy

I remember in my early days of auditing, there wasn’t much thought given to the completeness and accuracy (C&A) of information. Certainly, engagement teams tied out subledgers to the GL and performed tests of details or analytics to validate information, but there wasn’t the same consciousness as there is currently in this, the information age. Today, it seems testing completeness and accuracy is a never-ending story with always more to be done.
Conscious of the growing need for relevant and reliable information, in October 2021, the PCAOB published its Staff Guidance – Insights for Auditors Evaluating Relevance and Reliability of Audit Evidence Obtained From External Sources. This guidance is predicated on AS 1105.06 – Audit Evidence which states that “Appropriateness is the measure of the quality of audit evidence, i.e., its relevance and reliability. To be appropriate, audit evidence must be both relevant and reliable in providing support for the conclusions on which the auditor's opinion is based.”
In Part I of our series on this topic, we wrote about important considerations in evaluating relevance and reliability. While ALL audit evidence must be both relevant and reliable, there is an important distinction between audit evidence that emanates from sources external to the company and information produced by the entity (also known as IPE). The auditing standards are more stringent when considering the relevance and reliability of IPE. AS 1105.10 goes on to specifically require that:
When using information produced by the company as audit evidence, the auditor should evaluate whether the information is sufficient and appropriate for purposes of the audit by performing procedures to:
- Test the accuracy and completeness of the information, or test the controls over the accuracy and completeness of that information; and
- Evaluate whether the information is sufficiently precise and detailed for purposes of the audit.
Despite the continued focus on C&A, teams are still struggling to sufficiently test IPE for C&A. In its most recent inspection observations, the PCAOB identified issues surrounding C&A of information both within the realm of ICFR as well as in substantive testing. Let’s dig into the two potential testing approaches: controls and substantive.
Testing Controls Over C&A
Given that more and more information is being generated electronically, it’s becoming increasingly difficult to test C&A without testing controls. When applying a controls approach, remember that controls are first and foremost the responsibility of management. In fact, as part of the management representation letter, management must assert to its “responsibility for the fair presentation in the financial statements of financial position, results of operations, and cash flows in conformity with generally accepted accounting principles” (AS 2805.06). Similarly, for controls, management must state “that management did not use the auditor's procedures performed during the audits of internal control over financial reporting or the financial statements as part of the basis for management's assessment of the effectiveness of internal control over financial reporting” (AS 2201.75).
With that in mind, when testing controls over the C&A of information, it’s important to understand the information pipeline. While the end result may be a simple report, controls over completeness and accuracy require various controls from origination to reporting. As the old adage says, “Garbage in, garbage out.”
What data matters
Before you start tracing the data through the process, you first need to think about the key data that matters. Too often, reports are considered in their entirety, but there is always specific data that is critical to what eventually shows up on the financial statements. For example, for revenue you likely need to know the product sold, the revenue recognition date (e.g., the shipment date), and the purchase price. Additional information may be critical to certain controls over that data (e.g., management review controls), but when understanding the information pipeline, it helps to narrow it down to the data that really matters, especially when testing the reliability of information used in the audit. Too often, a report is tested without addressing all the key data.
Data origination
Consider next the source of the information; where does it originate? Upstream controls are critical as they govern the initial input of information into a system. For instance, when testing an inventory aging report, the input of the initial purchase date is a critical component for calculating the aging of the inventory. These upstream controls typically involve business-process-level controls.
Systems, Interfaces and Data Warehouses
Once source data is input into a system, it’s important to understand the flow of information from input to the reporting system. Sometimes this might be one all-encompassing system, but often, there are multiple interfaces, systems and even sometimes data warehouses involved prior to the generation of a report.
For each system involved in the pipeline, management should have robust information technology general controls (ITGCs) in place to ensure the reliability of the systems. ITGCs themselves don’t provide direct assurance over data or reports, but rather, they are designed to ensure that systems are safeguarded from inappropriate access and inappropriate changes throughout the audit period. In other words, if effective ITGCs are in place, then the system should operate as designed, but that means the auditor still needs to validate the design (i.e. testing the report – we’ll get to that in the next section).
For each interface, it’s important to test the relevant controls that ensure complete and accurate transference of data between systems. Most interfaces involve processing and filtering of data. In these cases, the engagement team will need to validate what filters are in place, whether they are appropriate and how data transmission exceptions are resolved.
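To illustrate the kind of reconciliation an interface control performs, here is a minimal Python sketch of classic batch control totals (a record count plus an amount total) compared on each side of an interface. The `amount` field name and the three-way sent/loaded/rejected split are assumptions for illustration, not a prescribed control design.

```python
def control_totals(records, amount_field="amount"):
    """Record count and amount total for a batch of records -- the classic
    control totals compared on each side of an interface."""
    return {
        "count": len(records),
        "amount": round(sum(r[amount_field] for r in records), 2),
    }

def interface_reconciles(sent, loaded, rejected, amount_field="amount"):
    """Completeness check across an interface: what was sent should equal
    what was loaded plus what was rejected, in both count and amount."""
    s = control_totals(sent, amount_field)
    l = control_totals(loaded, amount_field)
    r = control_totals(rejected, amount_field)
    return (s["count"] == l["count"] + r["count"]
            and abs(s["amount"] - (l["amount"] + r["amount"])) < 0.01)
```

A transmission exception that is dropped rather than routed to a rejection queue would fail this check on both count and amount, which is exactly the point of resolving exceptions rather than letting them disappear.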
Finally, if there are data warehouses involved, similar to systems, it’s critical to understand the controls in place that safeguard the information while it resides in the data warehouse. Data warehouses often store data in different structures than the source system to aid in reporting. How is management comfortable that the data is completely and accurately transferred to the data warehouse?
Reports and queries
The final step in the process is understanding how the information is gathered and aggregated into the report or query. The nature of the controls and testing to be done will always depend on the type of report and output. Standard reports that generate PDFs pose less risk, for example, than customized queries that produce an Excel output.
Regardless of the nature of the report, there is always some testing to be performed. If the report is a simple standard report (e.g., a listing of information), the testing may be limited to understanding the change management controls around the report and validating that the report has in fact not been customized. If, on the other hand, it is a customized query, the engagement team should consider:
- What is the report configuration? Teams will need to review and test the report parameters. Does the report run automatically based on a schedule? Or is it run ad-hoc? Are there pre-set report parameters or does the user have the ability to dictate what information is included in the report (i.e. date ranges, business units, GL accounts, customer accounts, etc.)?
- What data fields are pulled in for the report? Is there any processing / manipulation / aggregation / synthesis of data as a result of the reporting functionality? If so, we need to understand what processing there is and validate it.
- What controls are in place to govern change management around reports? Often, companies will write specific standardized query scripts where the user has to input specific data parameters. In addition to understanding those parameters, teams need to understand and test the change management controls that safeguard the script.
- What controls are in place to protect data / queries once reports have been produced? This is especially important for dynamic files that may either a) be modified by human review subsequent to report generation or b) be linked and updated either in real time or at regular frequencies. Because of the susceptibility to change, for any report that is generated in Excel, the team should observe the generation of the report, including transference to the auditor directly.
These are just some of the questions to consider. The reality is that there is no one way to test a report. The testing approach will always be unique to the company’s systems and controls and will be impacted by the nature of the information being tested.
Substantively Testing C&A
Alternatively, if engagement teams opt not to test controls over C&A, they may substantively test the C&A of IPE. When performing substantive testing without controls reliance, however, it’s important to remember that teams must test C&A each and every time they obtain a report.
Completeness
There are various ways to substantively test the completeness of information, and the design of the completeness test will always depend on the nature of the information:
- Reconcile a report to an alternative report or source. For instance, an engagement team could reconcile an investment purchases and sales journal to bank/broker statements to ensure completeness.
- Perform an account rollforward. Some reports, such as the journal entry listing, can be tested for completeness by rolling the opening trial balances (i.e. prior year balances) forward to the current period balances using the journal entry listing.
- Validate sequential numbering. Some reports, such as cash disbursement journals, may have sequential numbering that can help validate completeness.
- Perform detailed substantive testing, such as floor-to-sheet testing for physical inventory.

When testing the completeness of a report, it is important to distinguish between completeness as a financial statement assertion and completeness as an information processing objective. Sometimes the testing can accomplish both, as with floor-to-sheet testing for physical inventory, but often the completeness of a report does not directly translate into completeness for the related financial statement assertion.
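To make the mechanics concrete, the sequential-numbering and rollforward approaches can be sketched in a few lines of Python. This is an illustrative sketch under assumed inputs (integer document numbers, a simple balance tie-out with a rounding tolerance), not audit software:

```python
def find_sequence_gaps(doc_numbers):
    """Return any missing numbers in a supposedly sequential population,
    e.g. check numbers in a cash disbursements journal."""
    present = set(doc_numbers)
    return [n for n in range(min(present), max(present) + 1) if n not in present]

def rollforward_ties(opening, activity_total, closing, tolerance=0.01):
    """Check that opening balance + journal entry activity = closing balance."""
    return abs(opening + activity_total - closing) <= tolerance
```

For example, `find_sequence_gaps([1001, 1002, 1004, 1005])` returns `[1003]`, flagging a disbursement that is missing from the journal and needs to be explained.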
Accuracy
Accuracy is an easier concept to grasp, as it is often the very nature of a test of details. In other words, accuracy is typically validated by taking a sample of items from a report and reconciling them back to supporting audit evidence. When testing accuracy, it’s important to cover all of the key data fields.
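A minimal sketch of this sample-and-reconcile approach, with hypothetical field names (`doc_id`, the key fields) and a seeded random selection standing in for whatever sampling method the team actually uses:

```python
import random

def accuracy_exceptions(report_rows, source_lookup, key_fields, sample_size, seed=0):
    """Select a seeded random sample of report lines and reconcile each key
    data field back to supporting evidence. Returns a list of exceptions as
    (doc_id, field, report_value, source_value) tuples."""
    rng = random.Random(seed)
    sample = rng.sample(report_rows, min(sample_size, len(report_rows)))
    exceptions = []
    for row in sample:
        source = source_lookup(row["doc_id"])  # e.g. pull the invoice or shipping document
        for field in key_fields:
            source_value = None if source is None else source.get(field)
            if source is None or row.get(field) != source_value:
                exceptions.append((row["doc_id"], field, row.get(field), source_value))
    return exceptions
```

The point of the structure is that every key field is compared for every sampled item; an empty exceptions list means the sampled report data agreed with the evidence, field by field.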
Other Considerations
Though the standard has not changed, with experience and learning, the industry has become more conscious of C&A. While everything falls under the realm of either testing controls or substantively testing C&A, some specific considerations include:
- ICFR audits: In an ICFR audit, all information that is used in controls must specifically have internal controls that ensure completeness and accuracy. For the substantive portion of the audit (and/or for financial statement audits only), engagement teams have the choice of either identifying controls over C&A or substantively testing information for C&A.
- Data fields: Often, teams will review information directly within the system through remote screen-sharing with a client or through direct observation at a client’s desk. This is still “data,” and although it is not a typical report, the engagement team should consider the controls in place from origination to observation of the data in the system.
- ITGCs: If ITGCs fail, teams will often have no choice but to test IPE through substantive means as ITGCs are the foundation for all other controls involving systems.
- Service Providers: Although service providers are technically “external” to a company, service providers are viewed as an extension of the company’s system of internal control, and thus reports from service providers are still considered IPE. This means that teams must either identify and test controls over the IPE or substantively test the IPE. If teams obtain SSAE 18 reports (i.e. SOC 1) over service providers, they cannot automatically assume that the C&A of reports is covered by the SSAE 18 report (most often the reports are not covered, but rather are mentioned in the complementary user entity controls). Teams must specifically review the SSAE 18 report and ensure the control objectives adequately address C&A of reporting.
- FS and IT Auditors: Most teams consider IPE testing to fall within the realm of IT auditors. Certainly, there are numerous IT controls such as ITGCs and automated controls that require the assistance of IT auditors, but teams must begin to shift perspectives and understand that IPE testing is a collaborative effort between FS and IT auditors. FS auditors often understand the purpose / use of a report (including the key data) and thus can understand the risk profile linked to a report. When testing controls, it is not just IT controls, but rather a suite of controls including business-process-level controls. We always encourage teams to engage IT auditors to assist with the testing as IT auditors have the competencies to test system controls, but it is not solely an IT auditor responsibility; integrate the testing!
Key Takeaways
- C&A continues to be a recurring finding in the audit industry. Information produced by the entity (including service providers) must be tested for C&A either by testing controls or through substantive testing.
- When testing controls, consider the entire information pipeline and identify the key controls from origination through to reporting. This includes controls over the source inputs, systems, interfaces, and data warehouses.
- All reports (yes, even standard reports) require some testing to understand the report configuration, controls around change management and data integrity once the report is generated.
- While controls can be burdensome, substantive testing of C&A can be equally challenging. Although there are multiple ways to test completeness and accuracy, a key consideration is that each and every report must be substantively tested each and every time it is run or generated from a system.
- Reports from service providers are still considered IPE even though they are technically “external” to the company.
- IPE testing is NOT just an IT auditor’s responsibility. There are IT components for which an FS auditor may not be suitably equipped to test, but IPE testing is a collaborative effort and audit teams need to be integrating FS and IT auditor knowledge, risks, and competencies to design an effective and efficient approach to testing IPE.
