What Are The Classes of Input Control? Explain Each

The document discusses the classes of input controls and the common approaches to auditing computer applications. It describes six classes of input controls: source document controls, data coding controls, batch controls, validation controls, input error correction, and generalized data input systems. It then explains the black-box and white-box approaches to auditing computer applications: black-box testing evaluates functional requirements without detailed knowledge of internal logic, while white-box testing uses an in-depth understanding of internal logic to test application controls directly. It closes by describing the five major components of a generalized data input system (GDIS).

1. What are the classes of input control? Explain each.

Classes of Input Control. For presentation convenience and to provide structure
to this discussion, we have divided input controls into the following broad
classes:
• Source document controls
• Data coding controls
• Batch controls
• Validation controls
• Input error correction
• Generalized data input systems

These control classes are not mutually exclusive divisions. Some control
techniques that we shall examine could fit logically into more than one class.
 Source Document Controls
Careful control must be exercised over physical source documents in systems
that use them to initiate transactions. Source document fraud can be used to
remove assets from the organization. For example, an individual with access
to purchase orders and receiving reports could fabricate a purchase
transaction to a nonexistent supplier. If these documents are entered into the
data processing stream, along with a fabricated vendor’s invoice, the system
could process these documents as if a legitimate transaction had taken place.
In the absence of other compensating controls to detect this type of fraud, the
system would create an account payable and subsequently write a check in
payment.

 Data Coding Controls


Coding controls are checks on the integrity of data codes used in processing.
A customer’s account number, an inventory item number, and a chart of
accounts number are all examples of data codes. Three types of errors can
corrupt data codes and cause processing errors: transcription errors, single
transposition errors, and multiple transposition errors. Transcription errors fall
into three classes:
• Addition errors occur when an extra digit or character is added to the code.
For example, inventory item number 83276 is recorded as 832766.
• Truncation errors occur when a digit or character is removed from the end of
a code. In this type of error, the inventory item above would be recorded as
8327.
• Substitution errors are the replacement of one digit in a code with another.
For example, code number 83276 is recorded as 83266.
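Transposition errors, the other two error types named above, occur when digits are reversed: a single transposition reverses two adjacent digits (83276 recorded as 38276), while a multiple transposition reverses nonadjacent digits (83276 recorded as 87236). A common data coding control for catching transcription and single transposition errors is a check digit appended to the code. The sketch below is a minimal illustration of a modulus 11 check digit; the weighting scheme and the example usage are assumptions for illustration, not taken from the text above.

```python
def mod11_check_digit(code: str) -> str:
    """Compute a modulus 11 check digit for a numeric code.

    Weights 2, 3, 4, ... are applied from the rightmost digit leftward
    (a common convention; the exact weighting scheme is an assumption here).
    """
    total = sum(int(d) * w for d, w in zip(reversed(code), range(2, 2 + len(code))))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)  # a check value of 10 is often written as 'X'


def is_valid(code_with_check: str) -> bool:
    """Validate a code whose last character is its check digit."""
    code, check = code_with_check[:-1], code_with_check[-1]
    return mod11_check_digit(code) == check


if __name__ == "__main__":
    code = "83276"
    coded = code + mod11_check_digit(code)     # code as it would be stored, with check digit
    print(is_valid(coded))                     # True  - correct code
    print(is_valid("38276" + coded[-1]))       # False - single transposition detected
    print(is_valid("83266" + coded[-1]))       # False - substitution detected
```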

 Batch Controls
Batch controls are an effective method of managing high volumes of
transaction data through a system. The objective of batch control is to
reconcile output produced by the system with the input originally entered into
the system. This provides assurance that:
• All records in the batch are processed.
• No records are processed more than once.
• An audit trail of transactions is created from input through processing to the
output stage of the system.
Batch control is not exclusively an input control technique. Controlling the batch
continues through all phases of the system. We are treating this topic here
because batch control is initiated at the input stage.
Achieving batch control objectives requires grouping similar types of input
transactions (such as sales orders) together in batches and then controlling
the batches throughout data processing. Two documents are used to
accomplish this task: a batch transmittal sheet and a batch control log. Figure
7.1 shows an example of a batch transmittal sheet. The batch transmittal
sheet captures relevant information such as the following about the batch (a
sketch computing these figures follows the list).
• A unique batch number
• A batch date
• A transaction code (indicating the type of transactions, such as a sales order
or cash receipt)
• The number of records in the batch (record count).
• The total dollar value of a financial field (batch control total).
• The total of a unique non-financial field (hash total).
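As a hedged illustration of how these batch figures might be derived, the sketch below computes a record count, a batch control total over a dollar-amount field, and a hash total over a non-financial field (here, customer account numbers). The record layout and field names are assumptions for the example, not taken from Figure 7.1.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SalesOrder:
    account_number: int   # non-financial field used for the hash total
    amount: float         # financial field used for the batch control total

def build_transmittal_sheet(batch_number: str, orders: list[SalesOrder]) -> dict:
    """Summarize a batch of sales orders the way a batch transmittal sheet would."""
    return {
        "batch_number": batch_number,
        "batch_date": date.today().isoformat(),
        "transaction_code": "SO",                                  # sales order batch
        "record_count": len(orders),
        "control_total": round(sum(o.amount for o in orders), 2),  # total dollar value
        "hash_total": sum(o.account_number for o in orders),       # meaningless sum, used only for control
    }

if __name__ == "__main__":
    batch = [SalesOrder(10234, 1500.00), SalesOrder(10871, 245.50), SalesOrder(10492, 99.95)]
    print(build_transmittal_sheet("B-0001", batch))
```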

 Validation Controls
Input validation controls are intended to detect errors in transaction data
before the data are processed. Validation procedures are most effective when
they are performed as close to the source of the transaction as possible.
However, depending on the type of technology in use, input validation may
occur at various points in the system. For example, some validation
procedures require referencing the current master file.
Systems using real-time processing or batch processing with direct access
master files can validate data at the input stage. Figure 7.4(a) and (b)
illustrate these techniques.
If the system uses batch processing with sequential files, the
transaction records being validated must first be sorted in the same order as
the master file. Validating at the data input stage in this case may require
considerable additional processing. Therefore, as a practical matter, some
validation procedures are performed by each processing module prior to
updating the master file record. This approach is shown in Figure 7.5.
The problem with this technique is that a transaction may be partially
processed before data errors are detected. Dealing with a partially complete
transaction will require special error-handling procedures. We shall discuss
error-handling controls later in this section.
There are three levels of input validation controls:
1. Field interrogation
2. Record interrogation
3. File interrogation
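Field interrogation, the first of these levels, examines individual fields for missing data, non-numeric characters in numeric fields, and values outside allowed limits. The sketch below illustrates such checks; the record layout, field names, and tolerances are assumptions for the example.

```python
def interrogate_field(record: dict) -> list[str]:
    """Apply simple field-interrogation checks to one transaction record.

    Returns a list of error messages; an empty list means the fields passed
    these checks. Field names and limits are assumptions for illustration.
    """
    errors = []

    # Missing data check: required fields must be present and non-blank.
    for field in ("account_number", "quantity", "unit_price"):
        if not str(record.get(field, "")).strip():
            errors.append(f"missing data in {field}")

    # Numeric data check: quantity must be numeric.
    if not str(record.get("quantity", "")).isdigit():
        errors.append("quantity is not numeric")

    # Limit check: quantity must not exceed an assumed upper limit.
    elif int(record["quantity"]) > 1000:
        errors.append("quantity exceeds limit of 1000")

    return errors


if __name__ == "__main__":
    print(interrogate_field({"account_number": "10234", "quantity": "12", "unit_price": "9.95"}))  # []
    print(interrogate_field({"account_number": "", "quantity": "12x", "unit_price": "9.95"}))      # two errors
```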

 Input Error Correction


When errors are detected in a batch, they must be corrected and the records
resubmitted for reprocessing. This must be a controlled process to ensure that
errors are dealt with completely and correctly. There are three common error
handling techniques: (1) correct immediately, (2) create an error file, and (3)
reject the entire batch.
Correct Immediately. If the system is using the direct data validation approach
(refer to Figure 7.4(a) and (b)), error detection and correction can also take place
during data entry. Upon detecting a keystroke error or an illogical relationship,
the system should halt the data entry procedure until the user corrects the
error.
Create an Error File. When delayed validation is being used, such as in batch
systems with sequential files, individual errors should be flagged to prevent
them from being processed. At the end of the validation procedure, the
records flagged as errors are removed from the batch and placed in a
temporary error holding file until the errors can be investigated.
Reject the Batch. Some forms of errors are associated with the entire batch
and are not clearly attributable to individual records. An example of this type
of error is an imbalance in a batch control total. Assume that the transmittal
sheet for a batch of sales orders shows a total sales value of $122,674.87, but
the data input procedure calculated a sales total of only $121,454.32. What
has caused this? Is the problem a missing or changed record? Or did the data
control clerk incorrectly calculate the batch control total? The most effective
solution in this case is to cease processing and return the entire batch to data
control to evaluate, correct, and resubmit.
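A hedged sketch of the reject-the-batch decision follows: the figures recomputed at data input are compared with the transmittal sheet, and any imbalance causes the entire batch to be returned to data control. The function and field names are assumptions; the dollar amounts echo the example above.

```python
def reconcile_batch(transmittal: dict, records: list[dict]) -> tuple[bool, list[str]]:
    """Compare recomputed batch figures with the transmittal sheet.

    Returns (accepted, discrepancies). Any discrepancy means the entire batch
    should be returned to data control for correction and resubmission.
    """
    discrepancies = []

    if len(records) != transmittal["record_count"]:
        discrepancies.append(
            f"record count {len(records)} != transmittal {transmittal['record_count']}"
        )

    computed_total = round(sum(r["amount"] for r in records), 2)
    if computed_total != transmittal["control_total"]:
        discrepancies.append(
            f"control total {computed_total} != transmittal {transmittal['control_total']}"
        )

    return (not discrepancies, discrepancies)


if __name__ == "__main__":
    transmittal = {"record_count": 3, "control_total": 122674.87}
    records = [{"amount": 60000.00}, {"amount": 40000.00}, {"amount": 21454.32}]
    accepted, problems = reconcile_batch(transmittal, records)
    print(accepted)   # False - the batch is rejected
    print(problems)   # control total 121454.32 != transmittal 122674.87
```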

 Generalized Data Input Systems


To achieve a high degree of control and standardization over input validation
procedures, some organizations employ a generalized data input system
(GDIS). This technique includes centralized procedures to manage the data
input for all of the organization’s transaction processing systems. The GDIS
approach has three advantages. First, it improves control by having one
common system perform all data validation. Second, GDIS ensures that each
AIS application applies a consistent standard for data validation. Third, GDIS
improves systems development efficiency. Given the high degree of
commonality in input validation requirements for AIS applications, a GDIS
eliminates the need to recreate redundant routines for each new application.
Figure 7.9 shows the primary features of this technique. A GDIS has five
major components:
1. Generalized validation module
2. Validated data file
3. Error file
4. Error reports
5. Transaction log

2. What are the common approaches used in auditing computer
applications? Explain each.
This section examines several techniques for auditing computer applications.
Control testing techniques provide information about the accuracy and
completeness of an application’s processes. These tests follow two general
approaches: (1) the black box (around the computer) approach and (2) the
white box (through the computer) approach. We first examine the black box
approach and then review several white box testing techniques.
 Black-Box Approach
Auditors testing with the black-box approach do not rely on a detailed
knowledge of the application’s internal logic. Instead, they seek to understand
the functional characteristics of the application by analyzing flowcharts and
interviewing knowledgeable personnel in the client’s organization. With an
understanding of what the application is supposed to do, the auditor tests the
application by reconciling production input transactions processed by the
application with output results. The output results are analyzed to verify the
application’s compliance with its functional requirements.
The advantage of the black-box approach is that the application need
not be removed from service and tested directly. This approach is feasible for
testing applications that are relatively simple. However, complex applications
—those that receive input from many sources, perform a variety of operations,
or produce multiple outputs—require a more focused testing approach to
provide the auditor with evidence of application integrity.
 White-Box Approach
The white-box approach relies on an in-depth understanding of the internal
logic of the application being tested. The white-box approach includes several
techniques for testing application logic directly. These techniques use small
numbers of specially created test transactions to verify specific aspects of an
application’s logic and controls. In this way, auditors are able to conduct
precise tests, with known variables, and obtain results that they can compare
against objectively calculated results. Some of the more common types of
tests of controls include the following:
• Authenticity tests, which verify that an individual, a programmed
procedure, or a message (such as an EDI transmission) attempting to access
a system is authentic. Authenticity controls include user IDs, passwords, valid
vendor codes, and authority tables.
• Accuracy tests, which ensure that the system processes only data values
that conform to specified tolerances. Examples include range tests, field tests,
and limit tests.
• Completeness tests, which identify missing data within a single record and
entire records missing from a batch. The types of tests performed are field
tests, record sequence tests, hash totals, and control totals.
• Redundancy tests, which determine that an application processes each
record only once. Redundancy controls include the reconciliation of batch
totals, record counts, hash totals, and financial control totals.
• Access tests, which ensure that the application prevents authorized users
from unauthorized access to data. Access controls include passwords,
authority tables, user-defined procedures, data encryption, and inference
controls.
• Audit trail tests, which ensure that the application creates an adequate
audit trail. This includes evidence that the application records all transactions
in a transaction log, posts data values to the appropriate accounts, produces
complete transaction listings, and generates error files and reports for all
exceptions.
• Rounding error tests, which verify the correctness of rounding procedures.
Rounding errors occur in accounting information when the level of precision
used in the calculation is greater than that used in the reporting. For example,
interest calculations on bank account balances may have a precision of five
decimal places, whereas only two decimal places are needed to report
balances. If the remaining three decimal places are simply dropped, the total
interest calculated for the total number of accounts may not equal the sum of
the individual calculations.
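The rounding problem described above can be demonstrated with a short sketch: interest is computed to five decimal places per account, but only two decimal places are reported, so the sum of the truncated per-account figures drifts away from the correctly rounded total. The balances and rate below are assumed values for illustration.

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

RATE = Decimal("0.03875")          # assumed annual interest rate
balances = [Decimal("1033.17"), Decimal("2589.44"), Decimal("877.09"), Decimal("4410.50")]

# Interest per account calculated to five decimal places of precision.
precise = [(b * RATE).quantize(Decimal("0.00001")) for b in balances]

# Reported per-account interest: the extra three decimal places are simply dropped.
truncated = [p.quantize(Decimal("0.01"), rounding=ROUND_DOWN) for p in precise]

total_from_precise = sum(precise).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
total_from_truncated = sum(truncated)

print("sum of precise interest, rounded once:", total_from_precise)
print("sum of truncated per-account figures :", total_from_truncated)
print("difference:", total_from_precise - total_from_truncated)
```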

3. What are the five major components of GDIS? Explain each.


A GDIS has five major components:
1. Generalized validation module
2. Validated data file
3. Error file
4. Error reports
5. Transaction log

Generalized Validation Module. The generalized validation module (GVM)
performs standard validation routines that are common to many different
applications. These routines are customized to an individual application’s
needs through parameters that specify the program’s specific requirements.
For example, the GVM may apply a range check to the HOURLY RATE field
of payroll records. The limits of the range are $6 and $15. The
range test is the generalized procedure; the dollar limits are the parameters
that customize this procedure. The validation procedures for some
applications may be so unique as to defy a general solution. To meet the
goals of the generalized data input system, the GVM must be flexible enough
to permit special user-defined procedures for unique applications. These
procedures are stored, along with generalized procedures, and invoked by the
GVM as needed.
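A hedged sketch of how a generalized validation routine might be parameterized follows, using the HOURLY RATE range check from the text ($6 to $15) as the application-supplied parameters. The structure, names, and extra missing-data routine are assumptions for illustration, not the textbook's implementation.

```python
from typing import Callable

# Generalized routines: each takes a field value plus the parameters supplied by the application.
def range_check(value, params) -> bool:
    return value is not None and params["low"] <= value <= params["high"]

def missing_data_check(value, params) -> bool:
    return value is not None and str(value).strip() != ""

GENERALIZED_ROUTINES: dict[str, Callable] = {
    "range": range_check,
    "missing": missing_data_check,
}

# Parameters that customize the generalized routines for a payroll application.
PAYROLL_PARAMETERS = [
    {"field": "HOURLY_RATE", "routine": "range", "low": 6, "high": 15},
    {"field": "EMPLOYEE_ID", "routine": "missing"},
]

def validate(record: dict, parameters: list[dict]) -> list[str]:
    """Run each parameterized routine against its field and collect error messages."""
    errors = []
    for p in parameters:
        routine = GENERALIZED_ROUTINES[p["routine"]]
        if not routine(record.get(p["field"]), p):
            errors.append(f"{p['field']} failed {p['routine']} check")
    return errors

if __name__ == "__main__":
    print(validate({"EMPLOYEE_ID": "E-104", "HOURLY_RATE": 12.50}, PAYROLL_PARAMETERS))  # []
    print(validate({"EMPLOYEE_ID": "E-105", "HOURLY_RATE": 22.00}, PAYROLL_PARAMETERS))  # rate out of range
```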
Validated Data File. The input data that are validated by the GVM are stored
on a validated data file. This is a temporary holding file through which
validated transactions flow to their respective applications. The file is
analogous to a tank of water whose level is constantly changing, as it is filled
from the top by the GVM and emptied from the bottom by applications.
Error File. The error file in the GDIS plays the same role as a traditional error
file. Error records detected during validation are stored in the file, corrected,
and then resubmitted to the GVM.
Error Reports. Standardized error reports are distributed to users to facilitate
error correction. For example, if the HOURLY RATE field in a payroll record
fails a range check, the error report will display an error message stating the
problem. The report will also present the contents of the failed record,
along with the acceptable range limits taken from the parameters.
Transaction Log. The transaction log is a permanent record of all validated
transactions. From an accounting records point of view, the transaction log is
equivalent to the journal and is an important element in the audit trail.
However, only successful transactions (those that will be completely
processed) should be entered in the journal. If a transaction is to undergo
additional validation testing during the processing phase (which could result in
its rejection), it should be entered in the transaction log only after it is
completely validated. This issue is discussed further in the next section under
Audit Trail Controls.
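As a closing sketch (an assumption about structure, not the textbook's design), the snippet below appends a transaction to a permanent log only after it has passed every validation step, keeping the log equivalent to a journal of successful transactions.

```python
import json
from datetime import datetime, timezone

def log_transaction(logfile: str, transaction: dict, validation_errors: list[str]) -> bool:
    """Append the transaction to the permanent log only if it is fully validated.

    Rejected transactions never reach the log; in a GDIS they would go to the
    error file instead. Returns True when the transaction was logged.
    """
    if validation_errors:          # still subject to correction and resubmission
        return False
    entry = {
        "logged_at": datetime.now(timezone.utc).isoformat(),
        "transaction": transaction,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return True

if __name__ == "__main__":
    ok = log_transaction("transaction_log.jsonl", {"type": "SO", "amount": 245.50}, [])
    print("logged:", ok)   # True - the validated transaction enters the journal
```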
