Technology keeps evolving, and each emerging technology brings associated risks that, without adequate security and controls in place, can erode its overall benefits.
There is a lot of buzz around artificial intelligence (AI) systems, which underpin much of modern machine learning
and are considered the future of complex decision-making. They have various use cases such as fraud
detection, virtual customer assistance, natural language processing, automation of business
processes, etc.; hence the need to audit them to confirm that the controls employed by
organizations are well implemented and operating effectively at all times.
As with auditing other technologies, it is best to take a risk-based approach when auditing AI:
start by defining the scope and objectives of the audit, as well as the risks the use of AI poses to the
business. The identified AI risks and the related controls in place should then be
documented in a risk and control matrix (RCM).
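As a minimal sketch of what such documentation might look like in structured form (the field names, risk IDs, and control descriptions below are illustrative assumptions, not a prescribed standard), an RCM can be represented as a list of records that an auditor can walk mechanically:

```python
# Hypothetical risk-and-control-matrix (RCM) entries for an AI audit.
# Every field value here is illustrative, not taken from any standard.
rcm = [
    {
        "risk_id": "AI-01",
        "risk": "Poor training data quality leads to flawed predictions",
        "control": "Automated data-quality checks run before each training cycle",
        "control_owner": "Data Engineering",
        "test_procedure": "Inspect validation reports for the last three cycles",
    },
    {
        "risk_id": "AI-02",
        "risk": "Model outputs drift over time without detection",
        "control": "Monthly output monitoring against a baseline sample",
        "control_owner": "Model Risk Management",
        "test_procedure": "Re-perform the drift comparison for one month",
    },
]

# The auditor can then confirm every documented risk maps to a control.
unmapped = [entry["risk_id"] for entry in rcm if not entry["control"]]
print(unmapped)  # an empty list means every risk has a mapped control
```

Keeping the RCM in a machine-readable form like this also makes it easy to check completeness as new AI risks are identified.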
This article discusses risk areas auditors need to explore when auditing artificial intelligence:
- Data-Related Risk: There is a risk of poor data quality, such as erroneous or
unsuitable data, incomplete or inaccurate records, wrong data context, or stale data, which
limits the system's ability to learn and adversely impacts its performance.
These deficiencies may result in a failure to achieve the planned objectives and/or give
rise to flawed or poor forecasts.
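The data-quality dimensions above (completeness, accuracy, context, staleness) can be tested mechanically. The sketch below is one hedged illustration of such checks; the record fields, thresholds, and reference values are all assumptions made for the example:

```python
# Minimal sketch of automated data-quality checks: completeness, range
# (accuracy), reference-data validity, and staleness. All field names and
# thresholds are illustrative assumptions.
from datetime import date

records = [
    {"amount": 120.0, "country": "NG", "as_of": date(2023, 1, 5)},
    {"amount": None,  "country": "NG", "as_of": date(2023, 1, 5)},   # incomplete
    {"amount": -50.0, "country": "XX", "as_of": date(2019, 6, 1)},   # bad range, stale
]

def quality_issues(rec, today=date(2023, 1, 31), max_age_days=365):
    issues = []
    if rec["amount"] is None:
        issues.append("missing amount")                  # completeness
    elif rec["amount"] < 0:
        issues.append("amount out of expected range")    # accuracy
    if rec["country"] not in {"NG", "GH", "KE"}:         # assumed reference list
        issues.append("unknown country code")
    if (today - rec["as_of"]).days > max_age_days:
        issues.append("stale record")                    # staleness
    return issues

report = [quality_issues(r) for r in records]
print(report)
```

An auditor could request evidence that checks of this kind run before each training cycle and review the resulting exception reports.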
Also, there is a learning-limitation risk: in contrast to humans, artificial intelligence systems
lack context and judgment for many of the situations in which they are used. An AI/ML
system's effectiveness depends heavily on the data used to train it and the range of situations
taken into account, and it is usually not possible to train the AI system on every
scenario and piece of information. Lack of context, poor judgment, and general learning
constraints could significantly influence risk-based review and discussions of strategic
deployment.
- Testing and Trust: As an emerging technology, how an AI system evolves over time
may vary depending on its implementation and use case. Some types of AI could produce
complications that accumulate, change, or worsen over time, while ML models may be
sensitive to environmental changes that affect how well they function. The following
risks are inherent to AI as a new technology:
a. Lack of Transparency: As a new technology, AI systems face trust concerns arising
from limited public awareness of the technology and a lack of basic comprehension.
There are so many misconceptions about AI systems that it is challenging to fully evaluate
systems that are difficult to comprehend.
Since making predictions is often the main goal of AI systems, their algorithms are
sometimes so complicated that even their developers struggle to fully understand how the
input factors combine to produce the final prediction. Due to this lack of transparency, certain
algorithms are referred to as “black boxes,” and legislative authorities are now starting to
look at what safeguards may be necessary. Companies also risk being
unable to provide an explanation for a decision made based on the forecast of an AI/ML system.
b. Incorrect Output: Compared to traditional systems, testing and validating AI/ML systems
may present challenges, since certain AI/ML systems are inherently dynamic, prone to change
over time, and may consequently modify their outputs. It might not be feasible to test
all situations, permutations, and combinations of the available data, which can
result in coverage gaps. How serious these gaps are will differ from system to system
and from application to application.
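One way an auditor might probe a dynamic system's outputs is a drift check: re-score a fixed validation set and compare against outputs recorded at the last review. The sketch below is a toy illustration under assumed values; the model, baseline, and tolerance are all hypothetical:

```python
# Toy drift check for a model whose outputs may change over time.
# The model function, baseline values, and tolerance are illustrative.
def model_score(x):
    # stand-in for the deployed model's prediction on input x
    return 0.3 * x + 0.1

# outputs recorded on a fixed validation set at the time of the last review
baseline = {1: 0.4, 2: 0.7, 3: 1.0}

def drift(tolerance=0.05):
    # return the inputs whose current output moved beyond the tolerance
    return {x: abs(model_score(x) - y)
            for x, y in baseline.items()
            if abs(model_score(x) - y) > tolerance}

print(drift())  # an empty dict means no drift beyond the tolerance
```

The choice of tolerance is itself a control-design decision the auditor should evaluate, since a threshold that is too loose can mask meaningful output changes.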
c. Bias: Bias in decision-making systems is one of the more dangerous concerns artificial
intelligence poses. The dataset used to train an AI system may contain biases or
assumptions depending on how it was assembled, and since the system learns from this
dataset, decisions made by the system may be affected by those biases.
AI systems could potentially amplify risks relating to unfairly biased outcomes or
discrimination. Furthermore, AI-driven unfairly biased outcomes could have privacy-compliance
implications; constitute regulatory, litigation, and reputational risk; impact
operations; and result in customer dissatisfaction and attrition.
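A simple quantitative test an auditor could apply here is to compare favorable-outcome rates across groups in a sample of the system's decisions (a demographic-parity style check). The decision data and group labels below are fabricated for illustration:

```python
# Illustrative bias check: compare approval rates across two groups in a
# sample of model decisions. The data is fabricated for demonstration;
# 1 = approved, 0 = declined.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def approval_rate(group):
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# a large gap between groups would warrant investigation of the training data
gap = abs(approval_rate("group_a") - approval_rate("group_b"))
print(gap)  # 0.5
```

A single metric like this does not prove or disprove bias on its own, but a persistent gap is a flag for the auditor to trace back into how the training dataset was assembled.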
- Potential Artificial Intelligence/Machine Learning Attacks: Research on AI
and ML over the years has identified potential attacks against AI/ML systems, which are
grouped into three categories: data privacy attacks, data poisoning,
and model extraction.
a. Data Privacy Attacks: In data privacy attacks, the privacy of the training data may be
jeopardized because an attacker may be able to deduce the data set used to train the model. By
examining the model's parameters or querying the model, an adversary may be able to infer
private information from the training data set. Model inversion attacks and membership
inference attacks are two important forms of this threat.
Because the AI algorithm depends on data, the risk of exposing consumer or employee data
is inherent, and protecting personal privacy becomes difficult. Since many
legislative bodies are now implementing legislation that limits how personal data may be
used, a data leak or breach can seriously harm a company's
reputation and brand and may violate the law.
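The membership-inference idea mentioned above can be illustrated with a toy sketch: a model that is systematically more confident on examples it was trained on leaks whether a given record was a member of the training set. The confidence values and threshold below are fabricated for demonstration:

```python
# Toy illustration of membership inference: the attacker exploits the fact
# that many models are more confident on training-set members. Confidence
# values and the threshold are fabricated for this example.
confidences = {
    "record_in_training": 0.98,      # model has effectively memorized it
    "record_not_in_training": 0.61,  # model is less certain on unseen data
}

def infer_membership(confidence, threshold=0.9):
    # the attacker guesses "member" whenever the model is unusually confident
    return confidence > threshold

print({name: infer_membership(c) for name, c in confidences.items()})
```

Real attacks use a trained attack model rather than a fixed threshold, but the underlying signal, the confidence gap between seen and unseen data, is the same.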
b. Training Data Poisoning: This is the contamination of data used to train the AI/ML
system, which can negatively affect its learning process or output. Data poisoning could
increase the error rate of the AI/ML system or influence the
retraining process. Some of the attacks in this category are known as “label-flipping” and
“frog-boil” attacks.
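The label-flipping variant can be sketched in a few lines: an adversary who can tamper with the training data flips a fraction of the labels, degrading whatever model is later trained on it. The dataset and flip rate below are illustrative:

```python
# Toy sketch of a label-flipping poisoning attack on a binary-labeled
# training set. The dataset, flip fraction, and seed are illustrative.
import random

def flip_labels(dataset, fraction, seed=0):
    rng = random.Random(seed)
    poisoned = list(dataset)
    k = int(len(poisoned) * fraction)
    for i in rng.sample(range(len(poisoned)), k):
        x, y = poisoned[i]
        poisoned[i] = (x, 1 - y)   # flip the binary label, keep the features
    return poisoned

clean = [(i, i % 2) for i in range(10)]       # (feature, label) pairs
poisoned = flip_labels(clean, fraction=0.3)
changed = sum(1 for a, b in zip(clean, poisoned) if a != b)
print(changed)  # 3 of 10 labels flipped
```

From an audit perspective, the relevant controls are integrity protections and provenance checks on the training pipeline, since the features look untouched and only the labels are corrupted.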
c. Model Extraction: In this attack, an attacker attempts to steal the model itself. AI/ML
model extraction attacks may be the most impactful, since the stolen model might be used
as a “tool” to introduce new risks. Research on these attacks shows that extraction can
take place without requiring a high degree of technical sophistication and can be
completed quickly if the model can be queried indefinitely.
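The query-based extraction described above can be illustrated with a deliberately simple victim: a linear model can be reconstructed exactly from two queries. Real models need far more queries, but the pattern, query the black box, then fit a surrogate to the responses, is the same. Everything below is a toy assumption:

```python
# Toy sketch of model extraction by querying. The victim is a simple linear
# function, so two queries recover it exactly; real attacks need many more
# queries but follow the same query-then-fit pattern.
def victim(x):
    # black-box model: the attacker can only observe inputs and outputs
    return 2.0 * x + 1.0

# attacker queries two points and solves for slope and intercept
x0, x1 = 0.0, 1.0
y0, y1 = victim(x0), victim(x1)
slope = (y1 - y0) / (x1 - x0)
intercept = y0 - slope * x0

def stolen(x):
    # the attacker's local surrogate, usable offline as a "tool"
    return slope * x + intercept

print(stolen(10.0) == victim(10.0))  # surrogate reproduces the victim
```

This is why the research finding about unlimited querying matters: rate limits, query monitoring, and anomaly detection on prediction APIs are the natural controls for an auditor to look for.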
- Compliance Issues: As AI systems grow within businesses, the impact of AI implementations
on existing corporate policies should be examined. Regulatory bodies are becoming more interested in
AI deployment across sectors, especially the finance industry. Regulators around
the world have organized working groups to explore the supervisory challenges posed by
emerging technologies, which has resulted in the release of recommendations, white papers, and
surveys. This interest stems from the recognition that AI/ML presents new issues, and readers
should consider how legislation may affect the usage and governance of AI/ML.
Having understood all these risks, the onus lies with the auditor to understand the design of the controls that have
been put in place to mitigate them, and to test both their implementation and their operating
effectiveness.