Maximizing AI Investments While Maintaining Essential Controls Hinges On The CFO
As organizations race to achieve outsized benefits from artificial intelligence (AI), CFOs must address a frequently overlooked driver of optimal AI returns: internal control structures.
AI risk management is not just about avoiding data security and privacy breakdowns, intellectual property (IP) exposure and reputational damage. It’s also about maximizing the upside of AI investments to achieve expected returns. AI-related internal controls foster stakeholder trust and equip the organization to refine, scale or sunset AI tools quickly, according to the value they deliver relative to the risks they create or mitigate. For example, as AI deployments eliminate some job functions and create others, critical controls may be compromised. Conversely, AI use cases can be designed to mitigate certain risks, such as searching for duplicate payments.
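To make the duplicate-payments example concrete, here is a minimal sketch, in Python, of the kind of rule a finance team might automate. The record fields, matching logic and 30-day window are illustrative assumptions; a production control would run against the accounts payable subledger and route exceptions to a reviewer rather than printing them.

```python
# Minimal duplicate-payment check. Field names (vendor_id, invoice_no,
# amount, pay_date) are illustrative, not a prescribed schema.
from dataclasses import dataclass
from datetime import date, timedelta
from itertools import combinations

@dataclass(frozen=True)
class Payment:
    vendor_id: str
    invoice_no: str
    amount: float
    pay_date: date

def likely_duplicates(payments, window_days=30):
    """Flag payment pairs to the same vendor for the same amount made
    within a short window -- a common duplicate-payment pattern."""
    window = timedelta(days=window_days)
    return [
        (a, b)
        for a, b in combinations(payments, 2)
        if a.vendor_id == b.vendor_id
        and a.amount == b.amount
        and abs(a.pay_date - b.pay_date) <= window
    ]

payments = [
    Payment("V100", "INV-881", 12500.00, date(2025, 3, 3)),
    Payment("V100", "INV-881A", 12500.00, date(2025, 3, 17)),  # suspect
    Payment("V205", "INV-412", 980.00, date(2025, 3, 9)),
]
for a, b in likely_duplicates(payments):
    print(f"Review: {a.invoice_no} vs {b.invoice_no} ({a.vendor_id}, {a.amount})")
```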
What does this mean? The expertise and experience of CFOs and their finance teams with internal controls and enterprise risk management (ERM) frameworks, financial planning and analysis (FP&A), data sourcing, data privacy and security, and investment prioritization make them ideal advocates for sustaining the control structure as they collaborate with their C-suite colleagues on AI strategy, investment, deployment and value assessment. This balancing act involves establishing AI governance (a table-stakes measure at this juncture), adjusting governance measures as AI implementations and use cases progress, and integrating these mechanisms with traditional ERM and control frameworks (think COSO), with the intention of preserving the enterprise’s essential internal controls.
Risks directly associated with AI deployments have been written about ad nauseam. Yet secondary impacts—especially those associated with AI’s transformative effects on jobs and roles—often receive short shrift. Among the many concerns, loss of institutional knowledge and disruptions to controls over segregation of duties due to AI-driven workforce changes loom large as AI agents become an integral part of the workforce.
In a survey of 950 global finance leaders, 45% of respondents reported their companies are employing generative or agentic AI tools without a defined strategy. This suggests that roughly half of organizations forge ahead with AI deployments while assuming their existing control structures will remain sufficient after the resulting AI-driven changes to jobs, roles and processes.
Control consequences
CFOs should advocate for considering implications for the control structure during the planning of AI implementations and before effecting the necessary organizational changes. As past experience with major reallocations of roles and responsibilities has demonstrated time and again, the following control impacts often arise.
- Overreliance on automated controls: AI implementations may create confusion over the responsibilities of AI agents, increasing the likelihood of key controls falling through the cracks. If automated controls fail due to programming errors or poor data governance, issues may go undetected without knowledgeable humans-in-the-loop (HITL) who are empowered to monitor performance and intervene.
- Disruption of segregation of duties: Business and digital transformation initiatives, such as AI-driven programs, often consolidate roles. Having fewer people responsible for more tasks can undermine the principle of segregating duties around authorizing, executing, settling and recording transactions. For example, if AI administers payroll or accounts payable, a human in the loop belongs at the end of the process, at a minimum before cash exits the organization by check or transfer (see the sketch following this list).
- Loss of institutional knowledge: Shifting employees to other roles or job cuts of any kind can lead to the loss of employees who possess in-depth knowledge of control activities, specific risk areas and critical regulatory compliance requirements. For processes highly dependent on experienced talent, this change creates gaps in the execution of risk management and control activities.
- Reduced monitoring and oversight: Changes in the organization affect control processes as well as teams responsible for monitoring, auditing and reviewing control effectiveness. Less-frequent, less-thorough reviews heighten the risk of missing control weaknesses that lead to process failures and deviations.
- Increased workloads and stress on staff: Amid organizational transformation and change, employees must adapt quickly to new systems while handling additional responsibilities. Fatigue and stress can lead to mistakes, cutting corners with established controls, and reducing vigilance in key oversight functions.
- Gaps in training and change management: Rapid AI implementation and concurrent job changes and shifts can leave staff inadequately trained on new processes and controls. Inexperienced team members may misuse new systems or fail to execute manual controls properly.
- Change in the control environment and culture: Any changes in staff, whether through reassignments or reductions, can negatively affect morale and organizational culture around compliance and risk management. Employees may be less motivated to follow procedures or raise concerns, especially if they fear further reductions. As AI systems scale, increased operational complexity may overwhelm existing control frameworks.
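Building on the accounts payable example above, the following minimal sketch shows one way a human-in-the-loop gate might be enforced in code: the preparer, whether a person or an AI agent, cannot also approve the release of funds, and no funds move without an approval on file. The class, names and amounts are hypothetical; this illustrates the control principle, not any particular payment system.

```python
# Minimal human-in-the-loop gate before disbursement. The preparer
# (which may be an AI agent) cannot approve its own payment,
# preserving segregation of duties.
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    payee: str
    amount: float
    prepared_by: str              # e.g., "ap-agent" for an AI agent
    approved_by: str | None = None

    def approve(self, approver: str) -> None:
        if approver == self.prepared_by:
            raise PermissionError("Preparer cannot approve own payment.")
        self.approved_by = approver

    def release_funds(self) -> None:
        if self.approved_by is None:
            raise RuntimeError("Blocked: no human approval on file.")
        print(f"Releasing {self.amount:.2f} to {self.payee} "
              f"(prepared by {self.prepared_by}, approved by {self.approved_by})")

req = PaymentRequest(payee="Acme Supply", amount=48000.00, prepared_by="ap-agent")
req.approve("j.rivera")   # a human approver distinct from the preparer
req.release_funds()
```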
These and other consequences will be familiar to finance leaders who managed organizational responses to the global financial crisis, the pandemic and major technology transformations. However, as boards and leaders are increasingly recognizing, AI implementations are unique given the unprecedented pace and magnitude of the workforce and process changes they may trigger.
The National Association of Corporate Directors’ (NACD’s) guidance on implementing AI governance encourages corporate directors to ask C-suite leaders to incorporate AI-specific risks into ERM frameworks while addressing new AI risks related to unpredictable model performance over time, model opacity and explainability gaps, training data contamination, and unclear IP ownership. The guidance also reports that only 21% of boards have collaborated with management to determine where AI is in use in their companies, suggesting a call to action for directors to increase their visibility into the organization’s AI use and its impact on controls.
To that end, finance leaders should work with their C-suite colleagues to ensure that AI governance structures address the proper use of generative AI, oversight of and accountability for agentic AI performance (including training processes), data security controls, data privacy compliance, IP protection, bias prevention measures, responsible use protocols, intervention protocols, success metrics, human involvement considerations, and other ethical guardrails.
With respect to human involvement, it can take the form of interaction at critical decision points (HITL) or of monitoring the system’s performance and intervening only when necessary (human-on-the-loop). Armed with this knowledge, CFOs should be able to respond to the board’s questions on integrating AI governance with ERM and related control structures.
Internal control advocacy actions
When planning to respond to AI-driven organizational and workforce changes while preventing internal controls from becoming misaligned with newly designed (or obsolete) workflows, CFOs should advocate that the organization undertake the following actions:
- Conduct risk assessments to evaluate which controls are likely affected and identify the new risks introduced by AI, automation and workforce changes. These assessments may be needed before, during and after changes are implemented to identify control vulnerabilities. They can also support assessment of the ROI of AI usage once its impacts become clear.
- Reevaluate and, when necessary, redesign control structures to reflect new staffing realities (e.g., those created by process automation and consolidation of oversight functions). This effort involves testing newly established controls in critical areas before go-live; supporting automated monitoring, which should be implemented whenever possible, with periodic human reviews (a minimal sketch of this exception-based monitoring follows this list); and documenting all changes to key controls to facilitate audits, ongoing risk assessments and reporting.
- Update control frameworks to align with new AI-driven processes to ensure that appropriate segregation of duties and oversight remain intact.
- Train and upskill remaining employees in both the new AI technologies and revised control procedures to prepare them for changes in their roles and responsibilities.
- Communicate the importance of controls and ethical values while reinforcing the organization’s compliance culture as technology evolves.
- Monitor and test key controls during the change process to ensure effectiveness. Finance leaders also should consider increasing the testing frequency on selected controls depending on their importance.
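As a companion to the points above, the sketch below illustrates the human-on-the-loop pattern described earlier: an automated check (here, a hypothetical three-way match between invoice, purchase order and receipt) runs on every cycle, results are logged for periodic review, and only exceptions are escalated to a human. The check, record fields and escalation channel are assumptions for illustration.

```python
# Minimal human-on-the-loop control monitoring: automated checks run
# each cycle; only exceptions reach a human reviewer. Record fields
# and the escalation channel are illustrative.
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

def three_way_match_exceptions(records):
    """Records where invoice, PO and receipt amounts disagree."""
    return [r for r in records if not (r["invoice"] == r["po"] == r["receipt"])]

def run_control_cycle(records, escalate):
    exceptions = three_way_match_exceptions(records)
    logging.info("three-way match: %d records, %d exceptions",
                 len(records), len(exceptions))
    for e in exceptions:
        escalate(e)  # the human reviews only what the automation flags

records = [
    {"id": "INV-1", "invoice": 500.0, "po": 500.0, "receipt": 500.0},
    {"id": "INV-2", "invoice": 750.0, "po": 700.0, "receipt": 700.0},  # mismatch
]
run_control_cycle(records, escalate=lambda e: logging.warning("escalate %s", e["id"]))
```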
Bottom line: Business transformation and the resulting workforce changes, along with fundamental role and process redesigns stemming from AI implementations, can profoundly affect the operation of established internal controls, including AI risk management mechanisms.
While 85% of organizations indicate that their AI investments have met or exceeded expectations, according to Protiviti’s inaugural AI Pulse Survey, it is uncertain whether those expectations extend to post-AI-implementation internal control structures. If the importance of internal controls is not emphasized in the rush to deploy AI, it may be game over from a control effectiveness standpoint. CFOs who emphasize this message clearly to their C-suite colleagues and to the board will help maximize the upside of AI investments over the long haul.