
Jane Ohadike
Lagos — Artificial intelligence (AI) is transforming the accounting and finance professions so rapidly that its benefits, applications and trustworthiness have become subjects of ongoing debate in professional, academic and societal circles.
The topic remains a constant subject of discussion because of the technology’s rapid advancement and its widespread integration into so many aspects of our lives. Early adopters are seizing its significant benefits and opportunities, while others remain cautious, skeptical or even afraid.
One thing is certain: AI is here to stay and will become fully integrated into our lives in ways we have not yet imagined. From improving productivity in data analysis and financial reporting to enhancing risk management and forecasting, AI is steadily embedding itself in core functions, enabling finance professionals to work more efficiently and strategically.
But the future of AI in finance, accounting and auditing is not only about new technologies; it is about reimagining the roles of finance professionals, shifting their focus toward analysis and interpretation, strategy and high-level decision-making.
AI heralds unprecedented advantages, optimising operations and enabling data-driven decision-making. It streamlines transactions, enhances risk assessment and tailors personalised investment strategies, revolutionising the industry’s efficiency.
While AI brings speed, scale and new possibilities, it also brings complexity. The systems behind AI are often opaque, and their decisions hard to trace. It has therefore introduced new dynamics into the traditional, trusted processes that underpin the accountancy profession. Confidence in what AI says is vital to the public interest.
Furthermore, responsible innovation requires balancing AI’s potential against ethical considerations. This is achieved through risk management, transparency and regulatory compliance, which together maintain financial integrity and consumer trust.
Data privacy and security concerns also loom large in the minds of accounting professionals considering AI adoption. AI systems often require access to vast amounts of sensitive financial data, so ensuring the security and privacy of that information is paramount. This concern is particularly acute in regulated sectors, including financial services, where regulatory scrutiny is intense and the consequences of data breaches can be severe.
The effective mitigation of ethical threats from AI requires coordination. Responsibility cannot fall solely on individual users, given the complexity and potentially systemic nature of risks.
This integration challenge is not merely a technical issue, but often requires a rethinking of established processes and workflows. Having an effective governance framework that ensures adherence to regulations in a holistic and interconnected way is essential for AI compliance.
AI governance involves the policies, procedures and ethical principles that control how AI systems are developed, used and managed. Governance frameworks establish clear policies and accountability mechanisms.
However, the establishment of formal AI governance structures is still in its infancy in many organisations, and many are searching for clarity on regulations and standards to guide their approach. At the same time, there is growing recognition in many jurisdictions that organisations will need to define their own boundaries and risk appetites to address gaps or delays in implementation.
When it comes to the adequacy of risk and control measures for AI specifically, organisations clearly have outstanding concerns about their ability to understand and tackle the risks effectively.
Part of the challenge is confirmation bias, which leads individuals to favour information that confirms their existing beliefs, creating “blind spots” by focusing attention on known and quantifiable risks. This is compounded by misalignment and overconfidence, making it difficult to detect emerging risks, particularly those related to AI.
The core challenge is that current risk and control frameworks are not adequately equipped to address the unknown and the rapidly evolving risks posed by advanced technologies like AI.
To overcome this, a more open, collaborative approach across individuals, organisations, regulators, policymakers and professions is needed to address the ethical challenges of AI. There is also a growing question around the alignment between organisations’ approach to risk management and their purpose. Specific risks related to trust – including reputational damage caused by poor alignment – appear to be gaps in current thinking.
For organisations to overcome these challenges and properly address potential risks, they need to be thinking about how they can create a basis for shared knowledge and understanding.
While AI is reshaping the way finance professionals work, it does not change the foundations of our profession: integrity, objectivity and accountability. Strengthening AI literacy ensures we can apply these principles with confidence and safeguard public trust. Furthermore, finance professionals should be skilled in asking the questions that surface AI’s ethical challenges.
Ensuring ethics and responsibility within the AI landscape is critical: errors and inaccuracies that are not caught early can have significant knock-on impacts, and negative consequences can scale quickly. It is therefore the responsibility of every employee.
But to develop and nurture a sense of collective responsibility, it is necessary to have the right policies and training in place.
To this end, leadership plays an important role by exercising informed judgement, steering clear of tech-washing, and embedding ethical AI usage within regulatory frameworks. Core components of this governance include robust data management, vigilant oversight and a well-informed understanding of the many AI vendors.
As we prepare for the Africa Members Convention in December 2025, to be held in Kenya, AI – its governance and ethics and the critical role played by finance professionals – is one of the key agendas for discussion.
By equipping teams with strong AI skills and knowledge, organisations can navigate this rapidly evolving space responsibly, making AI adoption strategic and sustainable.
*Jane Ohadike leads ACCA’s Public Affairs agenda across Africa, driving collaboration with regional institutions, regulators, and experts.


