Sponsored Content

Ensuring responsible use of AI systems through auditing

Baker Tilly continuously evolves its auditing methodologies while building solutions for providing assurance on AI systems.

As artificial intelligence (AI) systems become increasingly integral to core business models across various sectors, including finance, healthcare, technology and human resources, ensuring their transparency, fairness, integrity and reliability is paramount. Auditing AI has emerged as a key mechanism for holding AI systems accountable, mitigating risks and ensuring compliance with ethical and regulatory standards, such as the European Union’s Artificial Intelligence Act.

The integration of AI capabilities into the auditing process offers significant advantages. A recent survey by the International Computer Auditing Education Association (ICAEA) indicates that 69% of global participants exhibit a positive and proactive attitude towards using AI for audit purposes, while 78% of participants consider audit software with AI features the most suitable for leveraging AI technology in audit tasks.

The need for auditing AI systems arises from concerns related to bias, explainability, security and compliance with legal frameworks. Primary reasons for auditing AI systems include:

  1. Bias and Fairness: AI systems can inadvertently amplify biases present in training data, leading to unfair outcomes. Audits help detect and mitigate such biases.
  2. Transparency and Explainability: Many AI models, particularly deep learning systems, function as “black boxes,” making it difficult to understand their decision-making processes. Audits improve transparency by evaluating how models operate.
  3. Security and Robustness: AI systems can be vulnerable to adversarial attacks and data poisoning. Audits assess the resilience of these models against security threats.
  4. Compliance with Regulations: Emerging laws like the EU AI Act and the United States’ Algorithmic Accountability Act necessitate AI audits to ensure adherence to ethical and legal standards.
  5. Trust and Public Confidence: Organizations that implement AI audits demonstrate a commitment to responsible AI usage, fostering trust among users and stakeholders.

Auditing AI can be conducted using various approaches, each suited to different aspects of AI system evaluation. The main approaches include:

  1. Technical Audits: These involve reviewing the AI system’s data, model architecture and algorithmic performance. Methods include bias detection tools, explainability techniques and security testing.
  2. Process Audits: These evaluate the governance processes surrounding AI system development and deployment, ensuring best practices are followed.
  3. Outcome Audits: These analyze the real-world impact of AI decisions by assessing outputs for fairness, accuracy and unintended consequences.
  4. Third-Party Audits: Independent audits conducted by external organizations enhance credibility.
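The bias-detection and outcome-audit approaches above can be illustrated with a minimal fairness check. The sketch below computes a disparate-impact ratio on hypothetical approval decisions for two groups; the data, group labels and the 0.8 threshold (the common "four-fifths" rule of thumb) are illustrative assumptions, not a description of any specific audit methodology:

```python
# Illustrative sketch of a simple fairness metric an AI audit might compute:
# the disparate-impact ratio (selection rate of one group divided by another's).

def selection_rate(decisions):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of group A's selection rate to group B's.

    A common rule of thumb (the "four-fifths rule") treats ratios
    below 0.8 as a potential indicator of adverse impact.
    """
    return selection_rate(group_a) / selection_rate(group_b)

# Hypothetical model outputs: 1 = approved, 0 = denied
group_a = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]  # 30% approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # 70% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate-impact ratio: {ratio:.2f}")  # 0.30 / 0.70 ≈ 0.43
if ratio < 0.8:
    print("Potential adverse impact: flag for deeper review.")
```

A real audit would go well beyond a single metric, combining several fairness measures with explainability and security testing, but even a simple check like this can surface outcomes that warrant closer review.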

AI auditing is crucial for ensuring ethical, fair and responsible AI use. While current approaches provide valuable insights, auditing practices must continue evolving to keep pace with AI advancements. AI itself will also play a central role in shaping the future of financial auditing, bringing greater transparency and trust to financial reporting.

Baker Tilly continuously evolves its auditing methodologies in line with international standards while building solutions for providing assurance on AI systems, drawing on the expertise of its highly experienced professionals and the global Baker Tilly network.

Learn more about Baker Tilly at bakertilly.ca