André Buser

AI Governance & Internal Audit | Applied Data Science | CISA · CISSP · CFE · AAIA™

About Me

I audit AI systems at one of the world’s largest pharmaceutical companies. That means reviewing whether governance frameworks actually hold up under scrutiny, whether controls around GenAI deployments are real or just documented, and whether the risk picture management sees matches what’s in the system.

What makes this different from standard IT audit: I have the technical depth to go beyond document review. My Master of Applied Data Science from Michigan (GPA 3.97) means I can read a model card, trace a training pipeline, and find where governance gaps turn into actual risk, not just check a compliance box. I hold CISA, CISSP, CFE, and ISACA’s Advanced in AI Audit™ (AAIA™).

I also build the tooling. LLM pipelines that generate Audit Planning Memorandums and work programs from source documents, cutting work that used to take days down to hours. An interactive risk visualization that turns an AI risk taxonomy into something you can actually explore. A prompt library for audit workflows. Most of it started as personal experiments. A few ended up in production.

Outside the day job: advisory board member at CISS LTD, and Plateforme Tripartite, which brings industry, academia, and regulators into the same room on the harder AI governance questions. It’s worth the time.

Data Science and Machine Learning Projects

Exploring Microsoft Responsible AI Toolkit

In Progress

Jul 2024 - Present

github.com

Applying Microsoft's Responsible AI toolkit to healthcare ML, focusing on fairness assessment, interpretability, and error analysis.

Exploring and evaluating Microsoft’s Responsible AI toolkit using the scikit-learn Diabetes dataset. The project covers fairness assessment, interpretability techniques, and error analysis in a healthcare ML context. The goal is hands-on experience with the same responsible AI tooling I evaluate in audit work.

Readability Optimizer

In Progress

May 2020 - Present

github.com

Originally developed in my free time to practice Python, Readability Optimizer is a prototype tool designed to help authors improve the readability of their texts.

Started as a Python learning project before my MADS at the University of Michigan. Became the foundation for an in-house audit report quality tool.

The tool imports audit reports, scores each observation paragraph for readability, and flags sections that need rewriting. It runs multiple readability tests (Flesch-Kincaid, Gunning Fog, etc.) and produces actionable suggestions.

Since expanded with LLM-based text improvement recommendations for audit report quality.
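The scoring idea can be sketched in a few lines of stdlib Python. This is an illustrative sketch, not code from the tool: the syllable counter is a naive vowel-group heuristic (real scorers use pronunciation dictionaries), while the constants are the published Flesch-Kincaid grade and Gunning Fog formulas.

```python
import re

def count_syllables(word: str) -> int:
    # Naive vowel-group heuristic; real scorers use pronunciation dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability_scores(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)
    wps = len(words) / len(sentences)  # average words per sentence
    return {
        # Published Flesch-Kincaid grade-level formula
        "flesch_kincaid_grade": 0.39 * wps + 11.8 * syllables / len(words) - 15.59,
        # Published Gunning Fog index
        "gunning_fog": 0.4 * (wps + 100 * complex_words / len(words)),
    }

scores = readability_scores(
    "The auditor reviewed the evidence. The findings were documented clearly."
)
```

A paragraph that scores above a chosen grade-level threshold gets flagged for rewriting.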

Decode Dementia

Oct 2023 - Dec 2023

github.com

Key findings indicate that TBIs, especially those involving loss of consciousness, alongside the presence of the APOE ε4 allele and male gender, significantly increase dementia risk, while higher education levels seem to offer some protective effects.

MADS capstone project investigating what happens after traumatic brain injuries, specifically the link to dementia. We used causal inference methods to trace how TBI severity, genetics (APOE ε4), and demographics interact to affect dementia risk.

The complete report can be accessed here: Decode Dementia Report

Analyze GDPR Fines

Nov 2021 - Dec 2021

github.com

Data visualization of GDPR fines imposed by European data protection authorities (2018-2021). Identifies non-compliance focus areas and patterns relevant to healthcare sector data privacy strategy.

Report: Analyze GDPR Fines

Predicting Text Difficulty

Nov 2022 - Dec 2022

github.com

Our analysis found that the fine-tuned Random Forest model achieved the highest accuracy, 0.7544.

Classifying sentences from Simple English Wikipedia to determine which ones need simplification for readers with lower reading proficiency (students, children, non-native speakers). Used both supervised and unsupervised learning techniques for feature extraction and sentence classification.

The complete report can be accessed here: Predicting Text Difficulty Report

Audit Analytics Tools

Fake Transaction Data Generator

github.com

Realistic synthetic transaction data for audit analytics testing, fraud detection model training, and data science education — without exposing confidential financial records.

A Python-based generator that produces realistic fake financial transaction datasets. Useful for testing audit analytics scripts, training anomaly detection and fraud classification models, and data science education. Lets practitioners work with realistic data structures without confidentiality constraints.
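The core idea can be sketched with the standard library alone. This is a hedged sketch, not code from the repository: vendor names and distribution parameters are made up, and a log-normal draw stands in for the long right tail typical of real spend data.

```python
import random
from datetime import date, timedelta

random.seed(42)  # reproducible synthetic data

VENDORS = ["Acme Supplies", "Globex Ltd", "Initech AG"]  # illustrative names

def generate_transactions(n: int) -> list:
    """Generate n fake transactions with a right-skewed amount distribution."""
    start = date(2024, 1, 1)
    return [
        {
            "txn_id": f"TXN{i:06d}",
            "date": (start + timedelta(days=random.randint(0, 364))).isoformat(),
            "vendor": random.choice(VENDORS),
            # Log-normal gives the long right tail typical of real spend data.
            "amount": round(random.lognormvariate(5, 1.2), 2),
        }
        for i in range(n)
    ]

sample = generate_transactions(1000)
```

Seeding the generator makes test datasets reproducible, which matters when the same data feeds both an analytics script and the test that validates it.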

Applying Data Analysis in Internal Audit

In Progress

Aug 2024 - Present

github.com

Resources and working examples for applying data analysis in internal audit. Includes code, datasets, and visualizations that go beyond the high-level guidance available from IIA and ISACA.
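As one example of the kind of working code this space calls for, here is a stdlib sketch of a Benford's-law first-digit screen over transaction amounts, a classic audit analytics test. The sample amounts are invented, and this is an illustration of the technique, not code from the repository.

```python
import math
from collections import Counter

def first_digit(x: float) -> int:
    """Return the first significant digit of a positive amount."""
    for ch in f"{abs(x):.10f}":
        if ch in "123456789":
            return int(ch)
    raise ValueError("amount has no significant digit")

def benford_deviation(amounts) -> dict:
    """Observed minus Benford-expected leading-digit frequency, digits 1-9."""
    observed = Counter(first_digit(a) for a in amounts)
    n = len(amounts)
    return {
        # Benford's law: P(leading digit = d) = log10(1 + 1/d)
        d: observed.get(d, 0) / n - math.log10(1 + 1 / d)
        for d in range(1, 10)
    }

# Made-up amounts; a real screen would run over a full ledger extract.
deviations = benford_deviation(
    [123.45, 0.87, 9100.0, 45.2, 7.07, 234.0, 18.5, 1.99, 3300.0]
)
```

Large positive deviations for a digit flag transaction populations worth a closer look; they are a screening signal, not proof of fraud.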

Publications

The Importance of AI Use Cases in System Classification and Risk Assessment

Nov 2024

medium.com

The same AI system can carry very different risks depending on how it’s used. An ML model trained on financial data is low-risk when used for internal discussions but high-risk when preparing public financial statements. This distinction matters for classification under frameworks like the EU AI Act, and most organizations get it wrong by classifying the system rather than the use case.
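The point fits in a few lines of pseudocode-grade Python: classification keys on the (system, use case) pair, never the system alone. The tiers and names below are invented for illustration, not EU AI Act categories.

```python
# Illustrative only: tiers and names are invented, not EU AI Act categories.
RISK_BY_USE_CASE = {
    ("forecast_model", "internal_discussion"): "low",
    ("forecast_model", "public_financial_statements"): "high",
}

def classify(system: str, use_case: str) -> str:
    # The lookup key is the (system, use case) pair, not the system alone.
    return RISK_BY_USE_CASE.get((system, use_case), "unclassified")
```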

Applying Data Analysis in Internal Audit

Aug 2024

medium.com

Most IIA and ISACA guides on audit analytics stay high-level. As someone with a data science background doing audit work, I kept running into the gap between what those guides describe and what you actually need to do in practice.

This article bridges that gap. It adapts academic research methodology concepts to the practical world of internal audit, with code examples and concrete techniques you can apply to real engagements.

Algorithm Ossification

The Feedback Loop Between Algorithms and the Real World

Jul 2024

medium.com

Algorithms recommend what we watch and decide whether we get loans. But what happens when they start reshaping the world they were trained to describe? Algorithm ossification is the feedback loop where algorithmic decisions shape the data that future algorithms learn from, reinforcing existing patterns rather than reflecting reality.

Understanding AI System Classification and Risk Assessment

Jul 2024

medium.com

In the evolving field of artificial intelligence (AI) governance, two important concepts are often conflated: AI System Classification and AI System Risk Assessment. That confusion leads to gaps in managing the related AI risks. This article explains the two concepts and how they relate to each other, drawing on the major AI governance frameworks.

Understanding Dataset Splitting in Machine Learning

Jul 2024

medium.com

Best practices for splitting datasets in machine learning projects. Covers train/validation/test splits, stratification, time-series considerations, and common pitfalls that lead to data leakage.
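The stratification idea from the article can be sketched without any ML library: shuffle within each class, then slice off the test share per class so label proportions survive the split. The function name and fractions here are illustrative.

```python
import random
from collections import defaultdict

def stratified_split(items, labels, test_frac, seed=0):
    """Split items into train/test, preserving label proportions in each part."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for item, label in zip(items, labels):
        by_label[label].append(item)
    train, test = [], []
    for group in by_label.values():
        rng.shuffle(group)          # shuffle within each class
        k = round(len(group) * test_frac)
        test.extend(group[:k])      # take the test share per class
        train.extend(group[k:])
    return train, test

# 80 majority-class items (ids 0-79) and 20 minority-class items (ids 80-99)
labels = ["common"] * 80 + ["rare"] * 20
train_ids, test_ids = stratified_split(list(range(100)), labels, test_frac=0.2)
```

Fixing the seed and holding the test split constant across experiments is one of the simplest defenses against the leakage pitfalls the article covers.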

Data Ethics Checklist

Case Study

Jul 2021

github.com

A Data Ethics Checklist for data science projects, built for MADS SIADS593 (Ethics). Structured around the CRISP-DM model, it walks through ethical questions at each project phase: data collection, preparation, modeling, evaluation, and deployment. Covers transparency, accountability, fairness, and explainability.

Data Science Compass

A Personal Manifesto

Apr 2021

github.com

Written for MADS SIADS501 (Being a Data Scientist) at the University of Michigan, April 2021. A personal set of principles for how I approach data science work: methodology choices, ethical commitments, and quality standards. Still updated as my practice evolves.

Earlier Publications (LinkedIn Pulse)

Decomposing the Term Information Security Risk

Apr 2017

linkedin.com

Three ways to explain “what is an information security risk?” depending on your audience: a one-line formula, a standards-based definition, and a deep dive using ISO 31000:2009. Written from the perspective of an IS Auditor and Information Security Risk Manager.

How to Write (Better) Information Security Risks

Oct 2016

linkedin.com

Risk assessments often blur the line between risks, threats, and vulnerabilities. This matters because the choice of mitigation controls depends on getting that distinction right. The article walks through how to separate these concepts and write risks that are actually useful for decision-making.

Experience

Major Global Pharmaceutical Company

Senior Manager Internal Audit | 16+ Years

Apr 2010 - Present

20+ years across internal audit, information security, and global security, the last 16+ at one of the world's largest pharmaceutical companies.

Currently leading AI governance audits, reviewing how AI Strategy, risk frameworks, and GenAI deployment controls work in practice. Built the team’s Audit Planning Memorandum (APM) generation system from scratch (Copilot-optimized, YAML-driven), cutting planning document prep from days to hours. Created an interactive AI risk visualization and a prompt library for structured audit workflows.

Previously served as Senior Manager Global Security (2022-2024), where I built an LLM prototype for automated investigation report drafting (GPT-4, Llama2, in-house model), a sanctions extraction system using fuzzy matching, and automated case management with data quality assurance.

Earlier roles include Director of Information Security and Risk Management (team of 15, SOX IT compliance, GEMINI EPS vendor transition) and Senior IT Auditor for vendor audits.

Advisory & Industry Roles

AI Governance

Ongoing

Advisory Board Member at CISS LTD, providing strategic guidance on AI initiatives and technology governance. Member of Plateforme Tripartite, bringing industry, academia, and regulators together on AI ethics and digital innovation questions.

Key Certifications

Advanced in AI Audit™ (AAIA™)

ISACA

Jul 2025

ISACA’s credential for AI audit assurance, covering AI risk assessment, AI governance frameworks, and audit procedures for AI systems in regulated environments. The certification validates the ability to evaluate AI controls, assess model risk, and audit AI governance structures against standards like the EU AI Act and ISO 42001. Applied directly in AI governance audits in regulated pharma. Credential ID: 252865917.

AI in Health Care — From Strategies to Implementation

Harvard Medical School Professional Education

Oct 2025

Executive program at Harvard Medical School covering AI strategy, clinical implementation frameworks, and governance in health care. Topics include ML model validation in clinical settings, regulatory considerations for health AI, and building organizational readiness for AI adoption. Relevant to pharma audit work where AI systems touch patient data and clinical workflows. Credential ID: 164056593.

Certified SAFe 6 Agilist

Scaled Agile, Inc.

Jun 2025

SAFe 6 certification for Lean-Agile leadership at enterprise scale. Covers Lean portfolio management, Agile product delivery, and organizational agility. Applied in audit engagements assessing Agile transformation maturity and DevOps governance.

Certified Fraud Examiner (CFE)

Association of Certified Fraud Examiners (ACFE)

Feb 2024

Fraud detection, prevention, and investigation. Covers financial transaction analysis, fraud law, investigative techniques, and anti-fraud controls. Applied directly in data theft investigations and Data Loss Prevention (DLP) work during my tenure as Senior Manager Global Security (2022-2024), where I built automated investigation tools and led forensic data analytics.

Certified Data Protection Officer (CDPO)

PECB

Feb 2019

Data protection strategy, GDPR compliance, and privacy impact assessments. Covers breach management, regulatory liaison, and translating privacy requirements into operational practice. Applied across audit and governance roles in regulated pharma, particularly in engagements involving patient data, employee records, and cross-border data transfers. CDPO bridges the gap between legal privacy requirements and the technical controls that auditors actually test.

Certified Information Systems Security Professional (CISSP)

ISC2

Jan 2016

Information security architecture across all CBK domains: risk management, asset security, network protection, identity management, security operations, and secure development. Applied as Director of Information Security and Risk Management (2014-2017), leading an international team of 15, chairing the vendor governance board, and managing SOX IT compliance. The CISSP provides the security architecture foundation for evaluating AI system security controls in current audit work.

Certified Information Systems Auditor (CISA)

ISACA

Aug 2011

IT audit, control evaluation, and compliance. Foundation certification for all my audit work since 2011, covering IT governance, security assessment, and risk-based audit strategy. Starting with vendor IT audits and SOX testing, now applied to AI governance audits, cloud security assessments, and data governance reviews. The CISA methodology underpins how I structure every audit engagement. Credential ID: 1193963.

Education

University of Michigan

MSc Applied Data Science

Jan 2021 - Dec 2023

umich.edu

University of Michigan School of Information — ranked among the top data science programs in the U.S.

Completed the Master of Applied Data Science (MADS) program, graduating December 2023 with a 3.97 GPA. The curriculum covered machine learning, deep learning, NLP, statistical inference, data engineering, and visualization end-to-end.

AI-relevant coursework included supervised and unsupervised learning, neural networks, causal inference, and responsible AI (fairness, interpretability, ethics). Capstone project used causal inference to investigate TBI and dementia risk (Decode Dementia). Additional courses in generative AI fundamentals, LLM programming (Llama), and data science ethics taken post-graduation (2024).

This degree is the technical foundation for my current AI governance audit work — it’s one thing to review an AI risk framework on paper, another to actually understand what’s happening inside the model.