Hamilton Mann
Tech Executive, Best-selling Author of Artificial Integrity, Expert in AI for Good and Digital Transformation, Host of The Hamilton Mann Conversation
Why book Hamilton?
- Hamilton Mann was honoured with the Thinkers50 Distinguished Achievement Award in Digital Thinking in 2025
- He regularly speaks at events like the Web Summit, MIT Platform Strategy Summit, and the Global Peter Drucker Forum
- Hamilton authored the best-selling book, Artificial Integrity, selected as a Top AI book in 2025
Biography
Hamilton Mann ranks among the world’s leading experts on Digital Transformation and Artificial Intelligence for Good. Recognised by Thinkers50, he speaks globally on AI’s impact on society.
He has delivered keynotes at the Global Peter Drucker Forum, Web Summit, MIT Platform Strategy Summit, and the Wharton Neuroscience Summit.
In 2025, he was honoured with the Thinkers50 Distinguished Achievement Award in Digital Thinking. The previous year, Thinkers50 included him in its Radar Class, spotlighting 30 of the most influential rising thinkers.
Additionally, the Who Is Who International Academy named him World Eminent Man in Digital and AI for Good.
Hamilton is the originator of the concept of Artificial Integrity.
His book, Artificial Integrity: The Paths to Leading AI Toward a Human-Centered Future (Wiley), became a best-seller. The Next Big Idea Club selected it as a must-read in 2024, and in 2025 it ranked among the Top 8 AI books.
He has contributed chapters to three books: Driving Sustainable Innovation: How to Do Well While Doing Good; Connectedness: How the Best Leaders Create Authentic Human Connection in a Disconnected World; and GAIN: Demystifying GenAI for office and home.
An executive at Thales, a global leader in Aerospace, Cybersecurity, and Defence, Hamilton co-leads the group-wide AI initiative and digital transformation and oversees the company’s global digital marketing activities. His transformation work has been profiled by IMD in a business case study.
He is also a doctoral researcher in AI at the École Nationale des Ponts et Chaussées – Institut Polytechnique de Paris, a senior lecturer at INSEAD and HEC Paris, and a mentor at the MIT Priscilla King Gray (PKG) Center.
A regular contributor to Forbes, he writes about AI and its societal impact. His work also appears in SSIR, California Management Review, Rotman Magazine, and Knowledge@Wharton.
He advises the Ethical AI Governance Group (EAIGG) and serves on the board of the No More Plastic Foundation.
Hamilton hosts The Hamilton Mann Conversation, a podcast making AI and digital transformation accessible to all.
Topics
Artificial Integrity
Why is artificial integrity, not intelligence, the next frontier in AI? In the race to develop ever more intelligent AI systems, we’ve often overlooked a fundamental question: Are these systems aligned with our core human values? Hamilton Mann’s talk on Artificial Integrity emphasizes that without integrity, artificial intelligence can lead us astray.
This session delves into the concept of Artificial Integrity, exploring why it’s essential for AI systems to prioritize safety, fairness, values, explainability, and reliability over mere computational prowess.
In this session, you will learn:
- The limitations of current AI systems concerning ethical considerations and societal impact.
- The five critical pillars of Artificial Integrity: Safety, Fairness, Values, Explainability, and Reliability.
- How to implement the Society Values Model, AI Core Model, and Human and AI Co-Intelligence Model to ensure AI systems align with human-centric values.
- Strategies for organizations to transition from intelligence-led to integrity-led AI development.
Whether you’re developing AI systems, leading digital transformation, or shaping policy, this session will redefine what it means to build AI that serves humanity—by upholding integrity, not just intelligence, and shifting the focus from task performance for its own sake to integrity-led outcomes.
Technological Stockholm Syndrome
Why do we bond with the technology that undermines us, and how can we reclaim cognitive sovereignty? Have you ever felt emotionally attached to a digital tool that frustrates you, tracks you, or even makes decisions for you? You’re not alone. As we increasingly integrate AI and digital systems into our lives, a hidden psychological process is unfolding: one that mirrors a trauma response more than rational adoption. This talk introduces the concept of Technological Stockholm Syndrome, exploring how emotional dependence on technology forms not by choice but through the collapse of our internal defense mechanisms, and what we can do about it.
In this session, you will learn:
- Why successful digital adoption often masks a psychological submission, not empowerment
- How emotional identification with technology rewires our brains and compromises autonomy
- What the ten “functional gaps” of artificial integrity are and how to detect them in your own tools and workplace
- How to build resilience through cognitive counter-mechanisms that protect your mental, emotional, and decision-making sovereignty
Whether you’re a tech leader, policy maker, designer, or digital citizen, this session will challenge your assumptions about innovation and offer a new framework for ethical and intentional digital engagement.
The Flawed Assumption Behind AI Agents’ Decision-Making
Which AI agents do you need, and for what? Most AI systems are designed to follow one idealized path of decision-making: data in, analysis done, decision out. But human decision-making doesn’t work that way, and it never has. Real-world decisions are messy, emotional, biased, urgent, collaborative, and often irrational. Yet these realities are rarely reflected in the models we embed into AI agents today. This session challenges the one-size-fits-all approach to AI reasoning and introduces the concept of decision-making elasticity, where integrity, not autonomy, becomes the key to meaningful machine-led decisions.
In this session, you will learn:
- The seven human decision-making models, from intuition and emotion to rules, heuristics, and crisis response
- The cognitive blind spots in today’s AI agents, and why they struggle with ethical, commonsense, and reflective reasoning
- What it takes to move beyond performance toward purpose in machine-led decisions
- How to embed artificial integrity in AI design to ensure systems adapt to context, recognize their own limits, and defer to human input when appropriate
Whether you’re building AI systems, governing their deployment, or using them in high-stakes environments, this session will equip you to evaluate, “hire,” and design the AI agents you need: agents that respect human complexity and operate with a sense of artificial integrity, not just computational speed.