Ethics and AI

AI Ethics, Governance, and Regulation

Author: Proximus NXT
22/01/2026

The rapid rise of artificial intelligence presents immense opportunities: improving healthcare, automating tasks to boost productivity, supporting learning and training, and assisting in daily life. But these opportunities come with significant challenges. How can we ensure that AI remains human-centered, without misuse or abuse? Three dimensions are inseparable: ethics, governance, and regulation.

 

Ethics: Establishing Foundational Principles

AI ethics is not just about ‘avoiding bias.’ It involves defining the values and principles that should guide the development and use of these technologies:

  • Fairness and Non-Discrimination: ensuring recruitment AIs do not disadvantage women or minority groups.
  • Transparency and Explainability: enabling users to understand how an algorithm makes its decisions.
  • Respect for Human Rights: ensuring AI applications do not lead to abusive surveillance or restrictions on freedom.
  • Sustainability: reducing the ecological footprint of AI models, which are often energy-intensive.

In November 2021, all 193 UNESCO member states adopted the Recommendation on the Ethics of AI¹, a normative text aimed at establishing these ethical foundations on a global scale.

 

Governance: Translating Principles into Practice

Defining values is not enough; they must also be applied. This is the role of AI governance, which involves implementing concrete control and oversight mechanisms. This includes:

  • Internal ethics committees to review sensitive projects,
  • Regular audit systems to detect and correct biases,
  • Risk management processes comparable to those used in cybersecurity,
  • Explainability mechanisms that make algorithms understandable to decision-makers and users.
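As an illustration of the second point, a recurring bias audit can start with something as simple as comparing selection rates across groups. The sketch below is a minimal, hypothetical example (the data, group labels, and the use of the classic "four-fifths" heuristic are illustrative assumptions, not a description of any specific company's audit process):

```python
# Minimal sketch of a recurring bias audit on binary hiring decisions.
# Data, group labels, and the four-fifths threshold are illustrative.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, hired: bool) -> selection rate per group."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        hires[group] += int(hired)
    return {g: hires[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Flag groups whose selection rate falls below 80% of the
    best-served group's rate (the 'four-fifths rule' heuristic)."""
    best = max(rates.values())
    return {g: (r / best) >= 0.8 for g, r in rates.items()}

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)   # A: 2/3, B: 1/3
flags = four_fifths_check(rates)     # B falls below the 80% threshold
```

Run periodically against production decisions, a check like this gives an ethics committee a concrete, repeatable signal rather than a one-off review.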

In March 2023, figures such as Elon Musk, Steve Wozniak, and Yuval Noah Harari signed an open letter calling for a moratorium on the development of the most powerful AI². Their goal: to create a pause for reflection to define robust governance protocols before these technologies spiral out of control. The Future of Life Institute subsequently published a follow-up report, Policymaking in the Pause³, which explores concrete ways to translate this governance into public policy.

 

Regulation: Establishing the Legal Framework

Finally, regulation aims to give the use of AI a binding legal framework, and it is increasingly recognized as a strategic priority.

In Europe, the AI Act, which came into force in August 2024, sets strict rules, particularly for high-risk systems (healthcare, education, public safety, etc.). It establishes a risk classification (unacceptable, high, limited, minimal) and defines the corresponding obligations.

In the United States, the Executive Order on AI (October 2023)⁵ provides guidelines for government agencies and encourages stricter oversight of large AI models.

In Canada, a voluntary Code of Conduct⁶ commits private actors to develop AI responsibly.

Tech companies are also implementing concrete tools, such as watermarking AI-generated content (Google, Meta, OpenAI, Midjourney) or the Coalition for Content Provenance and Authenticity (C2PA)⁷, launched by Adobe, Microsoft, Intel, and others.
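The core idea behind provenance schemes like C2PA is a signed manifest bound to the content, so that any tampering is detectable. The sketch below is a deliberately simplified illustration, not the real C2PA format or API: it uses an HMAC over a JSON manifest as a stand-in for the certificate-based signatures the actual standard relies on, and all names are assumptions.

```python
# Simplified provenance-manifest sketch (NOT the real C2PA format).
# An HMAC with a demo key stands in for certificate-based signing.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # assumption: shared secret for this sketch only

def make_manifest(content: bytes, generator: str) -> dict:
    """Bind a content hash and generator label together, then sign both."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,  # e.g. which AI model produced the content
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload,
                                     hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the content hash still matches."""
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and claimed["content_sha256"]
                == hashlib.sha256(content).hexdigest())

img = b"...generated image bytes..."
manifest = make_manifest(img, "example-model")
ok = verify(img, manifest)            # untouched content verifies
tampered = verify(img + b"x", manifest)  # any edit breaks the chain
```

The real standard adds certificate chains, nested claims, and redaction rules, but the verify-or-reject logic shown here is the essential mechanism.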

Europol has additionally warned of the tangible risks of poorly regulated AI: large-scale disinformation, automated scams, and the creation of malicious software⁴.

 

A Shared Responsibility

AI ethics, governance, and regulation cannot be separated. Regulators set the framework, companies must implement concrete practices, and civil society plays a watchdog role.

At Proximus NXT, we are committed to strictly adhering to European regulations, particularly those related to data protection and privacy. We integrate explainability and control mechanisms to ensure AI serves the interests of businesses and their clients, while upholding strong ethical principles.

AI is not just a matter of technology—it is also a matter of trust. And trust cannot be imposed; it is built by combining clear ethical values, operational governance, and effective regulation. Only in this way can AI become a driver of sustainable progress rather than a source of risk.

 

Sources:

1. Recommendation on the Ethics of Artificial Intelligence, UNESCO
2. Pause Giant AI Experiments: An Open Letter, Future of Life Institute
3. Policymaking in the Pause, Future of Life Institute – https://futureoflife.org/document/policymaking-in-the-pause/
4. The impact of Large Language Models on Law Enforcement, Europol
5. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, The White House
6. Ottawa adopts a voluntary code of conduct for the AI industry, Radio-Canada
7. Coalition for Content Provenance and Authenticity (C2PA)