Key data
| Regulation | Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law |
|---|---|
| Official reference | OJ:L_202601081 |
| Publication | May 13, 2026 |
| Entry into force | Not specified |
| Affected parties | Technology companies, public administrations and entities that develop or use AI systems |
| Category | International treaty (Council of Europe) |
| Relationship with other regulations | Complements the European AI Regulation (AI Act) |
| Geographic scope | Council of Europe member states and non-EU countries that accede |
Companies using artificial intelligence to make decisions about people now face a new international obligation. The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, published on May 13, 2026 with reference OJ:L_202601081, is the world's first legally binding international treaty on AI.
It is not a statement of intent. It is a treaty with legal force that obliges signatory states to ensure that AI systems respect human rights, democracy and the rule of law. And that has direct consequences for organizations that develop or deploy AI.
What does this regulation establish?
The Framework Convention establishes obligations structured around three main pillars, which states must transpose into their legal systems and which directly affect organizations operating with AI:
| Obligation | What it consists of |
|---|---|
| Impact assessments | AI systems must undergo impact assessments during their lifecycle, especially when they may affect fundamental rights |
| Transparency | Organizations must ensure that the operation of their AI systems is understandable and explainable to affected persons |
| Accountability | There must be an identifiable party responsible for decisions made or assisted by AI systems, with clear oversight mechanisms |
The convention covers the complete lifecycle of AI systems: from their design and development to their deployment and retirement. It is not limited to the usage phase.
Regarding its relationship with the European AI Regulation (AI Act), this convention does not replace or modify it: it complements it. While the AI Act primarily regulates technical and market risks, this convention adds a human rights layer with broader geographic scope: it covers Council of Europe member states outside the EU, and third countries may also accede.
Economic and operational impact
The impact is not a direct fine or fee. It is an operational adaptation cost that organizations must assume to comply with the new standards. The main spending areas are:
- Audits and impact assessments: Companies that have not yet implemented formal impact assessment processes for their AI systems will need to design and integrate them into their workflows.
- Documentation and transparency: It will be necessary to document how AI systems work, what data they use and how they affect people, which requires technical and legal resources.
- Review of compliance frameworks: Organizations already complying with the AI Act will need to verify whether their compliance framework also covers the human rights dimension required by this convention.
- Internal training: Technology, legal and compliance teams must understand the new obligations and how to apply them in the lifecycle of each AI system.
For companies that already have an AI Act compliance program, the additional effort will be minimal. For those that have not yet started that process, this convention adds urgency to adaptation.
Who does it affect?
- Technology companies that develop AI systems, platforms, models or tools based on artificial intelligence.
- Public administrations that use AI in decision-making processes affecting citizens (grant allocation, case evaluation, surveillance, etc.).
- Private sector entities that deploy AI in areas with impact on fundamental rights: personnel selection, credit scoring, healthcare, educational services, insurance.
- Organizations with operations in non-EU countries that accede to the convention, given its broader geographic scope than the AI Act.
- Legal advisors, consulting firms and audit companies that provide regulatory compliance services to clients with AI systems.
Practical example
A human resources company uses an AI system to filter applications and score candidates before the interview. Under this convention, that company must:
- Conduct an impact assessment of that system, documenting how it may affect candidates' rights (non-discrimination, privacy, equal opportunities).
- Ensure transparency: candidates must be able to know that an AI system has been involved in evaluating their application.
- Establish an accountability mechanism: there must be a responsible person who can review, explain and, if necessary, correct the system's decisions.
If the company already complies with the AI Act for this type of high-risk system, it probably has much of this work done. If not, this convention adds another reason to start adaptation without delay.
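The three steps in the example above can be captured as a simple pre-deployment checklist. This is a sketch under assumed names (the structure, keys, and gating function are not drawn from the convention's text):

```python
# Hypothetical compliance checklist for the CV-screening system described above.
screening_compliance = {
    "impact_assessment": {
        "rights_considered": ["non-discrimination", "privacy", "equal opportunities"],
        "completed": True,
    },
    "transparency": {
        "candidates_informed": True,  # candidates know an AI system scored their application
    },
    "accountability": {
        "responsible_person": "HR compliance lead",  # can review, explain, and correct decisions
    },
}

def ready_to_deploy(record: dict) -> bool:
    """All three pillars must be satisfied before the system is used on candidates."""
    return (
        record["impact_assessment"]["completed"]
        and record["transparency"]["candidates_informed"]
        and record["accountability"]["responsible_person"] is not None
    )
```

The point of gating deployment on all three checks is that the obligations are cumulative: an impact assessment without a named accountable person, or vice versa, would still leave the organization non-compliant.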
What should companies do now?
- Inventory AI systems in use: Identify all AI systems your organization develops or deploys, especially those involved in decisions affecting people.
- Assess whether there is impact on fundamental rights: For each system identified, analyze whether it may affect rights such as privacy, non-discrimination, access to services or freedom.
- Review existing compliance framework: If you already have an AI Act compliance program, verify whether it also covers the human rights requirements demanded by this convention. If you don't have any program, now is the time to start one.
- Implement or strengthen impact assessments: Design a formal impact assessment process for AI systems with potential impact on rights, and integrate it into the system's lifecycle from design onwards.
- Document transparency and accountability: Ensure there is clear documentation about how each system works, who is responsible for its decisions and how affected persons can challenge them.
- Train involved teams: Legal, technology and compliance must understand the convention's obligations and how to apply them in daily practice.
- Monitor national transposition: Since the entry into force date has not been specified, closely follow how each signatory state transposes the convention's obligations into its national legislation, as this will determine the concrete timelines and requirements applicable.
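The first two steps above (inventory, then rights-impact screening) can be sketched as a simple triage pass. The area names, flags, and example systems below are invented for illustration; real triage criteria would come from legal review.

```python
# Hypothetical first-pass triage of an AI-system inventory: flag systems that
# make decisions about people, or operate in rights-sensitive areas, so they
# are prioritized for a formal fundamental-rights impact assessment.
RIGHTS_SENSITIVE_AREAS = {"hiring", "credit scoring", "healthcare", "education", "insurance"}

inventory = [
    {"name": "cv-screening", "area": "hiring", "decides_about_people": True},
    {"name": "log-anomaly-detector", "area": "it-ops", "decides_about_people": False},
    {"name": "loan-prescoring", "area": "credit scoring", "decides_about_people": True},
]

def triage(systems: list[dict]) -> list[str]:
    """Return names of systems to prioritize for an impact assessment."""
    return [
        s["name"]
        for s in systems
        if s["decides_about_people"] or s["area"] in RIGHTS_SENSITIVE_AREAS
    ]
```

Here `triage(inventory)` would surface the CV-screening and loan-prescoring systems while leaving the purely operational anomaly detector for a later review round.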
Frequently asked questions
Which companies does the Council of Europe Framework Convention on AI affect?
It affects technology companies, public administrations and any entity that develops or deploys AI systems in the public sector or in areas with impact on people's fundamental rights.
How does this convention differ from the European AI Regulation (AI Act)?
The Framework Convention complements the AI Act by adding a specific human rights dimension and has broader geographic scope, including non-EU countries that accede to the treaty.
What specific obligations does this AI convention impose on companies?
Companies must conduct impact assessments on their AI systems, ensure transparency in their operation and establish accountability mechanisms throughout the lifecycle of the AI system.
When does the Framework Convention on AI and Human Rights enter into force?
The convention was published on May 13, 2026. The entry into force date has not been specified in the published text.