Key data
| Regulation | Resolution of April 10, 2026, from the Under-Secretariat, publishing the Agreement between the Spanish Agency for Supervision of Artificial Intelligence and the Institute of Women, O.A., for collaboration in the supervision of AI systems, the protection of fundamental rights, literacy and the development of think tanks in the field of Artificial Intelligence |
|---|---|
| BOE Publication | April 20, 2026 |
| Entry into force | April 10, 2026 |
| Signatory bodies | AESIA (Spanish Agency for Supervision of Artificial Intelligence) and Institute of Women, O.A. |
| Affected sectors | Companies developing and using AI systems, especially in HR, finance and services |
| Category | Data Protection / AI Supervision |
| Identified risk areas | Personnel selection, credit granting, health and public services |
If you have an AI system that decides who moves to the next phase of a selection process, who receives a loan, or what medical treatment is recommended, your company falls squarely within the scope of this new supervision. The agreement signed between the Spanish Agency for Supervision of Artificial Intelligence (AESIA) and the Institute of Women, published in the BOE on April 20, 2026 (reference BOE-A-2026-8672), establishes a framework for collaboration to supervise these systems from a gender and fundamental-rights perspective.
The agreement does not impose fines on its own, but it does expand the scope and criteria of existing regulatory supervision. Companies already adapting to the European AI Regulation should be aware that, in Spain, that adaptation now includes additional, specific controls on gender bias.
What does this regulation establish?
The agreement articulates three concrete lines of action between AESIA and the Institute of Women:
- Joint supervision of AI systems with criteria for non-discrimination based on gender and protection of fundamental rights.
- AI literacy with an equality focus, aimed at professionals and citizens.
- Joint think tanks to develop methodologies and frameworks for analyzing gender bias in AI systems.
From a business perspective, the most relevant element is the first: regulatory supervision will explicitly incorporate non-discrimination criteria based on gender. This means that supervisory bodies can request information, conduct audits or require modifications to AI systems operating in the sectors identified as higher risk.
| Area of application | Type of AI system affected |
|---|---|
| Human Resources | Selection systems, candidate screening, performance evaluation |
| Finance and credit | Credit scoring systems, loan approval, insurance |
| Health | Diagnostic systems, triage, treatment recommendation |
| Public services | Resource allocation systems, automated citizen service |
Economic and operational impact
This agreement does not establish its own sanctions or direct amounts. Its economic and operational impact materializes through the supervision mechanisms it activates:
- Additional audits: AI systems in the indicated sectors may be subject to specific reviews on gender bias, beyond those already provided for by the European AI Regulation.
- Modification requirements: If an audit detects gender bias in a system, the company may receive requirements to correct it, with the operational and technical costs that this entails.
- Cost of preventive adaptation: Companies that act before being audited will have to invest in model review, bias documentation and, if necessary, algorithm redesign.
- Reputational risk: An audit with a negative result in terms of gender discrimination can have direct reputational impact, especially in companies with public exposure.
The agreement strengthens the application of the European AI Regulation in Spain with an additional layer of control. Companies that were already investing in AI Regulation compliance should review whether that compliance explicitly includes the analysis of gender bias.
Who does it affect?
- Companies developing AI systems that market solutions for personnel selection, credit scoring, health or public services.
- Companies using AI systems in HR processes (CV screening, automated interviews, performance evaluation).
- Financial entities that use AI models for credit granting or risk assessment.
- Health sector companies with automated diagnostic or triage systems.
- Public administrations and companies providing public services with automated allocation systems.
- HR and Compliance departments of any medium or large company with automated selection processes.
- Technology providers that integrate AI into their products for the above sectors.
Practical example
A company with 200 employees uses AI-based personnel selection software to screen CVs received in its hiring processes. The system automatically scores candidates and filters who moves to the interview stage.
With this agreement in force, AESIA—in collaboration with the Institute of Women—can initiate supervision of that system to verify whether it generates gender bias: for example, if it penalizes applications from women in certain technical or management positions, or if it favors male profiles in historically male-dominated sectors.
If supervision detects a bias, the company may receive a requirement to modify the model, document the changes and prove that the corrected system does not discriminate based on gender. That process has a real technical cost: model review, parameter adjustment, new validation and documentation for the regulator. Acting preventively—before receiving that requirement—is cheaper and avoids the reputational risk associated with an audit with a negative result.
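The kind of check described above can be approximated internally before any audit arrives. The following is a minimal illustrative sketch, not a method prescribed by the agreement or by AESIA: it computes selection rates by gender for a CV-screening system and their ratio (the "disparate impact" or selection-rate ratio). The group labels, the synthetic data, and the idea of comparing rates are assumptions chosen for illustration.

```python
# Illustrative sketch only: a minimal gender-bias screen for a CV-scoring
# system, using the selection-rate ratio (disparate impact). The group
# labels and synthetic data below are assumptions for illustration, not
# requirements taken from the agreement.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (gender, selected) pairs, selected is bool.
    Returns the fraction of candidates selected, per gender."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for gender, was_selected in records:
        totals[gender] += 1
        if was_selected:
            selected[gender] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(records, group_a="female", group_b="male"):
    """Ratio of selection rates between two groups; values well below 1.0
    suggest the screen disadvantages group_a relative to group_b."""
    rates = selection_rates(records)
    return rates[group_a] / rates[group_b]

# Synthetic example: 30% of women vs. 50% of men pass the screen.
candidates = ([("female", True)] * 30 + [("female", False)] * 70
              + [("male", True)] * 50 + [("male", False)] * 50)
ratio = disparate_impact_ratio(candidates)
print(f"selection-rate ratio: {ratio:.2f}")  # prints 0.60
```

A ratio this far below 1.0 would be a reason to review the model before a supervisor asks; what threshold a regulator would apply is not stated in the agreement.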
What should companies do now?
- Identify all AI systems in use that intervene in selection, credit, health or public service processes. Include both those developed internally and those acquired from third parties.
- Review whether those systems include gender bias analysis in their technical documentation and validation processes. If they don't, it's a gap that needs to be corrected.
- Ask technology providers to demonstrate that their systems have been validated against gender bias, especially if they operate in the sectors indicated by the agreement.
- Update regulatory compliance documentation to include the gender perspective as an explicit criterion in the evaluation of AI systems, aligning it with the requirements of the European AI Regulation and this agreement.
- Assign internal responsibility (DPO, Compliance officer or HR manager) for monitoring requirements that may arise from joint AESIA-Institute of Women supervision.
- Consult with an AI compliance specialist if your company operates in any of the identified high-risk sectors, before an audit arrives.
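The first two steps above, inventorying AI systems and checking which lack documented bias analysis, can be sketched as a simple structured record. This is an illustrative assumption about how such an inventory might be organized; the field names and risk areas are taken loosely from the agreement's identified sectors, and no particular format is prescribed.

```python
# Illustrative sketch: a minimal inventory record for AI systems in use,
# flagging those in the risk areas named by the agreement that lack
# documented gender-bias analysis. Field names are assumptions for
# illustration, not a prescribed format.
from dataclasses import dataclass

RISK_AREAS = {"personnel selection", "credit", "health", "public services"}

@dataclass
class AISystemRecord:
    name: str
    area: str                       # business area where the system operates
    provider: str                   # "internal" or a third-party vendor
    bias_analysis_documented: bool  # gender-bias analysis in validation docs?

    @property
    def needs_review(self) -> bool:
        """High-risk area without documented gender-bias analysis."""
        return self.area in RISK_AREAS and not self.bias_analysis_documented

inventory = [
    AISystemRecord("CV screener", "personnel selection", "VendorX", False),
    AISystemRecord("FAQ chatbot", "customer support", "internal", False),
]
gaps = [s.name for s in inventory if s.needs_review]
print(gaps)  # prints ['CV screener']
```

The chatbot is excluded because it sits outside the identified risk areas; the CV screener is flagged because it is in a risk area and has no documented bias analysis.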
Frequently asked questions
What AI systems fall under supervision by this agreement?
Systems operating in personnel selection, credit granting, health, and public services are the ones expressly identified as carrying the greatest risk of gender bias. Supervision is exercised jointly by AESIA and the Institute of Women.
What can they require from me as a company if I use AI in personnel selection?
Supervisory bodies can request documentation on how the system works, request audits to verify the absence of gender bias, and require modifications if bias is detected. They can also request evidence that the system has been validated against gender discrimination criteria.
Is this agreement mandatory or voluntary?
The agreement itself is mandatory for AESIA and the Institute of Women—it commits them to joint supervision. For companies, the mandatory nature comes from the European AI Regulation and Spanish data protection law, which this agreement reinforces with specific gender criteria.
What happens if I don't comply with a supervision requirement?
Non-compliance with a requirement from a regulatory body can result in administrative proceedings, fines under the European AI Regulation or data protection law, and reputational damage. The specific penalties depend on the severity and the applicable regulation.
Do I need to modify my AI systems immediately?
Not immediately, but you should conduct a review now to identify potential gender bias risks. Acting preventively is more efficient than waiting for a supervision requirement. The agreement is in force as of April 10, 2026, so supervision can begin at any time.
Where can I find more information about this agreement?
The full text is published in the BOE (Spanish Official Gazette) under reference BOE-A-2026-8672. You can also consult AESIA's website for guidance on AI system supervision and compliance requirements.