- The AI Act is a legislative proposal to regulate Artificial Intelligence (AI) systems within the European Union (EU). It aims to ensure safety, transparency, and respect for fundamental rights in the development and use of AI.
- The current debate has focused on the tension between, on the one hand, the need for regulation to prevent the major risks associated with AI, in particular generative AI, and, on the other, the need to establish a competitive framework for European AI companies. Explain has developed the first technology applying LLMs to public data and wishes to contribute to the debate by sharing feedback from two years of deploying its AI technology with its customers.
Objectives of the Explain approach
- Understand and anticipate the repercussions of this legislation for its activity and development in Europe and in the world.
- Collaborate closely with stakeholders to co-build a regulatory environment conducive to innovation and the responsible development of AI in Europe.
Major implications of the proposed legislation for Explain
- Explain product classification: the AI Act proposes a classification of AI systems according to their level of risk, from “unacceptable” to “minimal”. Explain's product, which focuses on the analysis of public data, could be classified as “limited risk” or “minimal risk”, depending on the specific legislative criteria. Such a classification would entail various transparency and compliance obligations.
- Responsibility: the AI Act could introduce accountability mechanisms for errors or failures in AI systems. Even though Explain's product is designed to be more reliable than general-purpose models such as ChatGPT, it is critical to have redress mechanisms in place to deal with potential issues.
- Transparency: the AI Act highlights the importance of transparency in AI systems. Explain may need to provide detailed documentation explaining the operating principles of its model, its limitations, and the means implemented to ensure the reliability of its results.
- Compliance and audits: the AI Act could require Explain to undergo regular third-party audits to ensure that its AI system complies with legislative requirements.
- Data protection and processing: because Explain indexes its own data, it may be subject to strict requirements on data protection, origin, and processing. In addition, any use of sensitive or personal data could entail additional obligations.
1. Clarify prohibited and high-risk uses
Not all AI systems present the same level of risk. It is crucial that regulations reflect this, allowing low-risk applications, like Explain's, to benefit from a less restrictive framework to encourage innovation.
2. Avoid unnecessary red tape
Drawing on the GDPR, whose implementation balanced compliance costs, the AI Act should avoid unnecessary administrative burdens. Audits, in particular, should be proportionate to the size and resources of businesses. The AI Act should also clarify how compliance audits should be conducted and who is responsible for them (will companies have “AI ethics officers”, equivalent to the data protection officers introduced by the GDPR?).
3. Clarify the relationship between the AI Act and the GDPR
Most AI systems today rely on the processing of large volumes of data. For this reason, it is essential that the draft AI Regulation be better aligned with the GDPR. As it stands, the text creates new risks of legal uncertainty because it is difficult to reconcile with existing regulations.
1. Creation of an AI Act steering committee
- Objective: ensure a flexible and informed adaptation of the regulation.
- Composition: representatives of various companies in the sector, including startups, ethics experts, technology lawyers, and members of civil society. The diversity of members would ensure a comprehensive and balanced perspective.
- Role: review legislative proposals, suggest improvements, and ensure that the interests of startups are taken into account. It would also serve as a platform for sharing feedback and best practices.
- Benefits: by involving industry players from the start, the committee could help prevent the unintended consequences of regulation while ensuring smoother implementation.
2. Creation of an “EU AI” quality label
- Objective: establish a trusted standard for AI solutions developed in Europe.
- Criteria: the label would be awarded to companies meeting strict criteria for transparency, ethics, respect for personal data, and security.
- Procedure: an independent audit could be carried out to assess the compliance of companies seeking the label.
- Benefits: in addition to recognizing quality and reliability, the label could facilitate market access and increase the international competitiveness of European companies. For users, the label would be synonymous with trust and security.
3. Supporting R&D
- Objective: Explain recommends strengthening initiatives to encourage research and development in the field of AI.
- Concrete measures:
- Grant programs: establishment of dedicated funding programs to support AI R&D projects, with particular attention to startups and SMEs.
- Tax incentives: introduction of tax credits or deductions for companies that invest significantly in AI R&D.
- Public-private partnerships: encouraging collaboration between universities, research centers, and businesses to promote knowledge transfer and innovation.
- Benefits: these measures would stimulate innovation while ensuring that AI developments meet high ethical and security standards.
Learning from the GDPR: the GDPR succeeded in clarifying data management practices without imposing an excessive financial burden on businesses. It also shed light on the practices of tech giants, improving transparency for users. The AI Act is an opportunity to help European citizens understand how LLMs work and what they really enable, just as the GDPR made it possible to understand how our data was used.
Training and educating: rather than adopting a punitive approach, the AI Act should highlight the importance of training and education for AI professionals. With a well-trained and ethically aware workforce, the risks associated with AI can be minimized naturally.
Ensuring fair access to data: for businesses working with public data, it is crucial to have equitable and non-discriminatory access to this data. Regulation should encourage transparency and fairness in access to databases, ensuring that businesses can innovate.
Recognizing industry standards: in the case of Explain, which focuses on public data, the recognition of sector-specific standards and best practices could facilitate compliance while encouraging innovation.
Promoting harmonization of standards: harmonizing AI standards and regulations at the international level is essential to avoid disparities between regions.
Explain is among the first companies to have found a concrete application for Large Language Models. After two years of development, Explain raised 6 million euros (Les Échos) to finance the growth of its product, the first AI assistant for professionals working with the public sector. Explain allows its customers (including major European groups such as Engie, Veolia, and RWE) to optimize their business development in the field by automating their most repetitive and tedious tasks. To do so, Explain's AI finds, extracts, analyzes, and synthesizes information drawn from over 50 million documents, many of which had never been indexed before, including by Google. Explain's customers report that they can process three times more information in one-fifth of the time.