The EU Artificial Intelligence (AI) Act (Draft 1) was recently published. Being leaders in AI for the public sector, ICS.AI have been following this for some time and will be conforming to the Act and its amendments. The Act is welcome, needed and appropriate.
The proposed regulation seeks to establish comprehensive rules and standards to govern safe use and development of AI within the EU. It aims to safeguard user rights, ensure safety, promote transparency, and stimulate innovation in AI research.
We’ve put together a Q&A answering some of the most pertinent questions about the EU AI Act, touching on its definition, application, considerations for specific high-risk AI systems, the penalties for non-compliance, and much more.
This guide aims to provide invaluable insights to help you understand the scope and implications of this influential Act, and we will continue to provide updates as they happen.
Q1: What is the EU AI Act? The EU AI Act is a proposed regulation by the European Union that aims to set rules and standards for the use and development of AI within the EU. It is designed to ensure that AI is used in a way that is safe and respects the rights and freedoms of individuals.
Q2: Who does the EU AI Act apply to? The EU AI Act applies to providers who place AI systems on the market or put them into service in the EU, regardless of whether they are established within the EU or in a third country. It also applies to users of AI systems located within the EU, and to providers and users of AI systems located in a third country if the output produced by the system is used in the EU.
Q3: What is considered a high-risk AI system under the EU AI Act? High-risk AI systems are those that pose significant risks to the health, safety, or fundamental rights of persons. This includes AI systems used in critical infrastructures, education, employment, essential private and public services, law enforcement, migration, asylum and border control, and administration of justice and democratic processes.
Q4: What are the requirements for high-risk AI systems? High-risk AI systems must meet certain requirements before they can be put on the market, including conformity assessments, risk management systems, technical documentation, record-keeping, transparency and provision of information to users, human oversight, and robustness, accuracy, and security.
Q5: What are the prohibited AI practices under the EU AI Act? The EU AI Act prohibits AI practices that cause physical or psychological harm to people, manipulate human behaviour through subliminal techniques or by exploiting vulnerabilities, enable general-purpose social scoring by governments, or involve the use of real-time remote biometric identification systems in public spaces by law enforcement, except in specific circumstances.
Q6: What are the consequences for non-compliance with the EU AI Act? Non-compliance with the EU AI Act can lead to significant consequences. For instance, AI systems that do not meet the requirements set out in the Act may not be placed on the market or put into service. Providers who fail to comply with the Act may face penalties, including fines.
Q7: How does the EU AI Act protect fundamental rights? The EU AI Act is designed to protect fundamental rights by ensuring that AI systems are used in a way that respects human dignity, freedom, democracy, equality, the rule of law, and respect for human rights. It prohibits AI practices that are harmful or discriminatory and requires high-risk AI systems to meet strict requirements to ensure they are safe and respect users' rights.
Q8: How does the EU AI Act promote transparency in AI? The EU AI Act promotes transparency by requiring providers of AI systems to provide clear and adequate information about the system, including its capabilities, limitations, and the manner in which it is to be used. This information must be provided in a manner that is understandable to the user.
Q9: What is the role of national competent authorities in the EU AI Act? National competent authorities have various roles under the EU AI Act, including carrying out market surveillance activities, taking appropriate measures to address non-compliance, and ensuring that penalties for non-compliance are applied.
Q10: How does the EU AI Act impact AI research? The EU AI Act is designed to support AI research and innovation within the EU. It includes provisions to ensure that the rules do not unnecessarily restrict the development and use of AI. Regulatory sandboxes could be very useful for the promotion of AI and are welcomed by certain stakeholders, especially business associations. The Act encourages a risk-based approach, which is considered a better option than blanket regulation of all AI systems. The types of risks and threats should be assessed on a sector-by-sector and case-by-case basis, and risks should also be calculated taking into account the impact on rights and safety. This approach allows for flexibility and innovation in the field of AI research.
Q11: What is the impact of AI systems on the access to and enjoyment of certain essential private and public services? AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination against persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origin, disability, age, or sexual orientation, or create new forms of discriminatory impacts. (Page 23)
Q12: What are the concerns regarding AI systems used in law enforcement? Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. (Page 23)
Q13: What is the scope of the EU AI Act? This Regulation should also apply to Union institutions, offices, bodies and agencies when acting as a provider or user of an AI system. AI systems exclusively developed or used for military purposes should be excluded from the scope of this Regulation where that use falls under the exclusive remit of the Common Foreign and Security Policy regulated under Title V of the Treaty on the European Union (TEU). (Page 17)
Q14: What is the risk-based approach followed by the EU AI Act? In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems. (Page 17)
Q15: What are the considerations for AI systems used in education or vocational training? AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions, or for evaluating persons on tests as part of or as a precondition for their education, should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed and used, such systems may violate the right to education and training as well as the right not to be discriminated against, and perpetuate historical patterns of discrimination. (Page 22)
Q16: What are the considerations for AI systems used in employment, workers management and access to self-employment? AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact future career prospects and livelihoods of these persons. Such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy. (Page 22)
Q17: What are the obligations of high-risk AI systems under the EU AI Act? High-risk AI systems are permitted on the European market subject to compliance with certain mandatory requirements and an ex-ante conformity assessment. The classification of an AI system as high-risk is based on the intended purpose of the AI system, in line with existing product safety legislation. Therefore, the classification as high-risk is not based on the AI system itself, but rather on its intended use and the potential harm it could cause.
High-risk AI systems include those used to evaluate the credit score or creditworthiness of natural persons, as they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for determining whether public assistance benefits and services should be denied, reduced, revoked or reclaimed by authorities are also classified as high-risk.
In the law enforcement context, AI systems intended to be used for individual risk assessments, as polygraphs and similar tools, to detect the emotional state of a natural person, or to detect ‘deep fakes’ are classified as high-risk. These systems need to meet high standards of accuracy, reliability, and transparency to avoid adverse impacts, retain public trust and ensure accountability and effective redress.
The EU AI Act also establishes common normative standards for all high-risk AI systems to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights. These standards should be non-discriminatory and in line with the Union’s international trade commitments. (Pages 10, 17, 23, 40)
Q18: What is the role of the European Artificial Intelligence Board? The European Artificial Intelligence Board should be established to facilitate the consistent application of this Regulation across the Union. The Board should be composed of representatives of the national supervisory authorities and the Commission. The Board should have a purely advisory role and should not have the power to adopt legally binding decisions. (Page 39)
Q19: What are the considerations for AI systems used in migration, asylum and border control? AI systems used in migration, asylum and border control, notably for verifying the authenticity of travel documents, for assessing eligibility for a visa or for international protection, for managing and controlling the external borders, or for detecting irregular stay in the territory of the Member States, should be classified as high-risk, since they may have a significant impact on the rights and freedoms of individuals, including their right to asylum and protection from refoulement. (Page 22)
Q20: What are the considerations for AI systems used in administration of justice and democratic processes? AI systems used in the administration of justice and democratic processes, notably for predicting the occurrence of criminal offences, for assessing the risk of recidivism, for predicting and assessing the risk of victimisation, for informing decisions on detention, for evaluating the reliability of evidence, for informing judicial decisions on sentencing, and for profiling natural persons to make predictions about their behaviour or personal aspects should be classified as high-risk. The use of such systems may have a significant impact on the rights and freedoms of individuals, including their right to a fair trial and the presumption of innocence. (Page 22)
Q21: When will the Act come into force? The Act may not come into force until 2026, and revisions are likely, given how rapidly AI is advancing. The legislation has already gone through several updates since drafting began in 2021.
You can find out more about the proposed EU Artificial Intelligence Act here.
Get in touch if you’d like to discuss the Act in more detail.