COMPLIANCE

Artificial Intelligence Act


The AI Regulation (RAI) is designed to address not only the technical aspects, but also ethical issues and application challenges of Artificial Intelligence in various sectors. It pays special attention to data quality, security, transparency, privacy, non-discrimination and human oversight. The aim is to ensure that the AI used in Europe meets the highest standards, in line with the values and fundamental rights of the European Union.

The AI Regulation applies to:

  • Providers placing AI systems on the market or putting them into service in the Union, whether they are established in the EU or in a third country.
  • Deployers of AI systems that are established or located in the EU.
  • Providers and deployers of AI systems established outside the EU, where the output generated by the AI system is used in the EU.
  • Importers and distributors of AI systems.
  • Manufacturers that place on the market or put into service an AI system together with their product and under their own name or trademark.
  • Authorized representatives of providers not established in the EU.
  • Affected persons located in the EU.

The RAI does not apply to AI systems used exclusively for national security, military or defense purposes. Nor does it apply to public authorities of third countries or to international organizations when they use AI systems within the framework of international agreements for law enforcement and judicial cooperation with the EU, provided that they offer sufficient guarantees for the fundamental rights and freedoms of individuals.

The AI Regulation establishes obligations that depend on the risk level of the AI, assessed according to the risk that the use of the technology may pose to a person's safety, health and fundamental rights. Specifically, AI systems are classified into four risk levels: unacceptable, high, limited and minimal.

Systems with an unacceptable risk are those considered a threat to people, and they will be banned. Examples include cognitive manipulation, social scoring and real-time remote biometric identification systems. There are, however, exceptions to this classification: for instance, "a posteriori" remote biometric identification systems, where identification occurs after a significant delay, will be allowed for the prosecution of serious crimes and only with prior judicial approval.

High-risk AI systems must comply with a series of obligations designed to ensure that the AI systems used in the European Union are safe and respect both the fundamental rights of individuals and the values and guarantees of the Union. These obligations include, among others: carrying out risk assessments throughout the entire life cycle, passing appropriate conformity procedures before being put into use, and being subject to oversight by natural persons.

Limited-risk AI systems must be transparent and allow users to make informed decisions: users should be informed that they are interacting with an AI, except in situations where this is obvious. This category includes AI systems that generate or manipulate image, audio or video content (e.g., deepfakes).

In terms of penalties, the highest fines may reach up to EUR 35 million or, in the case of a company, up to 7% of its total worldwide annual turnover for the previous financial year, whichever is higher.

If you need more information about any of our services, please contact us.
