[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article_59538":3},{"tableOfContents":4,"markDownContent":5,"htmlContent":6,"metaTitle":7,"metaDescription":8,"wordCount":9,"readTime":10,"title":7,"nbDownloads":11,"excerpt":12,"lang":13,"url":14,"intro":8,"featured":4,"state":15,"author":16,"authorId":17,"datePublication":21,"dateCreation":22,"dateUpdate":23,"mainCategory":24,"categories":40,"metaDatas":46,"imageUrl":47,"imageThumbUrls":48,"id":56},false,"**Artificial Intelligence (AI)** represents a major technological advancement with profound implications in all aspects of modern society. This phenomenon has accelerated with the emergence of generative AIs such as Mistral, ChatGPT, Gemini, etc.\r\n\r\nFrom healthcare to finance, including industry and public services, AI promises considerable benefits.\r\n\r\nHowever, its deployment also raises ethical, social, and legal concerns, leading governments to establish regulations to govern its use.\r\n\r\nThe European Union approved new legislation on **Artificial Intelligence (AI)** in April 2024: the world's first comprehensive law on AI.\r\n\r\n**AI regulation is currently expanding globally**, with diverse approaches depending on legal contexts, priorities (fundamental rights, safety, innovation), and institutional maturity. For instance, in the United States, there are approximately a hundred laws being adopted on various issues (algorithmic discrimination, deepfakes, consumer protection, etc.). 
States like **Colorado and Utah** have already enacted notable laws.\r\n\r\n## What is the AI Act?\r\n\r\nThe **AI Act** (Regulation (EU) 2024/1689) is legislation designed to regulate and promote the development and commercialization of artificial intelligence systems within the European Union.\r\n\r\nProposed by the European Commission in April 2021, the AI Act was adopted after three years of negotiations; it was published in the Official Journal of the European Union on July 12, 2024, and entered into force on August 1, 2024.\r\n\r\nThis initiative aims to **foster the development of responsible AI, ensuring fundamental rights, safety, and ethical principles while encouraging and strengthening AI investment and innovation throughout the EU.**\r\n\r\n{% button href='https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L_202401689' text='Read the full text' role='button' class='btn btn-primary' target='\\_blank' %}\r\n\r\n## Definition of Artificial Intelligence\r\n\r\nArtificial Intelligence (AI) is used to automate tasks, analyze data, make decisions, customize user experiences, and create autonomous systems in various fields such as health, finance, manufacturing, and many others.\r\n\r\nThe development of artificial intelligence involves **the design, training, and optimization of algorithms and computer models to enable a system to simulate human cognitive processes or perform specific tasks autonomously.**\r\n\r\nThe [artificial intelligence system](https://www.dastra.eu/fr/guide/systeme-d-ia/57029) is developed using various learning techniques, the main ones being:\r\n\r\n- **Supervised learning**: in this method, the AI model is trained on a labeled dataset, where each data point is associated with a desired label or output.\r\n- **Unsupervised learning**: the AI model is exposed to unlabeled data and seeks to discover intrinsic structures or patterns within that data.\r\n- **Reinforcement learning**: an agent interacts with a dynamic environment and receives rewards or penalties based on the actions it takes.\r\n\r\n> AI systems can **learn 
and adapt from data**, whereas traditional tools are limited to executing predefined instructions.\r\n>\r\n> Artificial intelligence **is not merely about executing commands:** it involves the ability to reason and adapt based on experience.\r\n>\r\n> **To understand the difference between an AI model and an AI system, [click here.](https://www.dastra.eu/fr/article/difference-entre-un-systeme-dia-et-un-modele-dia/57625)**\r\n\r\n## Why is AI regulation necessary?\r\n\r\nThe AI Regulation aims to build trust in artificial intelligence technologies. While some AI systems present low risks and help address various societal challenges, others pose real risks.\r\n\r\nThus, the requirements of the AI Act focus on:\r\n\r\n- Targeting specific risks associated with AI (errors, cognitive biases, discrimination, or impacts on data protection)\r\n- Banning AI practices that present unacceptable risks\r\n- Defining clear criteria for AI systems used in these applications\r\n- Imposing specific obligations on users and providers of these applications\r\n- Requiring compliance assessments before deploying or commercializing an AI system\r\n- Monitoring rule enforcement after the commercialization of an AI system\r\n- Establishing a governance structure at both European and national levels.\r\n\r\n## Who is affected by the AI Act?\r\n\r\nThe AI Act applies only to systems and use cases governed by EU law. 
It regulates the use of AI systems within the EU, **whether developed within the Union or imported from third countries.**\r\n\r\nStakeholders correspond to all actors involved in the lifecycle of an AI system, namely **providers, deployers, importers, and distributors.**\r\n\r\nHowever, **there are some notable exclusions from the scope of the law** (Article 2), such as:\r\n\r\n- Activities for military, defense, or national security purposes;\r\n- AI systems developed and deployed exclusively for scientific research and development purposes;\r\n- The use of AI systems by individuals for strictly personal and non-professional activities;\r\n- Open-source models under certain conditions.\r\n\r\n## Risk levels\r\n\r\nThe approach adopted by the AI Act is risk-based. The regulatory framework establishes four categories of risk for artificial intelligence systems. The aforementioned stakeholders must ensure compliance with the AI Act requirements according to the risk level.\r\n\r\nThis pertains to application-specific systems, for which risk is assessed based on their concrete use case (e.g., human resources), and not [general-purpose AI models](https://www.dastra.eu/en/guide/general-purpose-ai-gpai-model/59460), which are treated differently due to their ability to perform distinct, sometimes unpredictable tasks.\r\n\r\n### ![](https://static.dastra.eu/richtext/7b65c307-2ed6-43d6-aaf5-3d707df8712c/image-original.png)Unacceptable risks:\r\n\r\nAI systems and models that present an **unacceptable risk** cannot be placed on the market, put into service, or used within the European Union. 
These include AI systems deemed a **clear threat** to safety, livelihoods, and rights of individuals, from government social scoring to toys using voice assistance that encourages dangerous behaviors.\r\n\r\n> Examples: social scoring, widespread biometric identification, deepfakes, content manipulation, etc.\r\n\r\n### High-risk systems:\r\n\r\nAI systems judged to be high-risk form the core of the AI Act's requirements. They can be divided into two categories.\r\n\r\nThe first corresponds to systems that are **integrated into products themselves covered by existing sectoral safety legislation (e.g., the toy safety directive).** A compliance assessment by a notified third party will be mandatory for these systems.\r\n\r\n**The second corresponds to systems encompassed within the domains listed in Annex III of the AI Act, such as:**\r\n\r\n1. Critical infrastructures such as transportation, which could endanger the life and health of citizens.\r\n2. Education and vocational training.\r\n3. Employment, workforce management, and access to self-employment — for example, the use of CV-sorting software in recruitment processes.\r\n\r\n   **Example: Emotion analysis in the workplace** – The use of AI to analyze emotions or to classify employees biometrically is prohibited due to the risks to privacy and discrimination.\r\n4. Essential public and private services.\r\n\r\n   **Example: Social scoring for commercial purposes** – AI cannot be used to evaluate or rank individuals based on social behavior or personal characteristics leading to unfair or discriminatory treatment.\r\n5. **Manipulative AI systems** – Any AI exploiting subliminal techniques to significantly influence a person’s behavior, with a risk of physical or psychological harm. 
Companies must avoid deploying AI systems that manipulate consumers in harmful ways.\r\n\r\n#### High-risk systems must meet stringent obligations before being allowed on the market, including:\r\n\r\n- Implementing adequate risk assessment and mitigation methods.\r\n- Using high-quality datasets to minimize risks and avoid discriminatory outcomes.\r\n- Keeping a log registry to ensure traceability of outcomes.\r\n- Creating detailed documentation providing all necessary information on the system and its purpose, enabling authorities to assess its compliance.\r\n- Providing clear and appropriate information for users.\r\n- Establishing human oversight measures to minimize risks.\r\n- Maintaining a high level of robustness, safety, and accuracy.\r\n\r\nAll remote biometric identification systems are considered high risk and thus subject to stringent requirements. The use of such systems in public spaces for law enforcement purposes is fundamentally prohibited.\r\n\r\nExceptions are allowed under specific and strictly regulated circumstances, such as preventing an imminent terrorist threat.\r\n\r\n> Note: An AI system that was not initially classified as \"high risk\" may later **acquire** this status in several situations:\r\n>\r\n> - **Substantial modification** of its functionalities or mode of operation: in this case, the entity adapting or redeploying the system is considered a **provider** under the AI Act, with all associated obligations;\r\n>\r\n> - **Evolution of processed data**, altering the nature or impact of the system;\r\n>\r\n> - **Reuse in a high-risk use case**, as defined in Annex III of the AI Act.\r\n\r\n### Limited/Moderate Risks:\r\n\r\nLimited or moderate risk refers to dangers associated with the lack of transparency in the use of artificial intelligence.\r\n\r\nThe AI legislation introduces specific transparency obligations to ensure that individuals are informed when necessary, thus enhancing trust.\r\n\r\n> **Example**: When 
interacting with AI systems like chatbots, individuals must be informed that they are communicating with a machine, allowing them to make an informed decision about whether to continue or withdraw from their interaction.\r\n\r\nProviders will thus have to **ensure that AI-generated content is identifiable.** Additionally, texts generated by AI and published to inform the public on matters of general interest must be clearly indicated as generated by artificial intelligence.\r\n\r\nThis requirement also applies to audio and video content that may constitute deepfakes.\r\n\r\n### Minimal or no risks:\r\n\r\nThe law allows for the free use of AI presenting minimal risk. Most AI systems currently in use in the EU fall into this category. There are no specific obligations, but adherence to codes of conduct is encouraged.\r\n\r\n**Example:** Video game bots, anti-spam filters, etc.\r\n\r\n## What are the penalties for non-compliance with the AI Act?\r\n\r\nSignificant penalties, similar to those under the GDPR for non-compliance, are foreseen for violations of the AI Act.\r\n\r\nNon-compliance with the regulation may result in sanctions, including administrative fines. The maximum amounts vary depending on the severity of the infringement and the size of the company.\r\n\r\nIn cases of non-compliance with rules concerning unacceptable risks, fines may reach up to **€35 million or 7% of annual global turnover** (whichever is higher). 
For other violations of the AI Act, fines can amount to **€15 million or 3% of annual global turnover**.\r\n\r\nThe regulation also provides for fines for supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities, which may reach **€7.5 million or 1% of annual global turnover**.\r\n\r\nThis sanction regime will take effect on **2 August 2025**, although the prohibitions under Article 5 will apply earlier, from **2 February 2025**.\r\n\r\nFrom that date, companies breaching these prohibitions may face civil, administrative, or criminal proceedings under other EU laws, such as product liability rules or the GDPR in cases of unlawful processing of personal data.\r\n\r\nNatural and legal persons may lodge a complaint with the relevant market surveillance authority (Article 85) and have the right to an explanation regarding decisions made by AI systems (Article 86).\r\n\r\n## Phased implementation timeline\r\n\r\n- **February 2, 2025:** Implementation of general provisions and the chapter on prohibited practices.\r\n- **August 2, 2025:** Implementation of the chapter concerning notified authorities and notified bodies, the chapter on general-purpose AI, the governance chapter, and the sanctions chapter.\r\n- **August 2, 2026:** General applicability of the regulation.\r\n- **August 2, 2027:** Application of specific obligations related to high-risk AI systems, safety components of products (Article 6 §1).\r\n- **August 2, 2030:** Providers and deployers of high-risk AI systems intended for use by public authorities must comply with the requirements and obligations of the regulation.\r\n\r\n## Governance\r\n\r\nThe AI Act relies on a **two-tier institutional architecture** – national and European – to ensure a harmonized application of the regulations across the Union.\r\n\r\n### 1. 
At the national level: supervisory authorities\r\n\r\nEach member state must designate **national competent authorities** responsible for:\r\n\r\n- Monitoring the market and controlling AI systems,\r\n- Verifying compliance assessments,\r\n- Designating and supervising notified bodies authorized to conduct audits,\r\n- Enforcing sanctions in cases of non-compliance.\r\n\r\nThe designation of the national competent authority is expected by **August 2025**. This authority will need to closely collaborate with the European AI Office to ensure coherence of implementation.\r\n\r\n### 2. At the European level: centralized management\r\n\r\nWithin the European Commission, the **EU AI Office** serves as the main institution for oversight, especially regarding general-purpose AI models.\r\n\r\nIt relies on two advisory bodies:\r\n\r\n- **The AI Board**, which brings together member states, civil society, economic actors, and academics, to inform regulatory directions and integrate a diversity of viewpoints.\r\n\r\n- **The Scientific Advisory Panel**, composed of independent experts, tasked with identifying systemic risks, issuing technical recommendations, and contributing to the definition of classification criteria for models.\r\n\r\n### 3. Objective\r\n\r\nThis system aims to ensure rigorous, transparent, and scientifically-based governance to support businesses and citizens within an ever-evolving regulatory framework.\r\n\r\n## Meeting the requirements of the AI Act with Dastra\r\n\r\nOur [Dastra software](https://www.dastra.eu/fr/product-features/ai-governance) will help you easily establish compliance with the **AI Act regulations**. 
Dastra now includes a comprehensive register of AI systems with integrated risk analysis and mapping of assets, data, and relevant AI models.\r\n\r\n{% button href='https://www.dastra.eu/fr/contacts/demo' text='Talk to an expert' role='button' class='btn btn-primary' target='\\_blank' %}","\u003Cp>\u003Cstrong>Artificial Intelligence (AI)\u003C/strong> represents a major technological advancement with profound implications in all aspects of modern society. This phenomenon has accelerated with the emergence of generative AIs such as Mistral, ChatGPT, Gemini, etc.\u003C/p>\r\n\u003Cp>From healthcare to finance, including industry and public services, AI promises considerable benefits.\u003C/p>\r\n\u003Cp>However, its deployment also raises ethical, social, and legal concerns, leading governments to establish regulations to govern its use.\u003C/p>\r\n\u003Cp>The European Union approved new legislation on \u003Cstrong>Artificial Intelligence (AI)\u003C/strong> in April 2024: the world's first comprehensive law on AI.\u003C/p>\r\n\u003Cp>\u003Cstrong>AI regulation is currently expanding globally\u003C/strong>, with diverse approaches depending on legal contexts, priorities (fundamental rights, safety, innovation), and institutional maturity. For instance, in the United States, there are approximately a hundred laws being adopted on various issues (algorithmic discrimination, deepfakes, consumer protection, etc.). 
States like \u003Cstrong>Colorado and Utah\u003C/strong> have already enacted notable laws.\u003C/p>\r\n\u003Ch2 id=\"what-is-the-ai-act\">What is the AI Act?\u003C/h2>\r\n\u003Cp>The \u003Cstrong>AI Act\u003C/strong> (Regulation (EU) 2024/1689) is legislation designed to regulate and promote the development and commercialization of artificial intelligence systems within the European Union.\u003C/p>\r\n\u003Cp>Proposed by the European Commission in April 2021, the AI Act was adopted after three years of negotiations; it was published in the Official Journal of the European Union on July 12, 2024, and entered into force on August 1, 2024.\u003C/p>\r\n\u003Cp>This initiative aims to \u003Cstrong>foster the development of responsible AI, ensuring fundamental rights, safety, and ethical principles while encouraging and strengthening AI investment and innovation throughout the EU.\u003C/strong>\u003C/p>\r\n\u003Cdiv class=\"content-btn-container\">\u003Ca>\u003C/a>\u003C/div>\r\n\u003Ch2 id=\"definition-of-artificial-intelligence\">Definition of Artificial Intelligence\u003C/h2>\r\n\u003Cp>Artificial Intelligence (AI) is used to automate tasks, analyze data, make decisions, customize user experiences, and create autonomous systems in various fields such as health, finance, manufacturing, and many others.\u003C/p>\r\n\u003Cp>The development of artificial intelligence involves \u003Cstrong>the design, training, and optimization of algorithms and computer models to enable a system to simulate human cognitive processes or perform specific tasks autonomously.\u003C/strong>\u003C/p>\r\n\u003Cp>The \u003Ca href=\"https://www.dastra.eu/fr/guide/systeme-d-ia/57029\">artificial intelligence system\u003C/a> is developed using various learning techniques, the main ones being:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>\u003Cstrong>Supervised learning\u003C/strong>: in this method, the AI model is trained on a labeled dataset, where each data point is associated with a desired label or output.\u003C/li>\r\n\u003Cli>\u003Cstrong>Unsupervised learning\u003C/strong>: the AI model is 
exposed to unlabeled data and seeks to discover intrinsic structures or patterns within that data.\u003C/li>\r\n\u003Cli>\u003Cstrong>Reinforcement learning\u003C/strong>: an agent interacts with a dynamic environment and receives rewards or penalties based on the actions it takes.\u003C/li>\r\n\u003C/ul>\r\n\u003Cblockquote>\r\n\u003Cp>AI systems can \u003Cstrong>learn and adapt from data\u003C/strong>, whereas traditional tools are limited to executing predefined instructions.\u003C/p>\r\n\u003Cp>Artificial intelligence \u003Cstrong>is not merely about executing commands:\u003C/strong> it involves the ability to reason and adapt based on experience. \u003Cbr />\r\n\u003Cbr />\r\n\u003Cstrong>To understand the difference between an AI model and an AI system, \u003Ca href=\"https://www.dastra.eu/fr/article/difference-entre-un-systeme-dia-et-un-modele-dia/57625\">click here.\u003C/a>\u003C/strong>\u003C/p>\r\n\u003C/blockquote>\r\n\u003Ch2 id=\"why-is-ai-regulation-necessary\">Why is AI regulation necessary?\u003C/h2>\r\n\u003Cp>The AI Regulation aims to build trust in artificial intelligence technologies. 
While some AI systems present low risks and help address various societal challenges, others pose real risks.\u003C/p>\r\n\u003Cp>Thus, the requirements of the AI Act focus on:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>Targeting specific risks associated with AI (errors, cognitive biases, discrimination, or impacts on data protection)\u003C/li>\r\n\u003Cli>Banning AI practices that present unacceptable risks\u003C/li>\r\n\u003Cli>Defining clear criteria for AI systems used in these applications\u003C/li>\r\n\u003Cli>Imposing specific obligations on users and providers of these applications\u003C/li>\r\n\u003Cli>Requiring compliance assessments before deploying or commercializing an AI system\u003C/li>\r\n\u003Cli>Monitoring rule enforcement after the commercialization of an AI system\u003C/li>\r\n\u003Cli>Establishing a governance structure at both European and national levels.\u003C/li>\r\n\u003C/ul>\r\n\u003Ch2 id=\"who-is-affected-by-the-ai-act\">Who is affected by the AI Act?\u003C/h2>\r\n\u003Cp>The AI Act applies only to systems and use cases governed by EU law. 
It regulates the use of AI systems within the EU, \u003Cstrong>whether developed within the Union or imported from third countries.\u003C/strong>\u003C/p>\r\n\u003Cp>Stakeholders correspond to all actors involved in the lifecycle of an AI system, namely \u003Cstrong>providers, deployers, importers, and distributors.\u003C/strong>\u003C/p>\r\n\u003Cp>However, \u003Cstrong>there are some notable exclusions from the scope of the law\u003C/strong> (Article 2), such as:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>Activities for military, defense, or national security purposes;\u003C/li>\r\n\u003Cli>AI systems developed and deployed exclusively for scientific research and development purposes;\u003C/li>\r\n\u003Cli>The use of AI systems by individuals for strictly personal and non-professional activities;\u003C/li>\r\n\u003Cli>Open-source models under certain conditions.\u003C/li>\r\n\u003C/ul>\r\n\u003Ch2 id=\"risk-levels\">Risk levels\u003C/h2>\r\n\u003Cp>The approach adopted by the AI Act is risk-based. The regulatory framework establishes four categories of risk for artificial intelligence systems. The aforementioned stakeholders must ensure compliance with the AI Act requirements according to the risk level.\u003C/p>\r\n\u003Cp>This pertains to application-specific systems, for which risk is assessed based on their concrete use case (e.g., human resources), and not \u003Ca href=\"https://www.dastra.eu/en/guide/general-purpose-ai-gpai-model/59460\">general-purpose AI models\u003C/a>, which are treated differently due to their ability to perform distinct, sometimes unpredictable tasks.\u003C/p>\r\n\u003Ch3 id=\"unacceptable-risks\">\u003Cimg loading=\"lazy\"  src=\"https://static.dastra.eu/richtext/7b65c307-2ed6-43d6-aaf5-3d707df8712c/image-original.png\" alt=\"\" />Unacceptable risks:\u003C/h3>\r\n\u003Cp>AI systems and models that present an \u003Cstrong>unacceptable risk\u003C/strong> cannot be placed on the market, put into service, or used within the European Union. 
These include AI systems deemed a \u003Cstrong>clear threat\u003C/strong> to safety, livelihoods, and rights of individuals, from government social scoring to toys using voice assistance that encourages dangerous behaviors.\u003C/p>\r\n\u003Cblockquote>\r\n\u003Cp>Examples: social scoring, widespread biometric identification, deepfakes, content manipulation, etc.\u003C/p>\r\n\u003C/blockquote>\r\n\u003Ch3 id=\"high-risk-systems\">High-risk systems:\u003C/h3>\r\n\u003Cp>AI systems judged to be high-risk form the core of the AI Act's requirements. They can be divided into two categories.\u003C/p>\r\n\u003Cp>The first corresponds to systems that are \u003Cstrong>integrated into products themselves covered by existing sectoral safety legislation (e.g., the toy safety directive).\u003C/strong> A compliance assessment by a notified third party will be mandatory for these systems.\u003C/p>\r\n\u003Cp>\u003Cstrong>The second corresponds to systems encompassed within the domains listed in Annex III of the AI Act, such as:\u003C/strong>\u003C/p>\r\n\u003Col>\r\n\u003Cli>\u003Cp>Critical infrastructures such as transportation, which could endanger the life and health of citizens.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Education and vocational training.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Employment, workforce management, and access to self-employment — for example, the use of CV-sorting software in recruitment processes.\u003C/p>\r\n\u003Cp>\u003Cstrong>Example: Emotion analysis in the workplace\u003C/strong> – The use of AI to analyze emotions or to classify employees biometrically is prohibited due to the risks to privacy and discrimination.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Essential public and private services.\u003C/p>\r\n\u003Cp>\u003Cstrong>Example: Social scoring for commercial purposes\u003C/strong> – AI cannot be used to evaluate or rank individuals based on social behavior or personal characteristics leading to unfair or discriminatory treatment.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>\u003Cstrong>Manipulative AI systems\u003C/strong> – Any AI exploiting subliminal techniques to significantly influence a person’s behavior, with a risk of physical or psychological harm. Companies must avoid deploying AI systems that manipulate consumers in harmful ways.\u003C/p>\r\n\u003C/li>\r\n\u003C/ol>\r\n\u003Ch4 id=\"high-risk-systems-smust-meet-stringent-obligations-before-being-allowed-on-the-market-including\">High-risk systems must meet stringent obligations before being allowed on the market, including:\u003C/h4>\r\n\u003Cul>\r\n\u003Cli>Implementing adequate risk assessment and mitigation methods.\u003C/li>\r\n\u003Cli>Using high-quality datasets to minimize risks and avoid discriminatory outcomes.\u003C/li>\r\n\u003Cli>Keeping a log registry to ensure traceability of outcomes.\u003C/li>\r\n\u003Cli>Creating detailed documentation providing all necessary information on the system and its purpose, enabling authorities to assess its compliance.\u003C/li>\r\n\u003Cli>Providing clear and appropriate information for users.\u003C/li>\r\n\u003Cli>Establishing human oversight measures to minimize risks.\u003C/li>\r\n\u003Cli>Maintaining a high level of robustness, safety, and accuracy.\u003C/li>\r\n\u003C/ul>\r\n\u003Cp>All remote biometric identification systems are considered high risk and thus subject to stringent requirements. 
The use of such systems in public spaces for law enforcement purposes is fundamentally prohibited.\u003C/p>\r\n\u003Cp>Exceptions are allowed under specific and strictly regulated circumstances, such as preventing an imminent terrorist threat.\u003C/p>\r\n\u003Cblockquote>\r\n\u003Cp>Note: An AI system that was not initially classified as \"high risk\" may later \u003Cstrong>acquire\u003C/strong> this status in several situations:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>\u003Cp>\u003Cstrong>Substantial modification\u003C/strong> of its functionalities or mode of operation: in this case, the entity adapting or redeploying the system is considered a \u003Cstrong>provider\u003C/strong> under the AI Act, with all associated obligations;\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>\u003Cstrong>Evolution of processed data\u003C/strong>, altering the nature or impact of the system;\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>\u003Cstrong>Reuse in a high-risk use case\u003C/strong>, as defined in Annex III of the AI Act.\u003C/p>\r\n\u003C/li>\r\n\u003C/ul>\r\n\u003C/blockquote>\r\n\u003Ch3 id=\"limitedmoderate-risks\">Limited/Moderate Risks:\u003C/h3>\r\n\u003Cp>Limited or moderate risk refers to dangers associated with the lack of transparency in the use of artificial intelligence.\u003C/p>\r\n\u003Cp>The AI legislation introduces specific transparency obligations to ensure that individuals are informed when necessary, thus enhancing trust.\u003C/p>\r\n\u003Cblockquote>\r\n\u003Cp>\u003Cstrong>Example\u003C/strong>: When interacting with AI systems like chatbots, individuals must be informed that they are communicating with a machine, allowing them to make an informed decision about whether to continue or withdraw from their interaction.\u003C/p>\r\n\u003C/blockquote>\r\n\u003Cp>Providers will thus have to \u003Cstrong>ensure that AI-generated content is identifiable.\u003C/strong> Additionally, texts generated by AI and published to inform the public on matters of general 
interest must be clearly indicated as generated by artificial intelligence.\u003C/p>\r\n\u003Cp>This requirement also applies to audio and video content that may constitute deepfakes.\u003C/p>\r\n\u003Ch3 id=\"minimal-or-no-risks\">Minimal or no risks:\u003C/h3>\r\n\u003Cp>The law allows for the free use of AI presenting minimal risk. Most AI systems currently in use in the EU fall into this category. There are no specific obligations, but adherence to codes of conduct is encouraged.\u003C/p>\r\n\u003Cp>\u003Cstrong>Example:\u003C/strong> Video game bots, anti-spam filters, etc.\u003C/p>\r\n\u003Ch2 id=\"what-are-the-penalties-for-non-compliance-with-the-ai-act\">What are the penalties for non-compliance with the AI Act?\u003C/h2>\r\n\u003Cp>Significant penalties, similar to those under the GDPR for non-compliance, are foreseen for violations of the AI Act.\u003C/p>\r\n\u003Cp>Non-compliance with the regulation may result in sanctions, including administrative fines. The maximum amounts vary depending on the severity of the infringement and the size of the company.\u003C/p>\r\n\u003Cp>In cases of non-compliance with rules concerning unacceptable risks, fines may reach up to \u003Cstrong>€35 million or 7% of annual global turnover\u003C/strong> (whichever is higher). 
For other violations of the AI Act, fines can amount to \u003Cstrong>€15 million or 3% of annual global turnover\u003C/strong>.\u003C/p>\r\n\u003Cp>The regulation also provides for fines for supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities, which may reach \u003Cstrong>€7.5 million or 1% of annual global turnover\u003C/strong>.\u003C/p>\r\n\u003Cp>This sanction regime will take effect on \u003Cstrong>2 August 2025\u003C/strong>, although the prohibitions under Article 5 will apply earlier, from \u003Cstrong>2 February 2025\u003C/strong>.\u003C/p>\r\n\u003Cp>From that date, companies breaching these prohibitions may face civil, administrative, or criminal proceedings under other EU laws, such as product liability rules or the GDPR in cases of unlawful processing of personal data.\u003C/p>\r\n\u003Cp>Natural and legal persons may lodge a complaint with the relevant market surveillance authority (Article 85) and have the right to an explanation regarding decisions made by AI systems (Article 86).\u003C/p>\r\n\u003Ch2 id=\"phased-implementation-timeline\">Phased implementation timeline\u003C/h2>\r\n\u003Cul>\r\n\u003Cli>\u003Cstrong>February 2, 2025:\u003C/strong> Implementation of general provisions and the chapter on prohibited practices.\u003C/li>\r\n\u003Cli>\u003Cstrong>August 2, 2025:\u003C/strong> Implementation of the chapter concerning notified authorities and notified bodies, the chapter on general-purpose AI, the governance chapter, and the sanctions chapter.\u003C/li>\r\n\u003Cli>\u003Cstrong>August 2, 2026:\u003C/strong> General applicability of the regulation.\u003C/li>\r\n\u003Cli>\u003Cstrong>August 2, 2027:\u003C/strong> Application of specific obligations related to high-risk AI systems, safety components of products (Article 6 §1).\u003C/li>\r\n\u003Cli>\u003Cstrong>August 2, 2030:\u003C/strong> Providers and deployers of high-risk AI systems intended for use by public authorities must comply with the requirements and 
obligations of the regulation.\u003C/li>\r\n\u003C/ul>\r\n\u003Ch2 id=\"governance\">Governance\u003C/h2>\r\n\u003Cp>The AI Act relies on a \u003Cstrong>two-tier institutional architecture\u003C/strong> – national and European – to ensure a harmonized application of the regulations across the Union.\u003C/p>\r\n\u003Ch3 id=\"at-the-national-level-supervisory-authorities\">1. At the national level: supervisory authorities\u003C/h3>\r\n\u003Cp>Each member state must designate \u003Cstrong>national competent authorities\u003C/strong> responsible for:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>Monitoring the market and controlling AI systems,\u003C/li>\r\n\u003Cli>Verifying compliance assessments,\u003C/li>\r\n\u003Cli>Designating and supervising notified bodies authorized to conduct audits,\u003C/li>\r\n\u003Cli>Enforcing sanctions in cases of non-compliance.\u003C/li>\r\n\u003C/ul>\r\n\u003Cp>The designation of the national competent authority is expected by \u003Cstrong>August 2025\u003C/strong>. This authority will need to closely collaborate with the European AI Office to ensure coherence of implementation.\u003C/p>\r\n\u003Ch3 id=\"at-the-european-level-centralized-management\">2. 
At the European level: centralized management\u003C/h3>\r\n\u003Cp>Within the European Commission, the \u003Cstrong>EU AI Office\u003C/strong> serves as the main institution for oversight, especially regarding general-purpose AI models.\u003C/p>\r\n\u003Cp>It relies on two advisory bodies:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>\u003Cp>\u003Cstrong>The AI Board\u003C/strong>, which brings together member states, civil society, economic actors, and academics to inform regulatory directions and integrate a diversity of viewpoints.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>\u003Cstrong>The Scientific Advisory Panel\u003C/strong>, composed of independent experts, tasked with identifying systemic risks, issuing technical recommendations, and contributing to the definition of classification criteria for models.\u003C/p>\r\n\u003C/li>\r\n\u003C/ul>\r\n\u003Ch3 id=\"objective\">3. Objective\u003C/h3>\r\n\u003Cp>This system aims to ensure rigorous, transparent, and science-based governance to support businesses and citizens within an ever-evolving regulatory framework.\u003C/p>\r\n\u003Ch2 id=\"meeting-the-requirements-of-the-ai-act-with-dastra\">Meeting the requirements of the AI Act with Dastra\u003C/h2>\r\n\u003Cp>Our \u003Ca href=\"https://www.dastra.eu/fr/product-features/ai-governance\">Dastra software\u003C/a> will help you easily establish compliance with the \u003Cstrong>AI Act\u003C/strong>. 
Dastra now includes a comprehensive register of AI systems with integrated risk analysis and mapping of the related assets, data, and AI models.\u003C/p>\r\n\u003Cdiv class=\"content-btn-container\">\u003Ca>\u003C/a>\u003C/div>\r\n","AI Act: key points of the regulation ","Understand the AI Act in minutes: essential points of the EU’s new AI regulation and what it means for organisations.",2054,11,0,null,"en","ai-act-key-points-of-the-regulation-at-a-glance","Published",{"id":17,"displayName":18,"avatarUrl":19,"bio":12,"blogUrl":12,"color":12,"userId":17,"creationDate":20},20352,"Leïla Sayssa","https://static.dastra.eu/tenant-3/avatar/20352/TDYeY3C8Rz1lLE/dpo-avatar-h01-150.png","2025-03-03T11:08:22","2025-08-22T13:31:00","2025-08-24T13:31:11.4925728","2025-08-25T11:13:55.7266038",{"id":25,"name":26,"description":27,"url":28,"color":29,"parentId":12,"count":12,"imageUrl":12,"parent":12,"order":11,"translations":30},2,"Blog","A list of curated articles provided by the community","article","#28449a",[31,34,37],{"lang":32,"name":26,"description":33},"fr","Une liste d'articles rédigés par la communauté",{"lang":35,"name":26,"description":36},"es","Una lista de artículos escritos por la comunidad",{"lang":38,"name":26,"description":39},"de","Eine Liste von Artikeln, die von der Community verfasst 
wurden",[41],{"id":25,"name":26,"description":27,"url":28,"color":29,"parentId":12,"count":12,"imageUrl":12,"parent":12,"order":11,"translations":42},[43,44,45],{"lang":32,"name":26,"description":33},{"lang":35,"name":26,"description":36},{"lang":38,"name":26,"description":39},[],"https://static.dastra.eu/content/271df420-e9c5-4f7d-be2a-bc1c26b6a8db/visuel-article-26-original.jpg",[49,50,51,52,53,54,55],"https://static.dastra.eu/content/271df420-e9c5-4f7d-be2a-bc1c26b6a8db/visuel-article-26-1000.webp","https://static.dastra.eu/content/271df420-e9c5-4f7d-be2a-bc1c26b6a8db/visuel-article-26.webp","https://static.dastra.eu/content/271df420-e9c5-4f7d-be2a-bc1c26b6a8db/visuel-article-26-1500.webp","https://static.dastra.eu/content/271df420-e9c5-4f7d-be2a-bc1c26b6a8db/visuel-article-26-800.webp","https://static.dastra.eu/content/271df420-e9c5-4f7d-be2a-bc1c26b6a8db/visuel-article-26-600.webp","https://static.dastra.eu/content/271df420-e9c5-4f7d-be2a-bc1c26b6a8db/visuel-article-26-300.webp","https://static.dastra.eu/content/271df420-e9c5-4f7d-be2a-bc1c26b6a8db/visuel-article-26-100.webp",59538]