[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article_59541":3},{"tableOfContents":4,"markDownContent":5,"htmlContent":6,"metaTitle":7,"metaDescription":8,"wordCount":9,"readTime":10,"title":7,"nbDownloads":11,"excerpt":12,"lang":13,"url":14,"intro":15,"featured":4,"state":16,"author":17,"authorId":18,"datePublication":22,"dateCreation":23,"dateUpdate":24,"mainCategory":25,"categories":41,"metaDatas":67,"imageUrl":68,"imageThumbUrls":69,"id":77},false,"Since February 2, 2025, the AI Act requires providers and deployers of artificial intelligence systems to ensure that their personnel and users of these systems have a sufficient level of knowledge and a good understanding of AI [(Article 4 of the AI Act).](https://artificialintelligenceact.eu/fr/article/4/#:~:text=Article%204%20%3A%20Ma%C3%AEtrise%20de%20l'IA,-Date%20d'entr%C3%A9e&text=Cet%20article%20indique%20que%20les,bien%20form%C3%A9s%20%C3%A0%20l'IA.)\r\n\r\nThis requirement varies based on technical skills, experience, level of education, and the context of AI system use, as well as the individuals or groups involved.\r\n\r\n## What are the consequences of failing to meet AI literacy obligations?\r\n\r\nThe obligation to promote AI literacy under Article 4 of the AI Act has been in force since **February 2, 2025**. However, enforcement by competent national authorities will only begin in **August 2025**, as Member States have until that date to formally designate these authorities.\r\n\r\nAlthough Article 4 does not attach explicit fines to AI literacy, regulators may treat **non-compliance as an aggravating factor** in broader investigations, particularly where organizations fail to demonstrate due diligence in areas such as bias management. 
Conversely, evidence of even basic training programs can strengthen a company’s defense during audits or litigation.\r\n\r\nFrom **August 2, 2026**, when the penalty regime takes effect, providers and deployers of AI systems risk **civil liability** if the absence of adequate training leads to harm suffered by consumers, business partners, or third parties.\r\n\r\n## What can organizations do right now?\r\n\r\nAI proficiency should be seen as a **core governance tool**, not just a compliance checkbox. It is about ensuring that employees understand the risks of uncontrolled AI use:\r\n\r\n- sensitive data exposed to external platforms, sometimes located in foreign jurisdictions;\r\n\r\n- data transfers to foreign territories without adequate oversight;\r\n\r\n- increased exposure to data breaches and litigation;\r\n\r\n- reputational damage in the event of public incidents and loss of customer trust;\r\n\r\n- blind spots in risk management, with traceability and auditing becoming impossible when using uncontrolled tools.\r\n\r\nThere is no one-size-fits-all model. **Training content must vary according to roles, levels of responsibility, and specific use cases.**\r\n\r\nThe European Commission emphasizes a **risk-proportionate approach:** the more critical or sensitive the system, the more thorough, structured, and supervised the training must be. 
What matters most is that each audience receives sufficient and relevant information to properly manage the use of AI.\r\n\r\n> The 'Living repository' of the AI Office supports the implementation of Article 4 by sharing examples and practices.\r\n>\r\n> While **using these examples does not automatically establish compliance,** they encourage learning and consistency across the market.\r\n\r\n## Practical steps to improve AI proficiency\r\n\r\n- **Assess training needs**: Audit existing programs to identify gaps in knowledge.\r\n\r\n- **Adopt a tiered approach**: Provide baseline training to all employees, then introduce role-specific modules.\r\n\r\n  - Developers: spotting bias in code.\r\n\r\n  - Executives: interpreting AI risk reports.\r\n\r\n  - Sales teams: knowing what *not* to promise to clients.\r\n\r\n- **Run crisis simulations**: e.g., *“Our chatbot leaked customer data—what do we do?”*\r\n\r\n- **Document initiatives**: Keep thorough records of all training to support accountability in audits.\r\n\r\n## Risks of poor AI literacy: Shadow AI\r\n\r\nWithout adequate literacy, organizations face the rise of **Shadow AI**—the unauthorized use of AI tools by employees without oversight from IT, legal, or compliance teams. 
This mirrors Shadow IT but comes with **AI-specific risks**:\r\n\r\n- leaks of sensitive data through unsecured external tools,\r\n\r\n- unauthorized cross-border data transfers,\r\n\r\n- increased exposure to breaches, litigation, and reputational harm.\r\n\r\nShadow AI is an **early warning signal of a gap** between the speed of AI innovation and organizational governance.\r\n\r\n> Real-world examples:\r\n>\r\n> - **Internal security incident**: Samsung saw its proprietary code leak after engineers shared it with ChatGPT.\r\n>\r\n> - **Liability deficits**: a large law firm had to publish AI-use guidelines after some lawyers were unable to justify their sources during AI-assisted legal research.\r\n\r\n## How to assess and manage shadow AI\r\n\r\n**Robust governance is essential to tackle Shadow AI.**\r\n\r\n{% button href=\"https://www.dastra.eu/en/guide/how-to-get-started-with-ai-governance/59299\" text=\"Not sure where to start? Click here! \" target=\"\\_blank\" role=\"button\" class=\"btn btn-primary\" %}\r\n\r\nHere are a few helpful measures against Shadow AI:\r\n\r\n1. **Launch a confidential survey among your employees** with key questions (*What AI tools do you use? What types of data do you share? How do you integrate AI results into your deliverables?*). Allow a disclosure period without penalties.\r\n\r\n2. **Engage with departments**: meet with managers to identify the tools in use, approval processes, and the perceived value of AI.\r\n\r\n3. **Establish graded access zones**:\r\n\r\n   - Green zone: non-sensitive data, pre-approved tools;\r\n\r\n   - Yellow zone: prior review required;\r\n\r\n   - Red zone: strict prohibition (e.g., fully autonomous decision-making systems). Access to certain zones should be contingent on mandatory prior training.\r\n\r\n4. 
**Provide approved alternatives**: offer secure, validated tools to reduce unauthorized usage;\r\n\r\n5. **Pilot programs**: Start with one department, empower “AI champions,” then scale organization-wide.\r\n\r\n6. Involve **lawyers and compliance officers from the design stage** of projects;\r\n\r\n7. **Develop analytical tools**: monitor the adoption, compliance, and business impact of AI within the organization.\r\n\r\n---\r\n\r\n**Bottom line**: Shadow AI highlights a growing gap between the speed of artificial intelligence adoption and companies’ ability to properly regulate its use. Without clear policies, training, and secure solutions, innovation develops in the shadows, exposing organizations to increasingly critical legal, financial, reputational, and operational risks.","\u003Cp>Since February 2, 2025, the AI Act requires providers and deployers of artificial intelligence systems to ensure that their personnel and users of these systems have a sufficient level of knowledge and a good understanding of AI \u003Ca href=\"https://artificialintelligenceact.eu/fr/article/4/#:%7E:text=Article%204%20%3A%20Ma%C3%AEtrise%20de%20l%27IA,-Date%20d%27entr%C3%A9e&amp;text=Cet%20article%20indique%20que%20les,bien%20form%C3%A9s%20%C3%A0%20l%27IA.\" rel=\"nofollow\">(Article 4 of the AI Act).\u003C/a>\u003C/p>\r\n\u003Cp>This requirement varies based on technical skills, experience, level of education, and the context of AI system use, as well as the individuals or groups involved.\u003C/p>\r\n\u003Ch2 id=\"what-are-the-consequences-of-failing-to-meet-ai-literacy-obligations\">What are the consequences of failing to meet AI literacy obligations?\u003C/h2>\r\n\u003Cp>The obligation to promote AI literacy under Article 4 of the AI Act has been in force since \u003Cstrong>February 2, 2025\u003C/strong>. 
However, enforcement by competent national authorities will only begin in \u003Cstrong>August 2025\u003C/strong>, as Member States have until that date to formally designate these authorities.\u003C/p>\r\n\u003Cp>Although Article 4 does not attach explicit fines to AI literacy, regulators may treat \u003Cstrong>non-compliance as an aggravating factor\u003C/strong> in broader investigations, particularly where organizations fail to demonstrate due diligence in areas such as bias management. Conversely, evidence of even basic training programs can strengthen a company’s defense during audits or litigation.\u003C/p>\r\n\u003Cp>From \u003Cstrong>August 2, 2026\u003C/strong>, when the penalty regime takes effect, providers and deployers of AI systems risk \u003Cstrong>civil liability\u003C/strong> if the absence of adequate training leads to harm suffered by consumers, business partners, or third parties.\u003C/p>\r\n\u003Ch2 id=\"what-can-organizations-do-right-now\">What can organizations do right now?\u003C/h2>\r\n\u003Cp>AI proficiency should be seen as a \u003Cstrong>core governance tool\u003C/strong>, not just a compliance checkbox. It is about ensuring that employees understand the risks of uncontrolled AI use:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>\u003Cp>sensitive data exposed to external platforms, sometimes located in foreign jurisdictions;\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>data transfers to foreign territories without adequate oversight;\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>increased exposure to data breaches and litigation;\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>reputational damage in the event of public incidents and loss of customer trust;\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>blind spots in risk management, with traceability and auditing becoming impossible when using uncontrolled tools.\u003C/p>\r\n\u003C/li>\r\n\u003C/ul>\r\n\u003Cp>There is no one-size-fits-all model. 
\u003Cstrong>Training content must vary according to roles, levels of responsibility, and specific use cases.\u003C/strong>\u003C/p>\r\n\u003Cp>The European Commission emphasizes a \u003Cstrong>risk-proportionate approach:\u003C/strong> the more critical or sensitive the system, the more thorough, structured, and supervised the training must be. What matters most is that each audience receives sufficient and relevant information to properly manage the use of AI.\u003C/p>\r\n\u003Cblockquote>\r\n\u003Cp>The 'Living repository' of the AI Office supports the implementation of Article 4 by sharing examples and practices.\u003C/p>\r\n\u003Cp>While \u003Cstrong>using these examples does not automatically establish compliance,\u003C/strong> they encourage learning and consistency across the market.\u003C/p>\r\n\u003C/blockquote>\r\n\u003Ch2 id=\"practical-steps-to-improve-ai-proficiency\">Practical steps to improve AI proficiency\u003C/h2>\r\n\u003Cul>\r\n\u003Cli>\u003Cp>\u003Cstrong>Assess training needs\u003C/strong>: Audit existing programs to identify gaps in knowledge.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>\u003Cstrong>Adopt a tiered approach\u003C/strong>: Provide baseline training to all employees, then introduce role-specific modules.\u003C/p>\r\n\u003Cul>\r\n\u003Cli>\u003Cp>Developers: spotting bias in code.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Executives: interpreting AI risk reports.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Sales teams: knowing what \u003Cem>not\u003C/em> to promise to clients.\u003C/p>\r\n\u003C/li>\r\n\u003C/ul>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>\u003Cstrong>Run crisis simulations\u003C/strong>: e.g., \u003Cem>“Our chatbot leaked customer data—what do we do?”\u003C/em>\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>\u003Cstrong>Document initiatives\u003C/strong>: Keep thorough records of all training to support accountability in audits.\u003C/p>\r\n\u003C/li>\r\n\u003C/ul>\r\n\u003Ch2 
id=\"risks-of-poor-ai-literacy-shadow-ai\">Risks of poor AI literacy: Shadow AI\u003C/h2>\r\n\u003Cp>Without adequate literacy, organizations face the rise of \u003Cstrong>Shadow AI\u003C/strong>—the unauthorized use of AI tools by employees without oversight from IT, legal, or compliance teams. This mirrors Shadow IT but comes with \u003Cstrong>AI-specific risks\u003C/strong>:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>\u003Cp>leaks of sensitive data through unsecured external tools,\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>unauthorized cross-border data transfers,\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>increased exposure to breaches, litigation, and reputational harm.\u003C/p>\r\n\u003C/li>\r\n\u003C/ul>\r\n\u003Cp>Shadow AI is an \u003Cstrong>early warning signal of a gap\u003C/strong> between the speed of AI innovation and organizational governance.\u003C/p>\r\n\u003Cblockquote>\r\n\u003Cp>Real-world examples:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>\u003Cp>\u003Cstrong>Internal security incident\u003C/strong>: Samsung saw its proprietary code leak after engineers shared it with ChatGPT.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>\u003Cstrong>Liability deficits\u003C/strong>: a large law firm had to publish AI-use guidelines after some lawyers were unable to justify their sources during AI-assisted legal research.\u003C/p>\r\n\u003C/li>\r\n\u003C/ul>\r\n\u003C/blockquote>\r\n\u003Ch2 id=\"how-to-assess-and-manage-shadow-ai\">How to assess and manage shadow AI\u003C/h2>\r\n\u003Cp>\u003Cstrong>Robust governance is essential to tackle Shadow AI.\u003C/strong>\u003C/p>\r\n\u003Cdiv class=\"content-btn-container\">\u003Ca href=\"https://www.dastra.eu/en/guide/how-to-get-started-with-ai-governance/59299\" target=\"_blank\" role=\"button\" class=\"btn btn-primary\">Not sure where to start? Click here! 
\u003C/a>\u003C/div>\r\n\u003Cp>Here are a few helpful measures against Shadow AI:\u003C/p>\r\n\u003Col>\r\n\u003Cli>\u003Cp>\u003Cstrong>Launch a confidential survey among your employees\u003C/strong> with key questions (\u003Cem>What AI tools do you use? What types of data do you share? How do you integrate AI results into your deliverables?\u003C/em>). Allow a disclosure period without penalties.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>\u003Cstrong>Engage with departments\u003C/strong>: meet with managers to identify the tools in use, approval processes, and the perceived value of AI.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>\u003Cstrong>Establish graded access zones\u003C/strong>:\u003C/p>\r\n\u003Cul>\r\n\u003Cli>\u003Cp>Green zone: non-sensitive data, pre-approved tools;\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Yellow zone: prior review required;\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Red zone: strict prohibition (e.g., fully autonomous decision-making systems).\u003Cbr />\r\n\u003Cbr />\r\nAccess to certain zones should be contingent on mandatory prior training.\u003C/p>\r\n\u003C/li>\r\n\u003C/ul>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>\u003Cstrong>Provide approved alternatives\u003C/strong>: offer secure, validated tools to reduce unauthorized usage;\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>\u003Cstrong>Pilot programs\u003C/strong>: Start with one department, empower “AI champions,” then scale organization-wide.\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>Involve \u003Cstrong>lawyers and compliance officers from the design stage\u003C/strong> of projects;\u003C/p>\r\n\u003C/li>\r\n\u003Cli>\u003Cp>\u003Cstrong>Develop analytical tools\u003C/strong>: monitor the adoption, compliance, and business impact of AI within the organization.\u003C/p>\r\n\u003C/li>\r\n\u003C/ol>\r\n\u003Chr />\r\n\u003Cp>\u003Cstrong>Bottom line\u003C/strong>: Shadow AI highlights a growing gap between the speed of artificial intelligence adoption and 
companies’ ability to properly regulate its use. Without clear policies, training, and secure solutions, innovation develops in the shadows, exposing organizations to increasingly critical legal, financial, reputational, and operational risks.\u003C/p>\r\n","AI literacy: the weapon against Shadow AI","Shadow AI exposes businesses to legal, ethical, and security risks. Learn how AI literacy and strong governance frameworks help prevent misuse and ensure respon",917,5,0,null,"en","ai-literacy-the-weapon-against-shadow-ai","Shadow AI exposes businesses to legal, ethical, and security risks. Learn how AI literacy and strong governance frameworks help prevent misuse and ensure responsible adoption.","Published",{"id":18,"displayName":19,"avatarUrl":20,"bio":12,"blogUrl":12,"color":12,"userId":18,"creationDate":21},20352,"Leïla Sayssa","https://static.dastra.eu/tenant-3/avatar/20352/TDYeY3C8Rz1lLE/dpo-avatar-h01-150.png","2025-03-03T11:08:22","2025-08-27T08:00:00","2025-08-25T11:55:42.8650746","2025-11-21T13:21:07.2946817",{"id":26,"name":27,"description":28,"url":29,"color":30,"parentId":12,"count":12,"imageUrl":12,"parent":12,"order":11,"translations":31},2,"Blog","A list of curated articles provided by the community","article","#28449a",[32,35,38],{"lang":33,"name":27,"description":34},"fr","Une liste d'articles rédigés par la communauté",{"lang":36,"name":27,"description":37},"es","Una lista de artículos escritos por la comunidad",{"lang":39,"name":27,"description":40},"de","Eine Liste von Artikeln, die von der Community verfasst wurden",[42,47],{"id":26,"name":27,"description":28,"url":29,"color":30,"parentId":12,"count":12,"imageUrl":12,"parent":12,"order":11,"translations":43},[44,45,46],{"lang":33,"name":27,"description":34},{"lang":36,"name":27,"description":37},{"lang":39,"name":27,"description":40},{"id":48,"name":49,"description":50,"url":51,"color":52,"parentId":26,"count":12,"imageUrl":12,"parent":53,"order":10,"translations":58},69,"Expertise","Gain 
insights from our experts on GDPR compliance, data protection, and privacy challenges. In-depth articles, professional analysis, and real-world best practices.","indepth","#000000",{"id":26,"name":27,"description":28,"url":29,"color":30,"parentId":12,"count":12,"imageUrl":12,"parent":12,"order":11,"translations":54},[55,56,57],{"lang":33,"name":27,"description":34},{"lang":36,"name":27,"description":37},{"lang":39,"name":27,"description":40},[59,61,64],{"lang":33,"name":49,"description":60},"Bénéficiez des conseils de nos experts sur la conformité RGPD, la protection des données et les enjeux privacy. Articles de fond, analyses et retours d’expérience métier.",{"lang":39,"name":62,"description":63},"Fachwissen","Entdecken Sie die Artikel unserer DSGVO-Experten",{"lang":36,"name":65,"description":66},"Experiencia","Descubre los artículos de nuestros expertos en Privacy",[],"https://static.dastra.eu/content/76cc5e10-554e-46d4-bed8-356397bac79d/visuel-article-32-original.jpg",[70,71,72,73,74,75,76],"https://static.dastra.eu/content/76cc5e10-554e-46d4-bed8-356397bac79d/visuel-article-32-1000.webp","https://static.dastra.eu/content/76cc5e10-554e-46d4-bed8-356397bac79d/visuel-article-32.webp","https://static.dastra.eu/content/76cc5e10-554e-46d4-bed8-356397bac79d/visuel-article-32-1500.webp","https://static.dastra.eu/content/76cc5e10-554e-46d4-bed8-356397bac79d/visuel-article-32-800.webp","https://static.dastra.eu/content/76cc5e10-554e-46d4-bed8-356397bac79d/visuel-article-32-600.webp","https://static.dastra.eu/content/76cc5e10-554e-46d4-bed8-356397bac79d/visuel-article-32-300.webp","https://static.dastra.eu/content/76cc5e10-554e-46d4-bed8-356397bac79d/visuel-article-32-100.webp",59541]