The recent rise of artificial intelligence is no longer limited to systems able to analyze or generate information. A new generation of tools, described as “agentic artificial intelligence”, goes a step further: these systems are capable of planning, making decisions and executing actions autonomously, by interacting with various services, databases and digital environments.
For example, in the context of a business trip, an agent can detect an upcoming trip in a calendar and proactively initiate bookings by interacting with third‑party services.
The agent does more than respond to a request: it can anticipate situations, detect changes in its environment and initiate actions itself to achieve the objective assigned to it.
This technological evolution opens significant opportunities for automating many organizational processes, including those involving processing of personal data.
However, this capacity for autonomous action profoundly changes the nature of risks to data protection. Unlike traditional AI systems, agents can simultaneously access multiple sources of information, retain persistent memories and perform automated actions, which complicates the traceability of processing activities and control over personal data flows.
The Spanish Data Protection Agency (AEPD) published, in February 2026, a guide dedicated to agentic AI, urging organizations to treat the integration of these systems not as the adoption of a mere technological tool but as a transformation of data processing workflows requiring strengthened governance.
Agentic artificial intelligence and personal data protection
Agentic AI directly impacts how personal information is collected, used and monitored:
- Access to unstructured data: agents can autonomously access emails, meeting minutes, internal documents or customer databases to enrich their context and make more relevant decisions. This level of access introduces a significant risk of violating the data minimization principle (Art. 5(1)(c) GDPR).
For example, in a system of five agents tasked with finding hotels, it would be technically possible for the AI to consult irrelevant information (such as internal customer preferences or unrelated exchanges) simply to improve its context.
This situation makes it difficult to demonstrate to a supervisory authority that only the data strictly necessary were used.
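One way to make such a demonstration possible is to restrict each agent task to an explicit allowlist of data categories, so that any out-of-scope request fails and leaves a trace. A minimal sketch in Python, in which the names `TASK_ALLOWLIST` and `fetch_context` are hypothetical:

```python
# Illustrative sketch: each agent task may only pull data categories
# from a pre-declared allowlist; anything else is refused and loggable.

TASK_ALLOWLIST = {
    # Hypothetical task: booking a hotel only needs these categories.
    "book_hotel": {"trip_dates", "destination", "hotel_budget"},
}

def fetch_context(task: str, requested: set[str]) -> set[str]:
    """Return only the categories the task is allowed to use;
    refuse the request if it reaches beyond the allowlist."""
    allowed = TASK_ALLOWLIST.get(task, set())
    excess = requested - allowed
    if excess:
        raise PermissionError(f"Task '{task}' may not access: {sorted(excess)}")
    return requested
```

The refusal itself can be recorded, giving the organization evidence that only the necessary data categories were reachable for a given mission.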
- Automated decisions: increased autonomy can lead to decisions, without human intervention, that have legal or similarly significant effects on individuals (Art. 22 GDPR). The risk lies in the difficulty of controlling and demonstrating that these decisions do not have an adverse or discriminatory impact, especially when multiple agents collaborate and interact with numerous data sources.
For example, in an automated recruitment process, an agent could evaluate applications and automatically reject certain profiles based on criteria analyzed across different internal systems (CVs, tests, interview notes) and external sources (social networks, public recommendations), thereby disadvantaging some candidates.
- Confidentiality and agent initiative: agent autonomy can generate specific risks to data confidentiality. Agents’ ability to act proactively, without constant human supervision, makes it difficult to anticipate, control and trace data exchanges. This exposes organizations to confidentiality breaches.
For example, an agent might deem that a third‑party service offers the ideal tool to process information and decide to automatically transfer internal company files to unknown external servers via unaudited APIs.
- Agent memory: there is a risk of unintended retention and reuse of data. AI agents maintain several types of memory:
- Management memory: logs of the agent’s activity and actions.
- Working memory: semantic (information updates), episodic (event archive) and procedural (rules for executing tasks).
This layered memory creates a specific risk in terms of personal data protection.
For example, if an agent receives a mission involving health data and its persistent memory retains that information, the agent may later reuse that data for a completely different task, without consent and without any link to the initial purpose.
Faced with the specific risks introduced by agentic AI, several recommendations emerge from the guide: integrate these systems into information governance, anticipate biases and errors, limit data access, structure metadata, and compartmentalize agents’ memories.
These requirements call for tools able to structure, document and manage the processing of personal data.
How can a governance tool like DASTRA help address these challenges?
Several features of DASTRA can help respond to the issues raised:
- Integration into data governance: AI agents can be integrated into the record of processing activities, allowing documentation of their purposes, categories of data used, information sources and recipients. This mapping improves visibility over the data flows generated by these systems.
- Documentation and traceability of risks: data protection impact assessments (DPIAs) can be used to identify and document risks specific to agentic AI, such as decision‑making autonomy, persistent memory or interactions with third‑party services.
- Access and data flow management: by documenting roles, responsibilities and categories of data access, governance can formalize access policies for information processed by agents and identify sensitive points in data flows.
- Data and metadata cataloging: a structured data mapping makes it easier to identify the sources used by agents, the types of data processed and their sensitivity, which is essential when these systems interact with multiple information repositories.
- Implementing protective measures: measures such as pseudonymization, purpose limitation or data compartmentalization can be documented, monitored and audited.
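As one illustration of such a measure, direct identifiers handed to an agent can be pseudonymized with a keyed hash before processing: records stay linkable for legitimate internal purposes, while the raw identifier is withheld from the agent. A minimal sketch, assuming the secret key is managed outside the agent's reach (e.g. in a key management service):

```python
import hashlib
import hmac

# Illustrative sketch: replace a direct identifier with a keyed hash.
# The key must live outside the agent so the mapping cannot be
# reversed from anything retained in the agent's memory.
SECRET_KEY = b"stored-in-a-key-management-service"  # placeholder value

def pseudonymize(identifier: str) -> str:
    """Deterministic pseudonym: same input, same output, raw value hidden."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()
```

Because the output is deterministic, the same person can be recognized across the agent's tasks when that linkage is lawful, without the agent ever holding the identifier itself; rotating the key severs the linkage.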
