Progress on the Digital Omnibus legislative reform is beginning to show that while the EU is not seeking to radically rewrite the data management rulebook, the selective adjustments proposed are likely to have a noticeable effect.
The legislative process has moved fast. On 5 February 2026, the European Parliament's IMCO-LIBE joint committees adopted their Draft Report on the AI Omnibus, shifting the debate away from Commission-led flexibility and toward fixed deadlines and clearer safeguards. In parallel, the EDPB/EDPS Joint Opinions of 20 January 2026 (AI Omnibus) and 11 February 2026 (GDPR elements), together with reported Council compromise texts, indicate institutional resistance where proposals touch the scope of rights or the architecture of accountability.
In our previous article, we outlined the Top 10 proposed reforms and their operational impact if adopted. This piece takes into account the progress to date and analyses the content of the EU Digital Omnibus to assess what is politically stable, what is contested, and what businesses should realistically prepare for.
The key question now is not what was proposed, but what is likely to survive, and how to plan for the changes that will eventually take place.
GDPR
1. Definition of personal data
Why it matters
The definition of personal data determines whether the GDPR applies at all.
What was proposed
The Commission proposed codifying a contextual identifiability test: information would not qualify as personal data for a specific controller if identification is not reasonably likely, taking into account legal access, technical capability, cost and incentives. The Commission would also gain power to clarify when pseudonymised data falls outside scope.
Legislative outlook: uncertain
This is one of the most politically sensitive proposals as it effectively rewrites the conditions on GDPR applicability. The EDPB and EDPS have both criticised the drafting, and the Council compromise texts suggest the proposed amendment may be removed.
What businesses should do
- Do not reclassify datasets or weaken safeguards, since the Omnibus is only a proposal.
- Continue treating pseudonymised data as in-scope for GDPR purposes.
- Document defensible contextual identifiability assessments to support the legal justification that certain datasets may fall outside the scope of personal data if the amendment is adopted.
2. AI development: legitimate interests legal basis and processing of sensitive data for bias prevention
Why it matters
The use of personal data for AI training requires a robust and flexible legal basis. The legitimate interests legal basis is the only realistic choice, but its limitations and strict conditions create potential legal uncertainty. Explicit confirmation of its applicability by the law itself would therefore be extremely valuable.
What was proposed
- Explicit confirmation that Article 6(1)(f) can support AI development and operation, subject to safeguards.
- Possible expansion under Article 9 for limited processing of special category data for bias detection.
Legislative outlook: relatively stable
- Article 6 clarification: limited controversy, but regulators question whether a specific AI provision is necessary and stress that existing legitimate interest safeguards (including the three-step applicability test) must remain unchanged.
- Article 9 expansion: more contested. The EDPB and EDPS accept that bias detection may require sensitive data but call for strict necessity and narrowly circumscribed use cases.
Sensitive data remains politically delicate, and any flexibility will likely be narrow and conditioned.
What businesses should do
- Prioritise this aspect in the relevant DPIA and draw up a robust Legitimate Interests Assessment.
- For AI developers, document the specific application of the legitimate interests legal basis and, where relevant, justify the necessity of using sensitive data for bias detection. Even if the amendment does not pass, this documentation will support the legal justification of the processing.
3. Refusal of abusive DSARs
Why it matters
DSARs increasingly create litigation exposure and operational strain, particularly in the context of employment and other disputes. In practice, employees and litigants increasingly use access requests to obtain information in parallel with, or ahead of, court proceedings, bypassing the procedural rules that normally govern evidence disclosure.
What was proposed
Extension of the ability to refuse requests that are manifestly unfounded or excessive, particularly where the right of access is exercised for purposes unrelated to privacy protection.
Legislative outlook: cautious
Regulators have emphasised that data subject rights remain foundational. The EDPB and EDPS caution against any reform that would weaken data subject rights but acknowledge that clarification of when requests may be considered “manifestly unfounded or excessive” could improve legal certainty. Early Council discussions point in a similar direction, suggesting that any reform will likely be narrow and clarify existing safeguards.
What businesses should do
- Continue applying the current “manifestly unfounded or excessive” test.
- Strengthen internal procedures for DSAR processing.
- Provide access to the personal data itself rather than to the full content of documents containing it, and prepare documented legal reasoning distinguishing personal data that must be disclosed under DSARs from documents that may only be disclosed through court evidence procedures, particularly in litigation contexts.
- Invest in workflow automation and data mapping.
4. Reduced privacy notice requirements
Why it matters
Repeated notices in low-risk contexts can create friction without materially improving transparency. In practice, organisations often provide multiple overlapping notices for the same processing activities, which can dilute rather than enhance understanding.
What was proposed
Clarification that notice obligations may not apply where the context of the processing is not data-intensive and there are reasonable grounds to assume that the individual already has the relevant information.
Legislative outlook: cautious
Transparency is a core GDPR principle. The EDPB and EDPS stress that any simplification must not weaken the obligation to provide meaningful information to individuals. Early Council discussions suggest openness to reducing redundant notices, but not to altering the core transparency framework.
What businesses should do
- Maintain Articles 13 and 14 GDPR compliance as the baseline.
- Reduce duplication by structuring notices more clearly (layered formats and just-in-time prompts).
- Ensure internal records align with published notices.
ePrivacy
5. Cookie consent regime and exemptions
Why it matters
Cookie consent compliance continues to create conversion loss, consent fatigue, regulatory fragmentation and uncertainty about what qualifies as “strictly necessary”, particularly for analytics and security tools.
Furthermore, the rules governing cookies currently sit in the ePrivacy Directive, separate from the GDPR, creating a fragmented framework where consent requirements for terminal equipment are interpreted and enforced differently across Member States.
What was proposed
Audience measurement and security-related cookies should fall within existing consent exemptions.
In addition, the Omnibus proposal would bring the rules on cookies into the GDPR framework, including its one-stop-shop mechanism (with consent remaining the default legal basis).
Legislative outlook: uncertain
Tracking remains politically sensitive. Regulators continue to emphasise the central role of consent for tracking, particularly for advertising purposes. While a limited extension of the current exemption to first-party analytics and security uses may well be adopted, broader exemptions are unlikely.
Regulators appear particularly cautious about the proposal to move cookie rules into the GDPR. The EDPB, EDPS and early Council discussions suggest reluctance to integrate the cookie regime into the GDPR framework.
What businesses should do
- Maintain current consent frameworks.
- Document security and analytics justification.
- Avoid premature redesign of cookie banners.
Cybersecurity
6. Incident reporting alignment
Why it matters
Organisations subject to GDPR, NIS2, DORA and other regimes face multiple definitions, deadlines and reporting channels for security incidents.
What was proposed
- A single-entry notification point.
- Extension of the GDPR breach notification deadline from 72 to 96 hours.
Legislative outlook: relatively stable
Unlike reforms affecting substantive rights, alignment of incident reporting obligations has attracted limited institutional resistance. The objective is to reduce administrative fragmentation across EU cybersecurity and data protection frameworks.
What businesses should do
- Continue applying the GDPR 72-hour rule until amended.
- Work on internal harmonisation of privacy and cybersecurity incident responses, including building unified incident documentation capable of populating multiple regulator reports.
- Conduct tabletop exercises.
Data Act
7. Trade secrets and refusal of data access
Why it matters
Data holders under the Data Act are subject to extensive disclosure obligations with only limited exemptions.
What was proposed
Clearer grounds to refuse disclosure where there is trade secret exposure or the risk of unlawful acquisition is high, together with clearer safeguards for protecting confidential business information.
Legislative outlook: stable
Unlike the GDPR and ePrivacy proposals, this issue has attracted limited institutional controversy. The EDPB and EDPS have not taken a specific position, as the reform concerns commercial confidentiality rather than data subject rights. Early Council discussions similarly show no structural opposition.
What businesses should do
- Maintain current Data Act compliance posture.
- Document trade secret and confidential information classification.
8. Cloud switching exemptions
Why it matters
Cloud service providers are subject to extensive cloud switching obligations.
What was proposed
Exemptions from cloud switching requirements for contracts concluded before 12 September 2025 involving data processing services that are custom-made or provided by SMEs and small mid-caps.
Legislative outlook: stable
No institutional opposition to this proposed change has emerged, particularly given its relatively narrow scope.
What businesses should do
- Maintain current Data Act compliance posture unless cloud services contracts fall within the limited circumstances of the proposed exemptions.
AI Act
9. High-risk AI systems stop-the-clock
Why it matters
Under the current regulation, obligations for providers and deployers of high-risk AI systems will apply from 2 August 2026.
What was proposed
The AI Omnibus proposal introduces a conditional grace period for high-risk AI obligations. Instead of applying automatically from 2 August 2026, the rules would apply after the Commission confirms that key implementation measures (such as standards and guidance) are available. In any event, the proposal sets latest application dates of 2 December 2027 for some high-risk systems and 2 August 2028 for others, depending on their classification.
Legislative outlook: relatively stable
The debate around this proposal is largely about timing rather than substance. Policymakers are discussing whether high-risk obligations should apply only once key standards and guidance are available, and whether businesses need additional time to prepare. However, Council discussions have raised concerns that conditional timelines could create legal uncertainty. Overall, the negotiations are likely to focus on adjusting deadlines rather than revisiting the core obligations of the AI Act.
What businesses should do
- Do not pause AI Act compliance programmes, as the additional implementation time will not be so extensive as to justify their suspension.
- Complete a thorough high-risk AI systems mapping.
- Assess the obligations applicable to their specific circumstances and role as provider or deployer.
10. AI governance and oversight through central supervision
Why it matters
As the AI Act moves toward implementation, enforcement architecture and organisational governance will determine how compliance is assessed in practice. The central role of the Commission's AI Office will soon sit alongside national regulators newly designated by Member States, and clarifying their coordination will determine the practical efficiency of AI Act implementation.
What was proposed
The Omnibus proposal strengthens the role of the Commission’s AI Office as a central supervisory authority. The AI Office would be responsible for overseeing general-purpose AI models and certain AI systems deployed by very large online platforms and search engines, with powers to request documentation, supervise conformity assessments and impose penalties.
The proposal also reinforces organisational expectations around AI literacy and internal governance.
Legislative outlook: relatively stable
These changes focus on enforcement structure and coordination rather than altering substantive obligations. They are therefore less politically controversial, although Member States will be wary of losing too much competence.
What businesses should do
- Implement role-based training for developers, product owners, procurement, compliance and senior management.
- Document training attendance and updates.
- Embed AI literacy into governance frameworks.
- Prepare for coordinated supervisory scrutiny.