[co-authors: Giovanni Chieco, Noemi Canova, Arianna Porretti]
Artificial intelligence
EIOPA publishes opinion on AI governance and risk management
On 6 August 2025, EIOPA published an opinion aimed at clarifying how the interpretation of insurance regulations should be aligned with the provisions of the AI Act. The document doesn't introduce new rules but provides interpretative guidance that situates AI within the existing regulatory framework (Solvency II, IDD, DORA, GDPR).
Its objective is to provide proportional governance and risk management criteria capable of mitigating risks and protecting policyholders.
Scope of the opinion
AI is expected to play an increasingly pivotal role in the digital transformation affecting all economic sectors, including insurance.
In insurance, the use of AI systems spans the entire value chain: from underwriting and premium setting to claims management and fraud detection. The opportunities are clear: faster, automated settlements, more granular and accurate risk assessments, and sophisticated tools to detect fraudulent activities. But these benefits come with risks, mainly due to the opacity of some models, with potential systemic biases and discriminatory effects on policyholders.
With the entry into force of Regulation (EU) 2024/1689, known as the AI Act, the EU introduced a comprehensive and cross-sectoral approach to AI regulation, based on a risk-based logic. The regulation classifies AI systems according to their level of risk and, in the insurance sector, classifies as high-risk those systems used for risk assessment and pricing in life and health insurance. These cases remain outside the scope of EIOPA’s opinion and are subject to the strict governance and risk management obligations already required under the AI Act. Conversely, for systems that don't fall into these categories, the opinion indicates that insurance sector legislation is complemented by transparency obligations, the promotion of internal training, and the adoption of codes of conduct, fostering responsible and coordinated AI management.
It’s in this context that EIOPA’s opinion is framed, aiming to clarify the interpretation of sectoral legislation in the context of AI systems that were non-existent or not widely used when the legislation was adopted. As noted, the document doesn’t create new obligations but defines proportional, risk-based supervisory expectations, proposing a systemic approach adapted to the specificities of each use case. Particular emphasis is placed on ensuring European-level consistency, referencing the definition of “AI system” in the AI Act and in the European Commission’s AI Office guidelines, while leaving room for future interpretative clarifications. Even independently of the formal qualification as an “AI system,” insurance legislation already provides governance and control measures for using machine learning-based models.
AI governance and risk management
The topic of governance is embedded within an already complex regulatory framework (Solvency II, IDD, DORA), which converges on a key principle: governance and risk management systems must be effective, proportionate and commensurate with the complexity of the operations.
According to EIOPA, the first step is a thorough risk assessment of the adopted systems, as their impact is not uniform. Systems processing sensitive data or affecting decisions critical to clients require stronger controls, while others can be managed with simplified procedures.
The assessment should consider:
- the volume and sensitivity of the data processed
- the characteristics of the client base
- the system’s level of autonomy and its application (internal or consumer-facing)
- the effects on fundamental rights (non-discrimination and financial inclusion)
- prudential implications (operational continuity, solvency, reputation)
Based on this analysis, insurance undertakings are expected to adopt proportionate mitigation and management measures, ranging from human oversight to data governance, including tools to address model opacity. EIOPA’s approach is flexible: it doesn’t impose a single model but recommends interventions calibrated to actual risks.
In line with Solvency II, IDD, and DORA requirements, insurance undertakings must ensure responsible AI use by developing risk-based and proportionate governance and risk management systems, considering: fairness and ethics, data governance, documentation and record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity. These principles shouldn’t be addressed in isolation but in an integrated and coherent manner, documented at the organisational level, and applied throughout the system’s lifecycle.
The opinion also emphasises the clear definition of roles and responsibilities. Insurance undertakings remain responsible for the systems used, even if developed by third parties. In such cases, suppliers must provide adequate assurances and, where their ability to do so is limited by intellectual property constraints, complementary safeguards should be implemented, such as contractual solutions, SLAs, audits, or due diligence tests.
EIOPA stresses a client-centric approach: acting honestly, fairly, and professionally involves fostering an ethical corporate culture, promoting staff training, adopting data governance policies to reduce biases and make outcomes understandable, regularly monitoring systems, and providing clear redress mechanisms.
Particular attention is given to data governance: data must be complete, accurate, adequate, and properly documented throughout the system lifecycle, including third-party data. Ultimate responsibility always lies with the insurance undertaking. Documentation should ensure traceability, and results must be explainable both to authorities (in a technical and global language) and to clients (in a clear and comprehensible manner).
Human oversight, accuracy, robustness, and cybersecurity
Human oversight must be ensured throughout the system lifecycle. Roles and responsibilities should be clearly defined, with appropriate escalation procedures, training programs and dedicated staff where needed.
Simultaneously, systems must ensure accuracy, robustness, and cybersecurity in line with Solvency II and DORA principles. They should be monitored via specific metrics, including fairness indicators, tested in interactions via APIs, and made resilient to external threats such as data poisoning or cyberattacks. Insurance undertakings must also maintain up-to-date ICT infrastructure and establish continuity plans to ensure the resilience of the AI ecosystem.
Conclusions
EIOPA’s opinion confirms that AI isn’t an external element to insurance regulation but a factor testing its adaptability. The challenge lies not merely in adding new rules but in reinterpreting existing ones in light of the evolving technological and regulatory context, while preserving client centrality and the prudential soundness of the sector.
Data Protection and Cybersecurity
Italian Data Protection Authority v CamHub: A warning for privacy protection in the era of domestic video surveillance
By Decision No. 573 of 1 October 2025, the Italian Data Protection Authority (Garante per la protezione dei dati personali) issued a formal warning against ICF Technology, Inc., operator of the website CamHub.com, due to serious deficiencies identified in the processing of personal data through the platform. The decision follows a prior investigation revealing that CamHub – a service offering video streaming of sexually explicit content – was involved in the unlawful dissemination of footage extracted from unsecured domestic video surveillance systems (IP cams) located in Italy.
Although the website is temporarily inaccessible, the page containing the terms and conditions of service and the related privacy notice remains available, allowing scrutiny of the nature and modalities of data processing. CamHub describes itself as an interactive service, both free and paid, enabling its content providers (performers) to upload publicly viewable multimedia content – including images, videos, sounds, and texts, including real-time images – that may be sexually explicit and viewed within specific chat rooms.
The investigation highlighted the existence of data processing activities inconsistent with applicable laws, particularly concerning the collection of images and videos from cameras installed in private locations in Italy. Such images, depicting private moments of individuals, qualify as personal data under the EU Regulation 2016/679 (GDPR), often falling within special categories of data (Articles 4(1) and 9 GDPR) that require enhanced protections. Collecting and disseminating these materials without the data subjects’ consent and without any lawful basis (Articles 6 and 9 GDPR) constitute a serious violation of the right to privacy and personal dignity.
The Authority noted the absence of appropriate technical and organisational measures adopted by CamHub to ensure compliance with core data protection principles such as data minimisation and security. As a result of these deficiencies, the platform failed to prevent the collection and distribution of sexually explicit images sourced from unsecured IP cams, freely accessible and installed in private premises in Italy (eg domestic surveillance cameras). CamHub was found in breach of Articles 5, 6, 9, 25, and 32 of the GDPR.
Given the particularly sensitive nature of the images involved – posing significant risks to the fundamental rights and freedoms of the individuals portrayed, especially regarding their privacy and dignity – the Authority exercised its powers under Article 58(2)(a) GDPR to issue a formal warning to ICF Technology, Inc. It emphasized that continuing to collect and disseminate personal data via CamHub, particularly from unsecured IP cams, may constitute a serious and sanctionable violation. The decision was adopted urgently due to the severity and urgency of the situation, with the Authority reserving the right to take further measures as necessary.
This decision underscores a fundamental principle: the mere technical accessibility of data or images doesn’t automatically legitimize their processing. Rigorous oversight and careful governance of personal data processing remain essential, especially regarding sensitive information and images that can profoundly impact individuals’ private lives. The protection of privacy and dignity is reaffirmed as a non-derogable principle, even in the context of emerging business models based on sharing multimedia content online, and applies all the more to categories of data deserving enhanced protection.
Intellectual Property
Council of State rules on the reimbursability of generic drugs
On 19 September the Council of State ruled on a matter of particular importance in the pharmaceutical sector: the reimbursement of generic medicines by the national health system.
The dispute originated from a decision of the Italian Medicines Agency (AIFA, Decision No. 1344/2017 of 19 July 2017), through which the agency had included the generic version of Truvada – an antiretroviral used for the treatment of HIV infection – in reimbursement class H, establishing that the decision would take effect the day after the expiry of the originator’s patent on the active ingredient, as provided by law. Until then, the medicine had been classified in class C, reserved for drugs not yet evaluated for reimbursement purposes.
The holder of the generic medicine had already challenged the decision before the Regional Administrative Tribunal (TAR) of Lazio. It claimed that there was no valid protection on the originator, arguing that the patent had already expired and that the SPC was invalid because it related to active ingredients different from those claimed in the patent.
The Administrative Tribunal confirmed AIFA’s decision, noting that the agency was required to rely on official registers, in which the expiry date of Truvada’s SPC was indicated as 21 February 2020, and could not question the validity of the titles; the agency had therefore correctly deferred reimbursement of the drug to a date after the expiry of the title.
At the same time, the company filed a separate action before the Court of Milan to have the supplementary protection certificate of the originator declared invalid.
By judgment No. 6062/2019 of 21 June 2019, the court upheld the company’s claims and declared the SPC null. This decision was subsequently confirmed by the Milan Court of Appeal with judgment No. 1310/2020 of 29 May 2020.
The pharmaceutical company then brought the matter before the Council of State, claiming economic damages for the 2018-2020 period, during which the generic drug remained classified in class C(nn) and therefore couldn’t be purchased by public healthcare facilities at the expense of the National Health Service. The company argued that it had suffered harm during the period in which the SPC, although null, hadn’t yet been judicially declared invalid.
The second-instance administrative judge, confirming the TAR’s decision, ruled that AIFA had no authority to assess the validity of the patent or the SPC and also rejected the claim for damages, reaffirming that AIFA had acted in full compliance with the law.
Technology, Media and Telecommunications
BEREC publishes input to the European Commission’s consultation on the revision of the recommendation on relevant markets susceptible to ex ante regulation
On 30 September, BEREC published on its website its contribution to the public consultation launched by the European Commission on the revision of the Commission Recommendation on relevant product and service markets within the electronic communications sector susceptible to ex ante regulation (the RRM).
The purpose of the RRM is to identify markets for electronic communications products and services where ex ante regulation is justified, to benefit end users by making those markets effectively competitive. Under Article 64(1) of Directive (EU) 2018/1972 (ie the European Electronic Communications Code), the Commission has to review the RRM periodically.
The last review of the RRM was in December 2020, while the current review process was launched with the consultation initiated by the Commission on 17 June 2025 and concluded on 30 September. According to the timetable communicated by the Commission at the start of the consultation, the analysis of the contributions received should be underway, following which the Commission will proceed with the preparation of the draft of the new RRM.
As provided for in the European Electronic Communications Code, BEREC will issue an opinion which the Commission will take into account in defining the final version of the RRM.
BEREC's contribution consists of five sections: an introduction; a second section on the current situation of ex ante regulation in Europe; a third section on market development; a fourth section on the relevance of the RRM; and a concluding chapter.
BEREC's contribution takes stock of the current state of regulation in Europe and reaffirms the continuing usefulness of the RRM as a tool for harmonisation and flexibility available to national regulatory authorities (NRAs).
With regard to the current state of ex ante regulation in Europe, the second section clarifies that, based on currently available data, the markets already included in the RRM are still regulated, at least in part, in a significant number of member states. According to BEREC, this phenomenon is due to a characteristic common to these markets, namely their dependence on network elements that may be difficult for competitors to replicate. BEREC notes that where economic and technical barriers remain, ex ante regulation is necessary to safeguard effective competition by preventing potential abuses of market power by operators with significant market power (SMP).
In the third section, BEREC presents the main developments in the telecommunications sector with a potential impact on the access markets to be regulated on the basis of SMP assessments, and which should therefore be considered for inclusion in the RRM.
With regard to general market trends, BEREC highlights the progress made in Europe in recent years:
- Fixed VHCN (Very High-Capacity Networks) coverage has grown from 50% to 82.5% in the EU. In terms of fibre coverage, EU member states have a relatively high share of Fibre-to-the-Premises (FTTP) coverage.
- As regards 5G, 94.3% of the EU population was covered by at least one 5G network in 2024.
In the fourth section, BEREC highlights the relevance of the RRM which, with the SMP Guidelines (ie “the Guidelines on market analysis and the assessment of significant market power under the EU regulatory framework for electronic communications networks and services”), ensure that NRAs define markets in line with the principles and practice established by European competition law and contribute to harmonisation in the application of European sectoral legislation.
In light of the above, BEREC emphasises that, after more than 20 years of ex ante regulation, the regime has successfully opened up electronic communications markets to effective competition. But not all markets in all member states have produced the same results, which shows that ex ante regulation is still necessary and will continue to be so in the foreseeable future.
On a similar topic, you may be interested in the article: “Public consultation by BEREC on Very High Capacity Networks.”