AI and Privacy in Europe and Switzerland: The Impact of the EU AI Act and the Rise of Accountability
As artificial intelligence becomes deeply woven into modern life, regulators are racing to ensure its development respects fundamental rights — particularly the right to privacy. In 2024, the European Union took a significant step with the formal adoption of the AI Act, the world’s first comprehensive legal framework specifically governing artificial intelligence. Though Switzerland is not an EU member, the ripple effects of this legislation are already reaching across borders, raising crucial questions for Swiss lawmakers, companies, and data protection authorities.
At the heart of this legal evolution lies a renewed focus on accountability — the idea that organizations deploying AI systems must take proactive responsibility for their ethical and legal compliance. This principle, long embedded in the GDPR and echoed in Switzerland’s revamped Federal Act on Data Protection (FADP), is now gaining new dimensions in the context of AI.
The EU AI Act introduces a tiered, risk-based approach, classifying AI systems as unacceptable-risk, high-risk, limited-risk, or minimal-risk, with legal obligations scaled to each tier; practices deemed to pose unacceptable risk are prohibited outright. For high-risk systems in particular, which often process large amounts of personal data, the law mandates risk assessments, data governance frameworks, and human oversight, all of which are directly tied to privacy protection and individual rights.
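To make the tiering concrete, here is a minimal, purely illustrative sketch of how a compliance team might model the four tiers and the duties attached to them. The tier names track the Act, but the obligation lists are a rough paraphrase of its requirements, not legal text, and the mapping itself is a hypothetical internal tool.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified illustration of the AI Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices (e.g. social scoring)
    HIGH = "high"                   # e.g. AI used in hiring or credit decisions
    LIMITED = "limited"             # transparency duties (e.g. chatbots)
    MINIMAL = "minimal"             # largely unregulated (e.g. spam filters)

# Hypothetical mapping of tiers to duties. The lists below are a rough
# paraphrase of the Act's requirements, not the statute itself.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: [
        "risk assessment",
        "data governance framework",
        "technical documentation",
        "human oversight",
    ],
    RiskTier.LIMITED: ["disclose AI interaction to users"],
    RiskTier.MINIMAL: [],
}

def duties_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation list for a given tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for duty in duties_for(RiskTier.HIGH):
        print(f"- {duty}")
```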
Switzerland’s updated FADP, in force since September 2023, similarly emphasizes transparency, privacy by design and by default, and the need to assess risks when processing sensitive personal data. While the Swiss law doesn’t yet include AI-specific provisions, the underlying data protection principles align closely with those in the EU. In practice, Swiss organizations developing or deploying AI — especially those operating in EU markets — will need to adapt to the stricter EU standards, particularly around documentation, explainability, and human intervention in automated decision-making.
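As a thumbnail of what human intervention in automated decision-making can look like in practice, the following sketch escalates adverse or low-confidence automated outcomes to a human reviewer before they take effect. The threshold, queue, and field names are invented for illustration; neither the AI Act nor the FADP prescribes a specific mechanism.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    subject_id: str
    outcome: str          # e.g. "approve" / "reject"
    confidence: float     # model confidence in [0, 1]

# Hypothetical escalation rule: the threshold and review queue are
# invented values, not anything mandated by law.
REVIEW_THRESHOLD = 0.9
human_review_queue: list[Decision] = []

def apply_decision(decision: Decision, notify: Callable[[str], None]) -> None:
    """Apply an automated decision only when it is favourable and
    high-confidence; otherwise route it to a human reviewer."""
    if decision.outcome == "reject" or decision.confidence < REVIEW_THRESHOLD:
        human_review_queue.append(decision)
        notify(f"Decision for {decision.subject_id} escalated to human review.")
    else:
        notify(f"Automated outcome applied for {decision.subject_id}.")

apply_decision(Decision("applicant-42", "reject", 0.97), print)
```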
One area of convergence is the use of Data Protection Impact Assessments (DPIAs). Already required under both the GDPR and the FADP for high-risk data processing, DPIAs will now serve a dual purpose in the AI context: as tools for compliance and as evidence of responsible AI governance. This shift reflects a broader transformation in the regulatory landscape, where compliance is no longer a reactive box-ticking exercise but a proactive, documented, and auditable process: the essence of modern accountability.
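One way to move a DPIA from a one-off exercise toward auditable evidence is to capture it as a structured record. The sketch below is a hypothetical data structure whose fields loosely mirror what a DPIA covers under the GDPR and the FADP (purpose, data categories, risks, mitigations); it is not an official template, and every field name is an assumption made for this example.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DPIARecord:
    """Hypothetical structure for an auditable DPIA entry.

    Field names are illustrative; they loosely mirror what a DPIA
    covers under the GDPR and the FADP, not an official template.
    """
    system_name: str
    processing_purpose: str
    data_categories: list[str]          # e.g. ["health data", "location"]
    identified_risks: list[str]
    mitigations: list[str]
    reviewer: str
    review_date: date
    residual_risk_accepted: bool = False
    notes: str = ""

# Example: documenting an assessment so it can later serve as evidence
# of responsible AI governance, not just a compliance formality.
record = DPIARecord(
    system_name="cv-screening-model",
    processing_purpose="rank job applications",
    data_categories=["employment history", "education"],
    identified_risks=["indirect discrimination", "excessive data retention"],
    mitigations=["bias testing before release", "90-day retention limit"],
    reviewer="dpo@example.com",
    review_date=date(2025, 1, 15),
)
print(record.system_name, record.review_date.isoformat())
```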
Another shared challenge is ensuring transparency in automated decision-making. Both European and Swiss laws grant individuals the right to understand when and how automated systems impact them. Under the AI Act, transparency obligations extend further, requiring clear disclosure when individuals interact with AI, as well as technical documentation of system design, data sources, and limitations.
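By way of illustration, such documentation could be published as a machine-readable transparency record alongside the system. The schema below is invented for this example; the AI Act requires disclosure and documentation of system design, data sources, and limitations, but it does not prescribe any particular format.

```python
import json

# Hypothetical "transparency record" a provider might publish alongside
# an AI system. The schema and field names are invented for illustration.
transparency_record = {
    "system": "loan-eligibility-assistant",
    "user_disclosure": "You are interacting with an automated system.",
    "intended_purpose": "pre-screen consumer loan applications",
    "data_sources": ["applicant-submitted forms", "credit bureau data"],
    "known_limitations": [
        "lower accuracy for thin credit files",
        "not validated for business loans",
    ],
    "human_oversight": "adverse decisions reviewed by a credit officer",
}

# Serialising the record makes the disclosure auditable and easy to
# surface to both regulators and end users.
print(json.dumps(transparency_record, indent=2))
```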
For Switzerland, the AI Act is not just a foreign law to observe — it is a benchmark likely to influence future domestic regulation. In fact, discussions are already underway in Swiss legal and political circles about how to approach AI-specific legislation, and whether to align more closely with the EU model to maintain cross-border data trust and regulatory equivalence.
In this evolving landscape, one thing is clear: the era of informal, opaque AI development is over. Whether in Brussels or Bern, organizations will be expected to design AI systems with privacy and accountability built in — not just for compliance, but as a foundation of ethical digital innovation.