The risks extend far beyond mere output errors
By now, everyone knows how careful you must be with the output of AI systems. Fortunately, the systems readily concede the point when you call out their (sometimes very fundamental) errors. AI presents both significant risks and extraordinary opportunities; the following are just a few areas of concern that deserve attention:
Human rights exposure: risks arising from intrusive data collection, pervasive surveillance capabilities, and algorithmic bias that can undermine fairness, dignity, and equal treatment.
Democratic resilience risks: threats to public trust and institutional legitimacy through large‑scale misinformation, manipulation of public opinion, and vulnerabilities affecting electoral integrity.
Youth development and well‑being: impacts on digital wellness, cognitive development, and critical‑thinking skills, including over‑reliance on automated tools in learning environments.
Labor market and workforce disruption: structural shifts caused by automation, task displacement, and changing skill requirements, with implications for employment stability and social cohesion.
Environmental sustainability risks: the energy intensity and water consumption of data centers and device ecosystems, contributing to environmental strain and complicating sustainability commitments.
Safety and security vulnerabilities: new vectors for cyberattacks, model manipulation, and misuse of AI systems that can compromise organizational resilience and national security.
Market concentration and economic power imbalance: risks stemming from the dominance of a few global AI providers, potentially limiting competition, local innovation, and national digital sovereignty.
AI has become an integral part of our society and will have a major impact on our future. This makes it all the more important to fully acknowledge the associated risks: let’s look them straight in the eye.
Karel Frielink
(attorney / legal scholar)
(12 April 2026)

