EU AI Act Enters Into Force
Regulatory Requirements Will Become Enforceable in Phases

The world's first-ever binding regulation on artificial intelligence came into force on Thursday.
Lawmakers from the European Parliament in March approved the regulation with 523 votes in favor. The EU AI Act is the first-ever law to ban artificial intelligence applications deemed to pose unacceptable risk, such as AI-driven emotion recognition in the workplace and schools (see: EU Parliament Approves the Artificial Intelligence Act).
It also bans social scoring and the scraping of CCTV footage to create facial recognition databases. Violations could cost companies up to 35 million euros or 7% of their global annual turnover, whichever is higher.
The law's requirements will become enforceable in phases. The ban on prohibited AI practices takes effect first, six months from Thursday.
"With the entry into force of the AI Act, European democracy has delivered an effective, proportionate and world-first framework for AI, tackling risks and serving as a launchpad for European AI startups," said EU Commissioner for Internal Market Thierry Breton.
ChatGPT maker OpenAI said it is "committed to complying with the Act, not only because this is a legal obligation, but also because the goal of the law aligns with our mission to develop and deploy safe AI." CEO Sam Altman last year said the company would withdraw from the European market rather than comply with the act (see: OpenAI CEO Altman 'Blackmails' EU Over AI Regulation).
Within the next nine months, the newly constituted AI Office and the AI Board will release a code of practice for AI practitioners that sets out risk identification and mitigation measures.
Rules affecting general-purpose AI will take effect within 12 months, and limitations and additional requirements imposed on high-risk AI systems in critical infrastructure will come into force two years after the law's entry into force.
In addition to banning and limiting certain uses of AI, the regulation strives to promote transparency. Artificial intelligence companies must disclose details of copyrighted content used to train their models and ensure privacy compliance under the General Data Protection Regulation.
The cybersecurity industry relies on AI for automation, threat detection and pattern recognition, but security firms should for now avoid a complete overhaul of their operations, said Julian Mulhare, EMEA managing director at Searce, an AI cloud analytics firm.
Pieter Arntz, senior threat researcher at security firm Malwarebytes, said a binding regulation on a fast-evolving technology such as AI can be "hard to interpret."
"Many of the guidelines are based on old-fashioned product safety regulations which are hard to translate into regulations," Arntz said. "More general-purpose large language models are a lot harder to classify, let alone the open-source models that can be adapted by users to fit their own purposes."
Privacy rights activists criticized amendments to the law that permit AI companies to self-assess risk, arguing that companies could deliberately categorize their systems as low-risk to bypass compliance measures.
Voluntary assessment requirements such as those in the EU AI Act are inadequate for determining the risk of AI systems, researchers at the Ada Lovelace Institute said, adding that such assessments should be combined with incident reporting and postmarket monitoring to effectively track AI risks.