EU to Soften Landmark AI Act Following Pressure from Big Tech

The European Union's Artificial Intelligence Act, widely regarded as the world's first comprehensive AI law, is set to undergo significant changes as the EU moves to water down key provisions under pressure from Big Tech companies and the US government. The AI Act, which regulates high-risk AI systems and prohibits those posing unacceptable risks, aims to establish a robust code of practice for AI development. This code is intended to ensure transparency, accountability, and fairness in AI systems, balancing innovation with safety and the protection of fundamental rights.

The European Commission plays a crucial role in AI governance, overseeing the implementation of the AI Act and providing guidance to AI providers and companies across Europe. The regulation has far-reaching implications for tech companies, AI providers, and medium-sized enterprises, shaping the future of artificial intelligence in the EU market.


Key Takeaways

  1. The EU is set to water down landmark AI Act provisions after Big Tech pressure, reflecting a delicate balance between innovation and regulation.

  2. The AI Act introduces a general-purpose AI code of practice and transparency requirements, aligning with the EU's approach to managing systemic risks and protecting fundamental rights.

  3. The phased implementation timeline, including a nine-month grace period for certain provisions, aims to support companies and ensure smooth enforcement of the final text.


Background and Development of the EU AI Act

First proposed by the European Commission in April 2021, the Artificial Intelligence Act was designed to create a uniform regulatory framework for AI systems across the EU. The drafting process drew on input from independent experts, member states, and a scientific panel, ensuring a comprehensive and balanced approach to AI regulation.

The Act classifies AI applications into risk categories: unacceptable risk, high risk, limited risk, and minimal risk. High-risk AI systems, such as those used in critical infrastructure or healthcare, are subject to stricter rules and obligations, including transparency requirements and human oversight. The final version of the AI Act was adopted in June 2024, with a phased implementation timeline to ensure a smooth transition for companies: some provisions came into force in August 2024, with others scheduled for later.


Role of the European Parliament in AI Law

The European Parliament has been instrumental in shaping the EU AI Act, emphasizing the protection of fundamental rights and addressing biases, such as those based on socio-economic status, that AI systems may perpetuate. The Parliament has pushed for transparency and accountability in AI systems, especially for high-risk applications, and has supported the introduction of regulatory sandboxes. These sandboxes enable start-ups and companies to test general-purpose AI models in controlled environments, fostering innovation while ensuring compliance.

The European AI Office, established to support the enforcement and oversight of the AI Act, works closely with national authorities to ensure the regulation's effective application across member states.


AI Regulation and Risk Management

The EU AI Act adopts a risk-based approach to AI governance, categorizing AI systems by their potential to cause harm. High-risk systems face stringent compliance requirements, including risk management frameworks and transparency obligations. The European Commission provides guidance on these risk assessments, while national authorities enforce compliance and monitor AI providers.

Notably, the Act introduces transparency requirements for general-purpose AI systems, including generative AI models, aligning with EU copyright law and ensuring users are informed when content is AI-generated. This is particularly relevant as concerns grow about the systemic risks posed by advanced AI models and their impact on democratic processes and public spaces.


Code of Practice for AI Development

A central pillar of the AI Act is the promotion of a code of practice for AI development. This code emphasizes transparency, accountability, and fairness, guiding the development and deployment of AI systems that are safe, reliable, and respectful of human rights. The code will be regularly reviewed and updated to keep pace with the rapidly evolving AI landscape.

The European AI Office will oversee the promotion and support of this code, collaborating with stakeholders including tech companies, AI providers, and civil society organizations.


Impact of Big Tech on the AI Act

Big Tech companies have exerted considerable influence on the AI Act, with some arguing that the regulation is overly restrictive and could hamper innovation and market access. The Trump administration has also expressed concerns about the Act's potential impact on US tech companies, prompting diplomatic engagement.

In response, the European Commission, through spokesperson Thomas Regnier, has emphasized the EU's commitment to a balanced regulatory framework that protects fundamental rights while encouraging innovation and competition. The AI Act aims to provide legal certainty and foster a competitive AI ecosystem in Europe, supporting start-ups and medium-sized enterprises through measures such as regulatory sandboxes and phased compliance timelines.

The ongoing review and potential watering down of certain provisions reflect efforts to ease the compliance burden on companies and avoid disruption in the AI market ahead of full enforcement. However, the EU remains firm on its objectives, prioritizing safety, transparency, and accountability in AI governance.


Conclusion

This development marks a critical moment in AI regulation, as Europe seeks to set global standards through the Artificial Intelligence Act while navigating geopolitical pressures and the evolving landscape of general-purpose AI systems. The balance the EU strikes will likely influence AI governance worldwide in the years ahead.

