5 August 2025 | 5 min

New EU AI Requirements 2025: What Companies Must Know Now

The European Union has ushered in a new era of regulated artificial intelligence with the AI Act (Regulation (EU) 2024/1689). As of August 2, 2025, binding rules apply to providers of General Purpose AI (GPAI) models – and, through the Act's wider provisions, to organizations that deploy them. Organizations that build or integrate AI systems – particularly in generative AI, machine learning, natural language processing, or automated decision-making – must realign their governance, risk management, and compliance structures.

This does not only concern tech companies – the regulation affects all industries, including financial services, healthcare, manufacturing, retail, and public institutions.

  • Effective Date: August 2, 2025 – new legal obligations for providers of GPAI models such as ChatGPT, Claude, Gemini, Mistral, and LLaMA
  • New Obligations for Providers & Users:
    • Transparency on training data & model architecture
    • Risk analysis and conformity assessments
    • Copyright protection and documentation
  • Governance Requirements:
    • Internal responsibilities, audit trails, and supervisory structures
    • Mandatory employee training (AI Literacy)
  • Penalties for Violations: up to €35 million or 7% of global annual turnover
  • Voluntary “Code of Practice” for GPAI providers – reduces compliance risks

1. What Is the AI Act?

The AI Act is the world’s first comprehensive law regulating artificial intelligence. It classifies AI into four risk levels (prohibited, high-risk, limited-risk, minimal-risk) and assigns different obligations accordingly.
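The four-tier structure can be pictured as a simple lookup from risk level to headline obligations. The sketch below is purely illustrative – the tier names come from the Act, but the obligations are heavily simplified and the data structure is our own, not anything the regulation prescribes:

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers (simplified for illustration)."""
    PROHIBITED = "prohibited"   # banned practices, e.g. social scoring
    HIGH = "high-risk"          # e.g. AI in hiring or credit scoring
    LIMITED = "limited-risk"    # transparency duties, e.g. chatbots
    MINIMAL = "minimal-risk"    # e.g. spam filters; no new obligations

# Illustrative mapping of tiers to headline obligations; the Act
# defines these in far more detail.
OBLIGATIONS = {
    RiskTier.PROHIBITED: ["may not be placed on the EU market"],
    RiskTier.HIGH: ["conformity assessment", "risk management system",
                    "logging and human oversight"],
    RiskTier.LIMITED: ["transparency notices to users"],
    RiskTier.MINIMAL: [],
}

print(OBLIGATIONS[RiskTier.HIGH])
```

The key takeaway: the higher the tier, the longer the obligation list – and GPAI models carry their own dedicated set of duties on top of this classification.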

As of August 2, 2025, mandatory requirements apply for General Purpose AI (GPAI) – large AI models that can be used for a wide variety of applications such as text, image, code, speech, and data analysis.

Examples include:

  • ChatGPT (OpenAI)
  • Claude (Anthropic)
  • Gemini (Google DeepMind)
  • Mistral
  • LLaMA (Meta)

2. Who Is Affected?

The new rules apply to:

  • GPAI model developers and providers placing models on the EU market, regardless of where they are based
  • Distributors and vendors of GPAI-based software
  • Enterprise users of generative AI tools and services
  • Companies that fine-tune or otherwise customize existing base models

Even companies using ChatGPT Enterprise, Microsoft Copilot, Gemini for Workspace, or other generative AI platforms must comply if AI is involved in decisions or data processing.

3. What Will Be Mandatory from August 2, 2025?

A) Transparency & Documentation

  • Publication of a sufficiently detailed summary of the content used for training
  • Explanation of model architecture and intended use
  • Documentation for bias mitigation, safety, and fairness

B) Copyright & IP Protection

  • Providers must put in place a policy to comply with EU copyright law, including honoring text-and-data-mining opt-outs by rights holders

C) Risk Assessment & Governance

  • Formal AI risk assessments and impact analysis
  • Establishment of internal controls and AI governance frameworks
  • Traceability and auditability of model outputs

D) Employee Training (AI Literacy)

  • Employees must understand, evaluate, and monitor AI systems

E) Supervision by the AI Office & Notified Bodies

  • The European Commission's AI Office supervises GPAI providers; Notified Bodies handle conformity assessments for high-risk systems
  • Authorities may require access to models, documentation, and audits

4. What Are the Penalties?

Non-compliance may result in administrative fines of up to €35 million or 7% of annual global turnover, whichever is higher, depending on the type and severity of the violation.
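Because the cap is "whichever is higher," the effective exposure scales with company size. A quick worked example (the function is illustrative; the €35 million / 7% figures are the Act's top fine tier):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the AI Act's top fine tier: EUR 35 million or
    7% of global annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A company with EUR 1 billion turnover: 7% = EUR 70 million,
# which exceeds the EUR 35 million floor.
print(f"{max_fine_eur(1_000_000_000):,.0f}")  # 70,000,000
```

For any company with more than €500 million in global turnover, the percentage-based cap is the binding one.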

5. The GPAI Code of Practice: Voluntary but Strategic

The European Commission has introduced a non-binding Code of Practice for GPAI developers. Companies that implement this code benefit from:

  • Regulatory relief and legal certainty
  • Reduced audit burdens
  • Stronger trust positioning in the EU market

6. Integrating AI into Risk Management & Compliance (GRC)

To meet the AI Act’s demands, organizations must:

  • Embed AI into their enterprise risk management (ERM) frameworks
  • Extend internal control systems (ICS) and ISMS policies to cover AI
  • Maintain governance documentation for AI roles, models, and tools
  • Track compliance obligations across jurisdictions
  • Enable cross-functional collaboration between legal, data science, and IT security teams

7. How This Relates to GRC

These AI obligations are not just regulatory formalities – they directly connect to Governance, Risk, and Compliance (GRC) principles. Here’s how:

  • Governance assigns responsibility for AI oversight and decision-making
  • Risk Management ensures companies identify, assess, and monitor AI risks (e.g. bias, model drift, data leakage)
  • Compliance ensures all legal and regulatory AI requirements are met and documented

A well-structured GRC platform enables companies to manage AI-related risks and controls alongside traditional areas such as ISO 27001, GDPR, and ESG. This leads to:

  • Centralized audit readiness
  • Consistent enterprise-wide documentation
  • Greater visibility into emerging risks
  • Stronger stakeholder trust

Conclusion

August 2, 2025, is not just another deadline – it marks the beginning of a new compliance era for artificial intelligence in Europe.

Whether you are building AI or simply integrating it into your workflows, the AI Act requires companies to demonstrate transparency, accountability, and responsible usage.

Those who act early, document thoroughly, and align with GRC frameworks will be better positioned to innovate with confidence, reduce legal exposure, and gain a long-term competitive edge.

FAQ – New EU AI Rules from August 2, 2025

What is General Purpose AI (GPAI)?
AI systems designed for broad, cross-domain use cases such as generating text, code, images, or speech. These include large foundation models like ChatGPT or Gemini.

Do the rules only affect tech companies?
No – all companies using AI tools operationally are impacted, particularly when AI influences decisions, data handling, or compliance-sensitive processes.

Is the Code of Practice mandatory?
No – it’s voluntary. But those who adopt it benefit from lower risk of sanctions and simplified compliance checks.

What are the financial penalties?
Up to €35 million or 7% of global annual revenue, whichever is higher, depending on the type and severity of the violation.

How should companies prepare?

  1. Inventory and classify all AI systems
  2. Map model risks and use cases
  3. Integrate AI oversight into GRC programs
  4. Assign responsible officers for AI governance
  5. Provide AI literacy training across the company
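Steps 1 through 4 above can be captured in a structured inventory record. The sketch below is a hypothetical starting point – the field names, control IDs, and example entry are ours, not anything the AI Act prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Hypothetical inventory record for one AI system in use."""
    name: str                       # step 1: inventory the system
    vendor: str
    use_case: str                   # step 2: map the use case
    risk_tier: str                  # step 2: classify the risk
    grc_control_ids: list[str] = field(default_factory=list)  # step 3
    responsible_officer: str = ""   # step 4: governance owner

inventory = [
    AISystemRecord("Copilot", "Microsoft", "code assistance",
                   "limited-risk", ["AI-TRANS-01"], "jane.doe"),
]

# Classification makes it trivial to pull out the systems that
# carry the heaviest obligations.
high_risk = [r for r in inventory if r.risk_tier == "high-risk"]
print(len(high_risk))  # 0
```

Even a spreadsheet with these columns is a defensible first step; the point is that every AI system has a documented use case, risk tier, and accountable owner before August 2025 obligations bite.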
