With AI products and integrations showing no sign of letting up, it is easy to see why product teams might want to jump on the bandwagon.
Imagine for a second a team wanting to make their product onboarding smoother. They add a simple AI chatbot – something powered by GPT that asks new users a few questions and gives them helpful tips. It might even set a few things up for them based on their answers. It works well and saves people time.
But even something this small could fall under the EU’s new AI Act (the general-purpose AI obligations kick in from August 2025!), and most product teams have no idea they need to do anything about it.
The European Union is pressing ahead with its Artificial Intelligence Act (AI Act)—the world’s first comprehensive legal framework for AI. Despite coordinated pressure from over 100 tech firms, the Commission remains unmoved. The Act is being rolled out in phases, with the first enforcement wave already under way.
What Is the AI Act?
The Act categorises AI into four risk tiers: unacceptable, high, limited, and minimal, with rules scaling accordingly, plus separate obligations for general-purpose AI models:
- Unacceptable-risk systems (e.g. social‑scoring, subliminal manipulation, real‑time biometric ID) are banned from 2 February 2025.
- High-risk systems (e.g. HR, healthcare, finance, law enforcement) must meet stringent standards for safety, accuracy and human oversight, and undergo a Fundamental Rights Impact Assessment.
- Limited-risk systems (e.g. chatbots, deepfakes) require a clear disclosure: “You’re interacting with AI.”
- Minimal-risk systems (e.g. spam filters, recommendation engines) face minimal or no regulation, though a voluntary code is encouraged.
- General‑purpose AI models (foundation models, GPT‑style systems) are subject to transparency and copyright obligations from 2 August 2025, with stricter oversight for those posing systemic risk.
These staged deadlines mean:
- 2 February 2025 – bans on unacceptable practices and mandatory AI literacy training for staff. (And product teams??)
- 2 August 2025 – obligations for general‑purpose AI hit, including documentation, bias testing, copyright compliance, and energy reporting.
- 2 August 2026 – high‑risk systems enforcement across critical sectors.
Why Did Tech Companies Push Back?
Over 100 firms (including OpenAI, Google, Meta, Airbus, and ASML) called for a two‑year “stop‑the‑clock” ahead of the August 2025 roll‑out for general‑purpose AI. Their ask: to avoid a repeat of the GDPR scramble.
GDPR taught product teams a painful lesson: compliance often arrives after deployment, triggering expensive retrofits, shifting timelines, and governance overhauls. AI firms fear a similar fate, only on steroids, given AI’s technical complexity and the centrality of foundation models. In essence: “Here comes GDPR‑2.0.”
That being said, I cannot see preparation being perfect in every scenario – so hold your horses, we’re in for another joyous round of ongoing compliance checking and QA!
What This Means for Product Companies
- Compliance is a product requirement: Plan for documentation, audits and testing, and register AI systems with the relevant authorities.
- Explainability by default: Especially for chatbots and recommendation engines—design with transparency front and centre.
- AI literacy is non-negotiable: Every staff member interacting with AI must undergo training, mandatory from February 2025. Also bear in mind that ‘staff’ includes not just engineers or compliance officers but also PMs, UX designers, devs, data scientists and so on.
- Foundation model downstream obligations: Even if OpenAI or Anthropic handles model-level compliance, users and integrators still bear accountability for deployment and disclosures.
- Resource and timeline pressures: Meeting the staggered deadlines may stretch teams; planning ahead avoids knock-on delays for product launches.
- Governance systems must mature: You’ll need architecture to manage risk tiers, auditing, human-in-the-loop oversight, incident logging, and compliance traceability.
What CPOs and Product Leaders Should Do Now
- Map your AI footprint – classify every use case by tier (unacceptable, high, limited, minimal, general-purpose).
- Start AI literacy programmes – embed training now, ahead of the February 2025 deadline.
- Build transparency processes – documentation, tech specs, training data summaries, energy usage.
- Engage with the Code of Practice – voluntary signing can reduce administrative burden and bring legal certainty.
- Formalise governance – bring in legal, compliance, data science, security and designate risk ownership.
- Track EU guidance – the AI Code of Practice lands in July 2025 and will inform compliance benchmarks.
- Test and simulate compliance – build in audits, adversarial testing, and incident response.
✅ AI Act Readiness Checklist for Product Leaders
1. Map Your AI Exposure
- Catalogue all products, features, and internal tools using AI.
- Classify each use case under the AI Act’s risk tiers: unacceptable, high-risk, limited-risk, minimal-risk, or general-purpose AI.
2. Assign Ownership
- Appoint a cross-functional AI compliance lead (Product, Legal, Data Science).
- Define accountability for documentation, risk assessment, and audit readiness.
3. Start AI Literacy Training
- Enrol all relevant teams in mandatory AI training ahead of the Feb 2025 deadline.
- Ensure teams understand key regulatory distinctions (e.g. transparency, bias, oversight).
4. Prepare Documentation Now
- Build an internal dossier for each AI model (a minimal sketch follows this list):
- Purpose
- Training data sources
- Evaluation metrics
- Human oversight mechanisms
- Third-party model dependencies
- Energy consumption (if applicable)
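One lightweight way to keep these dossiers consistent is a simple structured record per model or feature. The sketch below is illustrative only; the field names and example values are assumptions, not a format prescribed by the Act.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelDossier:
    """Illustrative internal record for one AI model or feature."""
    model_name: str                       # e.g. "onboarding-assistant"
    purpose: str                          # what the model is used for
    training_data_sources: list[str]      # or a pointer to the vendor's data summary
    evaluation_metrics: dict[str, float]  # accuracy, bias measures, etc.
    human_oversight: str                  # how a human can review or override outputs
    third_party_dependencies: list[str]   # e.g. ["GPT-4 via the OpenAI API"]
    energy_consumption_kwh: Optional[float] = None  # if known / applicable

# Example entry for the onboarding chatbot discussed later in this piece
onboarding_bot = ModelDossier(
    model_name="onboarding-assistant",
    purpose="Guide new users through product setup",
    training_data_sources=["Vendor-trained model; no in-house fine-tuning"],
    evaluation_metrics={"helpfulness_rating": 0.87},
    human_oversight="Users can escalate to support; transcripts reviewed monthly",
    third_party_dependencies=["GPT-4 via the OpenAI API"],
)
```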
5. Establish Governance Structures
- Implement processes for:
- Model registration
- Risk classification
- Bias testing and mitigation
- Human-in-the-loop controls (where required)
- Incident logging and audit trails
6. Review All GPT/API Dependencies
- Identify where you’re using general-purpose models (e.g. GPT-4, Claude, Gemini).
- Document what you’re using them for and whether outputs may fall under high-risk definitions.
7. Build In Explainability
- Ensure user-facing AI features include clear disclosures.
- For decision-support systems (e.g. credit scoring, hiring, diagnosis), integrate model explanations where possible.
8. Engage with the EU Code of Practice
- Track the release of the voluntary Code of Practice (July 2025).
- Consider early adoption to reduce future enforcement burden.
9. Run a Pre-Mortem
- Simulate a compliance audit now.
- Identify weakest links in traceability, documentation, and oversight before enforcement hits.
10. Future-Proof New Development
- Bake AI compliance into your product development lifecycle.
- Add AI risk classification as a gate in product/feature reviews.
⚙️ Real-World Example: Using an LLM in Onboarding
💡 Scenario:
A SaaS product team integrates a large language model (e.g. OpenAI’s GPT-4 or Anthropic’s Claude) into their product’s user onboarding chatbot.
The bot guides users through setup, answers common questions, and provides tailored recommendations based on user responses.
🔍 What’s happening technically:
- The front-end chat UI captures user input.
- The backend routes questions to an API (e.g. OpenAI or Azure OpenAI) and returns the generated responses (see the sketch below).
- Optionally, the system uses prior inputs to refine or personalise guidance.
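For context, the backend step could look something like the minimal sketch below, assuming the official OpenAI Python SDK. The model name, prompt wording and function name are illustrative; the same shape applies to Azure OpenAI or another vendor’s API.

```python
# Minimal sketch of the backend routing step (OpenAI Python SDK assumed).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_onboarding_question(user_message: str) -> str:
    """Send one onboarding question to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whichever model your vendor contract covers
        messages=[
            {"role": "system", "content": "You help new users set up their account. Keep answers short and practical."},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```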
It’s a limited-risk or general-purpose AI use – this may lead you to think it is relatively harmless and requires no further action.
🛑 But under the AI Act…
Even this low-stakes use of AI could fall into scope. Why?
If the chatbot:
- Feeds personalised recommendations back into onboarding flow logic
- Collects sensitive data (e.g. job titles, business goals, sector info)
- Generates responses that shape user decisions or product configuration
…it may require compliance measures.
✅ What would compliance actually look like here?
1. Transparency to Users
You must clearly disclose that:
- They’re interacting with an AI system
- Responses are generated by a third-party model
- Output may be incomplete, inaccurate or biased
💬 Example copy:
“You’re chatting with an AI assistant powered by a language model. Please double-check important setup advice.”
2. Human Oversight Option
Offer an easy way to escalate to a human.
This is especially important if the bot is shaping decisions or configurations that impact account security or data access.
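One lightweight way to provide that escalation path is a simple hand-off check in the chat handler. This is only a sketch: the phrase list and the route_to_support_queue hand-off are hypothetical, not part of any SDK.

```python
# Illustrative escalation check; the phrase list and hand-off are assumptions.
ESCALATION_PHRASES = ("talk to a human", "speak to support", "real person", "agent")

def should_escalate(user_message: str, bot_is_confident: bool) -> bool:
    """Return True when the conversation should be handed to a human."""
    wants_human = any(p in user_message.lower() for p in ESCALATION_PHRASES)
    return wants_human or not bot_is_confident

# In the chat handler (hypothetical hand-off function):
# if should_escalate(user_message, bot_is_confident=True):
#     route_to_support_queue(conversation_id)
```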
3. Logging & Traceability
Record:
- All inputs and outputs
- Model version used
- Time of interaction
This satisfies auditability and traceability expectations under the Act.
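A minimal logging sketch, assuming structured JSON records via Python’s standard logging module; the field names and logger name are illustrative. Mind data protection here too: log only what you need and apply your usual retention policy.

```python
import json
import logging
from datetime import datetime, timezone

audit_logger = logging.getLogger("ai_audit")  # illustrative logger name

def log_interaction(user_input: str, model_output: str, model_version: str) -> None:
    """Append one structured audit record per chatbot exchange."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # e.g. the model string returned by the API
        "user_input": user_input,        # consider redacting personal data before logging
        "model_output": model_output,
    }
    audit_logger.info(json.dumps(record))
```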
4. Risk Classification
Internally classify this use case as limited-risk AI and document the following (one way to record this is sketched after the list):
- Purpose of use
- Potential for harmful outputs
- Why it does not fall under high-risk (e.g. it’s not making hiring, credit, or legal decisions)
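To keep that classification and its rationale reviewable, you could record the decision in code. The sketch below is a deliberately simplified illustration; the domain list is a rough stand-in for the Act’s high-risk categories and is not a legal assessment.

```python
# Simplified screening helper; the domains below are a rough stand-in for the
# Act's high-risk categories and NOT a legal test.
HIGH_RISK_DOMAINS = {"hiring", "credit_scoring", "law_enforcement", "medical_diagnosis"}

def classify_use_case(name: str, domains: set[str], user_facing: bool) -> dict:
    """Return a classification record with the rationale written down."""
    touched = domains & HIGH_RISK_DOMAINS
    if touched:
        tier, rationale = "high-risk", f"Touches high-risk domain(s): {sorted(touched)}"
    elif user_facing:
        tier, rationale = "limited-risk", "User-facing generative output, no high-risk decisions"
    else:
        tier, rationale = "minimal-risk", "Internal tooling with no material impact on users"
    return {"use_case": name, "tier": tier, "rationale": rationale}

print(classify_use_case("onboarding-chatbot", {"onboarding"}, user_facing=True))
```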
5. Vendor Compliance
If using OpenAI or Anthropic:
- Document their compliance status under the Act
- Monitor whether their foundation model has been registered with the EU
- Add relevant compliance obligations into your vendor contract (e.g. incident notification)
Why this matters:
This isn’t about heavy-handed regulation. It’s about proactively reducing risk.
Seemingly “low-risk” AI features can:
- Drift into sensitive domains
- Be misinterpreted by users as authoritative
- Be challenged legally if they cause harm or bias
Bottom Line for Product Leaders:
If you’re using AI — even passively — to guide users, personalise experiences, or suggest actions, you’re already in scope.
The good news: small changes (disclosures, logs, escalation paths) often solve big compliance gaps.
Key takeaway – prepare!