The EU AI Act is upon us as of August 2024, and companies are scrambling to come to grips with the various requirements scattered throughout the regulation before they kick in. The first big deadline? 2 February 2025, when the ban on prohibited AI practices takes effect.
The complexity of the Act lies in the fact that it addresses many different layers of compliance requirements: at the AI use case, model, system, project, and enterprise levels. These requirements, in turn, depend on what type of actor you are (a provider, a deployer, an importer, a distributor, etc.) and, importantly, what type of AI you propose to use, which determines the level of risk associated with that AI. The higher the risk, the stricter the rules that apply. Non-compliance with the Act carries heavy fines, similar to those under GDPR but with even higher amounts at stake.
Navigating these rules is not an easy, straightforward task, but the ISACA white paper, Understanding the EU AI Act: Requirements and Next Steps, does an excellent job distilling the main requirements and guiding the audience through the thicket of key obligations in a hands-on and understandable way. It explores the Act's scope and risk categorization method and provides a high-level overview of the requirements laid down in the Act, which are based on harmonized rules for the development, placement on the market, and ethical and trustworthy use of AI systems in the European Union.
Because of these hefty requirements, there are two initial questions companies are well-advised to think about before acting. First, is the AI system or product actually needed? Just because everyone else is “doing AI” and spending enormous amounts on “the AI budget” doesn’t mean you have to. Avoid falling for Fear of Missing Out (FOMO)! Second, companies should determine, at the outset, the value they believe the AI product or service may generate, how important that value is to the business, and then weigh it against the compliance requirements.
Importantly, the EU AI Act has a broader remit than the European Union itself. A company does not need to be established in the EU for the Act to apply: any company that places an AI system on the EU market, or whose AI systems are used within the EU, is now required to comply. That means disclosing what data its AI is being trained on, ensuring that its use is safe and ethical, and going through cumbersome risk assessments and other compliance requirements.
If you’ve decided the Act applies to you, keep in mind that AI is all about data, especially personal data, and you need to make sure you’re allowed to use that data to train your AI models, particularly in a GenAI context. There's no need to reinvent the wheel, either. Start by consulting your privacy officer, who’s likely familiar with many of the Act’s key requirements, like transparency, ethics, and accountability. By bringing them in early, you’ll get a head start on compliance. And since personal data will likely be part of your AI data, GDPR will apply too, so your privacy officer will have to conduct similar, separate risk assessments anyway.
Finally, third parties are always the weakest link, even in AI. That’s why doing your KYS – Know Your Suppliers – is critical! Most companies that want to deploy or use a product or service with an AI component will not have developed the AI tools themselves and will instead depend on a third party (the provider). However, that doesn’t take away your responsibility as the deployer, so checking, with proper due diligence, that the provider meets the requirements under the Act becomes key.
At the end of the white paper, there’s a helpful list of key points for companies starting their AI governance journey. One key takeaway: don’t start from scratch! Leverage what you already have in place by adapting existing policies, beefing up risk assessments, adding questions to vendor assessments, and improving your privacy and security by design frameworks.
The EU AI Act is here, but AI itself is rapidly evolving, and new laws and regulations are constantly in the works. On top of that, discussions about intellectual property, liability, and mass data collection, to mention a few, are ongoing. Developing and using AI in a compliant, ethical, and trustworthy way is an ongoing process; there’s no “one and done.” Staying informed about AI, its use, and its potential consequences is key to maximizing its value while minimizing its risks.
Enjoy your reading!