The EU AI Act and other key AI regulations and guidelines

How to implement compliant and ethical AI

AI presents many new possibilities for companies and organisations, but, as with any powerful but still experimental technology, there are also a range of risks in using it.

Some of those risks are clear and specific, such as creating false information. Others are much harder to define and can seem fantastical: many leading experts in the AI sector, for example, genuinely worry that AI will become so intelligent that it presents a threat to humanity.

Many leading AI developers have themselves voiced serious concerns about the potential dangers of uncontrolled AI systems, and we have recently seen the first steps by governments towards regulating AI systems.

But where does that leave companies and organisations? How can they make sure they can use AI safely and responsibly?

What is clear is that if AI presents risks at a government and international level, then it also presents risks for companies and organisations using AI technologies. It makes sense for companies and organisations to put in place guidelines as soon as possible, so they can effectively manage risks now and in the future.

 In this article we cover:

  • What steps are being taken by the EU, the US, other countries and international bodies to regulate AI 
  • What are the main principles being adopted to regulate AI
  • How can companies and organisations apply those principles to develop their own practical guidelines for using AI

AI regulations - what we know so far

Just as big tech companies and AI developers scramble to release their new AI models ahead of their competitors, so governments have been racing to release plans for regulating AI. Despite the rush, they all hope to deliver a coherent and consistent approach for making sure the benefits of developing AI outweigh the possible harms.

This is of course an entirely new field, and one of the main risks raised is that we don't yet have a clear understanding of how AI can be effectively regulated. That means that just as the AI technologies are experimental, so are the plans to regulate those technologies.

Nevertheless, late 2023 saw three significant developments in international AI regulation.

AI Safety Summit

First, the inaugural international AI Safety Summit took place in November 2023, bringing together the leading AI companies and representatives from 27 countries, including China, the US and the EU.

The summit resulted in:

  • A declaration noting that AI risks are "best addressed through international cooperation"
  • Agreement on "building a shared scientific and evidence-based understanding of ... risks", collaborating appropriately, and "building respective risk-based policies ... to ensure safety"
  • An announcement that the UK will create an AI Safety Institute for researching and testing advanced AI systems
  • Agreement to hold further summits every six months

US Executive Order

Just a few days before the AI Safety Summit, the US published an Executive Order on AI. The Executive Order includes:

  • Creating new safety and security standards for AI, including measures that require AI companies to share safety test results with the federal government
  • Protecting consumer privacy, by creating guidelines that agencies can use to evaluate privacy techniques used in AI
  • Helping to stop AI algorithms from discriminating, and creating best practices on the appropriate role of AI in the justice system
  • Creating a programme to evaluate potentially harmful AI-related healthcare practices and creating resources on how educators can responsibly use AI tools
  • Working with international partners to implement AI standards around the world.

The EU AI Act

In early December 2023, the EU reached provisional agreement on the first AI Act, which is likely to become law in the EU in late 2025. The Act provides more detail than either the AI Safety Summit or the US Executive Order.

Essentially, the EU AI Act will assign each AI system to a risk category, based on its uses and the fundamental human rights that may be affected. The AI Act includes:

  • A risk-based approach with four categories:
  • Minimal risk: this might include spam filters and basic chatbots - these systems will not require any further regulation
  • High risk: systems that could cause serious harm to people if they went wrong or were abused, for example those used in recruitment or healthcare. Systems classified as high risk will need to comply with requirements that are still being defined, but these may include rigorous independent testing, detailed activity logs and clear, comprehensive information for users
  • Unacceptable risk: systems such as social scoring or workplace emotion recognition. These will be banned outright
  • Specific transparency risk: systems that could violate people's rights if their use is not fully disclosed, such as synthetic content (like deepfakes) or chatbots
  • The Act will also introduce regulatory sandboxes, i.e. safe testing environments to facilitate responsible innovation and the development of compliant AI systems.
  • Failure to comply with the regulations could result in major fines, of up to 7 percent of a company's global turnover.

Key regulation principles

All three of these initiatives look like uncertain first steps and they are frustratingly short on detail. But what we can say at this early stage is:

  • The debate on whether to regulate AI or not seems to be over; there is now a clear international consensus that AI should be regulated.
  • Concrete plans for regulation are likely to be implemented relatively soon, and that could affect not just AI developers, but anyone working with AI.
  • There is a strong emphasis on "foundation" models, the AI systems underlying all other AI tools, and also "frontier" AI, the most powerful foundation models that are most likely to lead to new capabilities - and therefore new risks.
  • Regulations are likely to focus on a "risk-based approach": identifying the particular harms that AI systems can do - both in general and for specific systems - and drafting regulation to mitigate those risks.
  • There will be major public investment in working out how to test those systems, and then in implementing common standards for testing.
  • Overall, countries and blocs as diverse as the EU, the US and China all currently seem more interested in collaborating on AI regulation than in competing, but that could of course change.

The two key principles being applied that we can take from these developments are:

  1. A risk-based approach
  2. Comprehensive safety evaluation and testing

But what does that mean in practice? And can companies and organisations also find ways to apply those principles when they use AI in their own projects?

AI developers

For developers of AI, a risk-based approach is central to the Responsible AI Licences (RAIL) project: https://www.licenses.ai/

RAIL is an advisory project that develops publicly available licences for AI developers to use, which also cover those AI systems' end users. By using RAIL licences, AI developers take a positive step towards introducing responsible AI principles in their work.

The RAIL initiative also enables developers to join a community seeking to improve the accountability and safety of all AI systems.

AI users

For AI end users, there are currently few publicly available initiatives or guidelines to help introduce responsible AI policies. But using AI without making some attempt to manage the risks could cause harm - to your organisation, your staff and the people you work with.

One possible way to follow a risk-based approach is to use a simplified risk register.

Of course, it is very hard to measure these risks, so any assessment will be highly subjective, but the register can still serve as a useful framework.

In this register we simply note each risk, an assessment of its probability and impact based on a range of very common use cases, and the recommended mitigation steps.

A risk register is meant to be updated regularly: as you monitor and review your use of AI, the register should become increasingly accurate and comprehensive, and therefore more effective. A minimal code sketch of such a register follows the sample below.

Sample risk register

Risk: Poor quality of content / spreading disinformation on social media and public channels
Probability and impact: High risk if using AI to create content; impact moderate, depending on use
Recommended mitigation steps: All content produced through AI must be cross-checked for accuracy. Hold regular quality reviews and implement feedback systems with end users to maintain content quality. Ensure all content produced by AI is clearly labelled.

Risk: Making people more unhappy and insecure in their jobs
Probability and impact: High risk and impact if staff feel their work is being undermined or replaced
Recommended mitigation steps: Consult with staff about where and how AI is being used. Provide clear internal communications on how AI should and should not be used.

Risk: High environmental impact
Probability and impact: High probability, but overall carbon impact likely to be low to moderate compared to other activities
Recommended mitigation steps: Where possible, check the carbon intensity of the AI model used. Assess all potential uses for carbon impact, so spurious uses are discouraged. Ensure AI is used as efficiently as possible, e.g. minimise prompts.

Risk: Even greater dependency on proprietary systems
Probability and impact: Depends on the model used, but most will depend on a few LLMs
Recommended mitigation steps: Put in place alternative processes and tools. Ensure uses are spread between different tools and models.

Risk: Copyright violations and plagiarism
Probability and impact: High risk with image generation; possible risk with some text uses
Recommended mitigation steps: Check all output images through a reverse image search, such as Google's. Use models that have robust data collection policies and are transparent on attribution in results. Have clear processes for creators to follow if they have concerns about your content.

Risk: Keeping business information secure
Probability and impact: Moderate risk that some information may leak into training data; impact will depend on information sensitivity
Recommended mitigation steps: Check the data integrity policies of the AI tool. Introduce robust policies for what data can and cannot be shared with AI tools.

Risk: Personal privacy
Probability and impact: High risk if personal data is shared with AI tools
Recommended mitigation steps: Introduce strict controls on sensitive personal data shared with AI tools. Ensure all data shared with AI tools is anonymised, and that the potential harms of each use are carefully assessed beforehand. Conduct regular audits to ensure private data is not being compromised.

Risk: Unclear accountability for making decisions
Probability and impact: High risk if AI is being used for business decisions
Recommended mitigation steps: Ensure any use of AI in business processes is assessed for impact. Make sure a member of staff is personally responsible for each decision made. Put in place transparency practices explaining clearly how AI has been used.

Risk: Biased information
Probability and impact: Moderate risk, with potentially high impact if people or groups feel harmed
Recommended mitigation steps: Assess all possible uses for possible bias, using a checklist for staff.

Tools such as CrossPlag (https://app.crossplag.com/) and the GPT-2 Output Detector (https://openai-openai-detector.hf.space/) can help check whether text is AI-generated or plagiarised.
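The register above can also be kept in a lightweight machine-readable form so that it is easy to review on a schedule. The following Python sketch is only an illustration of that idea: the field names, the example entry and the 90-day review interval are assumptions, not a standard.

# A minimal sketch of a risk register kept in code; field names and the
# example entry are illustrative assumptions, not a prescribed format.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RiskEntry:
    risk: str            # short description of the risk
    probability: str     # e.g. "low" / "moderate" / "high"
    impact: str          # e.g. "low" / "moderate" / "high"
    mitigation: str      # agreed mitigation steps
    last_reviewed: date  # when this entry was last reviewed

register = [
    RiskEntry(
        risk="Poor quality content / spreading disinformation",
        probability="high",
        impact="moderate",
        mitigation="Cross-check all AI content for accuracy; label it clearly.",
        last_reviewed=date(2024, 1, 15),
    ),
]

def entries_due_for_review(entries, max_age_days=90):
    """Return entries that have not been reviewed within max_age_days."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [e for e in entries if e.last_reviewed < cutoff]

for entry in entries_due_for_review(register):
    print(f"Review due: {entry.risk}")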

A draft list of guidelines

From this we can start to see some basic rules that make sense for anyone trying to use AI in their work. For more specific uses you may want to add further rules, possibly based on the risk register above, but this could be a useful start.

Check and keep track of the models you use

  • Look at the model's policies for data collection, responsible development, evaluation, and environmental impact
  • Ensure only approved models are used in your workplace (a minimal sketch of such a check follows below)
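One lightweight way to apply the "approved models only" rule is to keep a shared list of vetted models that any internal tool or script checks before use. The sketch below is purely illustrative: the model name "example-llm-v1" and the policy fields are hypothetical placeholders, not recommendations.

# A minimal sketch of an "approved models" check; model names and policy
# fields are hypothetical examples.
APPROVED_MODELS = {
    "example-llm-v1": {
        "data_collection_policy": "reviewed 2024-01",
        "environmental_statement": "published",
    },
}

def check_model_approved(model_name: str) -> bool:
    """Return True if the model is on the organisation's approved list."""
    if model_name not in APPROVED_MODELS:
        print(f"'{model_name}' is not approved - request a review before use.")
        return False
    return True

check_model_approved("example-llm-v1")   # True
check_model_approved("unknown-model")    # prints a warning, returns False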

Put in place a checklist for every new use

  • Make sure every use is justified, clearly defined, risk assessed and recorded (a sketch of such a record follows this list):
  • Description of the use case
  • Why is it necessary?
  • What model is being used?
  • Are there any potential risks?
  • How are they being managed?
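To make the checklist easy to record and review, it can be captured as a simple structured record. The sketch below is one possible shape for that record, assuming fields that mirror the questions above; the example values and model name are hypothetical.

# A sketch of the per-use checklist as a structured record; field names
# mirror the questions above and are an illustrative assumption.
new_use_case = {
    "description": "Drafting first versions of internal meeting summaries",
    "why_necessary": "Saves staff time on routine write-ups",
    "model_used": "example-llm-v1",      # hypothetical model name
    "potential_risks": ["confidential details leaking into prompts"],
    "risk_management": ["strip names and sensitive data before prompting"],
    "responsible_person": "A. Example",  # see the next guideline
}

def checklist_complete(use_case: dict) -> bool:
    """Check that every field in the checklist has been filled in."""
    required = ["description", "why_necessary", "model_used",
                "potential_risks", "risk_management", "responsible_person"]
    return all(use_case.get(field) for field in required)

print(checklist_complete(new_use_case))  # True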

Always have a member of staff responsible

  • Every use must have a person responsible and accountable for all outputs
  • Ensure that person is widely known to anyone impacted

Be clear and transparent with how and why you're using AI - internally and externally

  • Consult widely with your team about how and why you're using AI before deploying it
  • Make sure any concerns are heard before any decisions taken
  • Label any content or outputs as AI generated so no one is misled

Make sure every AI output is assessed for accuracy, integrity and potential harms

  • Set up a system for monitoring and checking your use of AI that everyone in your workplace fills in if and when they use AI tools (see the sketch after this list)
  • Review all uses regularly - this may also enable you to use AI more effectively in the future
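A monitoring system does not need to be complicated: a shared spreadsheet, or a small script that appends each use to a log, can be enough. As a minimal sketch, assuming a shared CSV file named ai_usage_log.csv (an arbitrary choice) and illustrative column names:

# A minimal sketch of an AI-use log that staff append to whenever they use
# an AI tool; the file name and columns are illustrative assumptions.
import csv
from datetime import datetime
from pathlib import Path

LOG_FILE = Path("ai_usage_log.csv")
COLUMNS = ["timestamp", "user", "tool", "purpose", "output_checked"]

def log_ai_use(user: str, tool: str, purpose: str, output_checked: bool) -> None:
    """Append one AI-use record to the shared CSV log."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(COLUMNS)
        writer.writerow([datetime.now().isoformat(), user, tool,
                         purpose, output_checked])

log_ai_use("a.example", "example-llm-v1", "draft newsletter text", True)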

Ensure you have clear processes for people who have concerns about your AI-generated output

  • Add a contact point and process that is visible on external facing content - such as your website

Be especially careful when AI is being used with a person's data or can directly affect individuals

  • Any use of AI that uses personal data should be approached with extreme caution - the risks may well outweigh the benefits
  • Pay special attention to where there could be biases, for example when considering results that could affect specific groups

This is of course just a starting point, and we can be sure different approaches for AI guidelines will emerge in the future.

Conclusion

There is already a clear international consensus, among policy makers at least, that AI is too important a technology to leave unchecked. Many of the risks discussed in the media might seem far-fetched, but there are also many risks that are all too real and present a threat now.

For any company or organisation that is using AI or planning to use AI, introducing guidelines and some form of risk management is an excellent way to ensure you are using this technology in a way that has a positive impact on you and everyone you work with. And by exploring these issues early, you may be far better placed to manage any regulatory requirements that may be placed on you in the future.


This article has been realised with the help of Bundesministerium für Wirtschaft und Klimaschutz and NextGenerationEU.