The AI Act: What Does It Mean for Patenting AI Products?

By Caroline Day, UK and European Patent Attorney and a leading expert in protecting Artificial Intelligence (AI) inventions at Haseltine Lake Kempner LLP

The EU is set to introduce its AI Act, likely at the end of this year. The AI Act aims to protect people from the risks associated with artificial intelligence while still allowing for innovation and development. As AI progresses rapidly, the EU is determined to stay at the forefront of its development.

Why are the new rules coming into play?

Whilst AI has many potential benefits, it does have a dark side. For example, it is easy to unintentionally programme bias into an AI product based on factors such as race or socioeconomic status. In addition, the way AI systems use data has the potential to invade privacy. Because of this, the EU is trying to strike a balance between safeguarding people’s rights and interests and promoting technological progress.

Can I protect my AI innovation with a patent?

Due to how patent offices worldwide treat computer-implemented inventions, AI is a complex area to patent, but it is by no means impossible. With the right legal advice, you may be able to protect your AI invention and prevent others from copying, manufacturing, selling or importing it without permission.

Currently, some people feel confident relying on trade secrets to protect their AI innovations, but for specific categories, that may become harder once the AI Act comes into force.

The Act will see some systems prohibited entirely, while others will face stringent rules and regulations, with the risk of large fines for non-compliance. What’s more, some of these rules may require innovators to disclose details about how their AI works – once that happens, patenting may no longer be an option in some cases, because an invention that is already in the public domain cannot be patented.

What’s most important for AI innovators right now is to understand the changes that are coming, identify the categories their products may fall into, and seek the right advice quickly. Acting early can mean that you can protect your AI product with a patent (where it is appropriate to do so) before the rules apply.

What are the categories of AI systems, and what requirements does the Act place on providers?

The Act lists three categories of AI systems. The first relates to systems associated with an ‘unacceptable risk’. It includes systems which seek to manipulate vulnerable persons, systems used for social scoring, and the use of real-time biometric identification, such as facial recognition (with limited exceptions for law enforcement). These systems are simply prohibited in the EU.

The second category is ‘high risk’. There are two main parts to this: systems which are key to safety, and systems which could be socially damaging, such as those where bias could be particularly harmful. For instance, AI systems associated with access to opportunities in life, such as education, employment, credit scoring, and public services, fall into this category. The Act is intended to ensure that everyone is treated fairly and not subjected to prejudice or discrimination baked into an AI system.

The AI Act introduces additional burdens when bringing such systems to market. For example, documentation must be kept, information on how the system works must be made available to users, there must be appropriate human oversight, and the list continues from there. If providers get this wrong, the financial consequences could be huge.

The third category covers ‘minimal risk’ and ‘low risk’ systems. The AI Act is not looking to over-regulate such systems, but where there is a risk of manipulation (for example, a person may think they are interacting with another person rather than a chatbot), there are transparency obligations to make the true situation apparent.

The key to the AI Act is balance. It’s not about stifling innovation and the incredible potential of this technology by being unduly hard on developers; it’s about putting in place the safeguards necessary to protect the public.

How will I know what category my AI product fits in?

Understanding what category an AI product fits into could be a challenge. To mitigate the various associated risks, the EU is encouraging everyone to act as if their systems are high risk, even if they suspect the true risk category is lower. This means keeping good records and fulfilling transparency requirements. The EU believes that this could result in a de facto code of conduct for all AI that will become central to a developer’s strategy.

What should I look at in terms of my Intellectual Property (IP) strategy for AI?

When considering your IP strategy, it is essential to look at the potential disclosure obligations which your AI application could face. Securing an intellectual property right in relation to artificial intelligence requires specialist expertise, and the right lawyer can help you determine the best option for your application.

With the AI Act fast approaching, it’s essential to act quickly to understand whether your AI ideas and developments can be protected before they enter the public domain.