Generative AI – What Are the Legal Issues?

By James Cassidy (partner) and Alastair Turnbull (solicitor), Information Law & Privacy team, Bevan Brittan

The news has been dominated by stories about the use and development of Artificial Intelligence (AI) in recent months. In particular, ChatGPT from OpenAI has generated a large number of headlines, and generative AI (‘Gen AI’) has become a new watchword for cutting-edge technology. AI and Gen AI have the potential to disrupt every sector and to revolutionise ways of working across every facet of our lives. It has also already proven incredibly popular, with over one million people signing up to use ChatGPT in its first five days. However, it also comes with a number of specific risks and considerations.

The pace of AI development far outstrips the legal, regulatory and ethical frameworks needed to ensure that its risks and benefits are carefully considered. Anyone looking to adopt or develop AI technologies should conduct risk assessments to identify and mitigate the potential impact on individuals.

What is Gen AI?

In broad strokes, Gen AI refers to a system which generates new material based on its parameters and training data. At its root it is a set of algorithms, trained on a dataset, that allows it to consider and analyse new inputs. In effect, you can ask it a question and it will generate new content in order to answer.

Examples include:

  • ChatGPT, which was trained on a snapshot of parts of the Internet, and which provides responses to text prompts;
  • Midjourney, which generates images based on text prompts;
  • products which can “generatively fill” images; and
  • systems which synthesise new music.

Broadly, the purpose and function of a Gen AI system is limited by its complexity and the dataset it has been trained on. It is widely acknowledged that AI could benefit all sectors and revolutionise the way we live our lives, but at present there are still a number of concerns about its use.

What are the risks?

Using Gen AI comes with a number of issues and risks, some of which are still developing. The fact that the law does not develop at the same pace as the technology leaves a number of gaps, and organisations will need to consider how to mitigate the risks of using AI until a more comprehensive framework around its ethical use is developed. Legal risks include:

Data protection

Algorithms will often rely on large volumes of information in order to learn. Where this information contains personal data, the use of AI will need to comply with the UK GDPR. The personal data used to train the model must have been lawfully processed, and even if that personal data is publicly available, this does not mean it can automatically be used for other purposes. Understanding how data is processed by AI can be complex, and if risks are to be assessed, understanding how personal data is used and protected will be vital.

Training dataset quality

Examining the dataset used to train the algorithm will potentially identify areas of risk. For example, an AI designed to sift CVs and provide hiring recommendations might inherit any unconscious hiring biases present in the underlying dataset of ‘successful applicant’ and ‘unsuccessful applicant’ CVs. Not all algorithms are created equal, and consideration should be given to the sophistication and development of any product before use, given the potential impact on individuals.

IP rights

As Gen AI can create new content, who will own the intellectual property in any new work, media, image or music? There may also be IP issues if the Gen AI creator did not have sufficient rights to the information used in the training dataset, and any contract should clearly set out IP ownership where possible.

Accuracy and ‘hallucinations’

Gen AI, and AI generally, has no inherent fact-checking facility. Concerns have already been raised that ChatGPT and other solutions make up, or ‘hallucinate’, facts, up to and including inventing legal cases and precedents which sound legitimate but never happened.

Ethics and regulatory concerns

For some sectors, such as healthcare, there will be real questions about the ethics of implementing AI. What happens if it makes an error which causes an adverse incident in patient care? Does the responsibility sit with the AI creator, the organisation using it, or the individual who made a decision based on the AI’s response? Liability issues will remain controversial and complex.

A look at the future

AI, and Gen AI in particular, is coming on in leaps and bounds, and it is clear that it is here to stay. The next few years are going to be pivotal and disruptive as the law finds ways to respond to new ways of working with AI.

A policy paper from the Government, aimed at developing a “pro-innovation approach to AI regulation”, was presented to Parliament earlier this year. The paper discusses developing trustworthy AI and an approach to regulation at a time when the pace of change is “unsettling”. With the UK striving to be an AI superpower, it will be interesting to see how the regulatory framework develops to ensure that innovation can be supported whilst risks are addressed and mitigated.

James Cassidy is a Partner in Bevan Brittan’s specialist Information Law & Privacy team, and has many years of experience advising on the data protection implications of using new technologies, with a particular focus on the healthcare sector. Alastair Turnbull is a Solicitor in the firm’s Information Law & Privacy team.