INTELLIGENCE ON AI: WHERE ARE WE NOW?

Author: Sukanya Wadhwa

The regulatory landscape in the UK and how this could affect your business.

In a world where generative AI has become mainstream and accessible to the public (ChatGPT attracted 1 million users in 5 days), it is important to remember that in many jurisdictions the technology is racing ahead whilst the law plays catch-up.

Individuals, companies, and countries around the world are grappling with how to harness (and be protected from) this type of technology.

Has the United Kingdom legislated?

The short answer: not much has been formalised here, yet.


The better answer: there’s movement in the right direction. The House of Lords’ AI Bill was introduced at the end of 2023 and is making progress through Parliament. The Bill introduces an AI Authority, with powers such as coordinating relevant legislation, assessing risks, and supporting AI innovators to introduce new technologies.

The principles-based Bill is promising in its aims. It would require tech companies to use copyright material and data to train their AI models only where they have received informed consent from the rightsholders. Developers would also have to provide the AI Authority with a record of all third-party data and IP used.


This is in line with the approach taken by our neighbours in Europe and across the pond. The European Union AI Act requires developers to publish detailed summaries of the content used to train a model. Earlier this month, the US also saw the introduction of the Generative AI Copyright Disclosure Act bill. If passed, this would impose similar obligations on US companies to report copyright-protected training material.


By comparison, the EU AI Act goes further than the current UK and US proposals: developers in the EU are also required to implement technologies allowing rightsholders to opt out of their work being used for AI training.


As things stand, the UK AI Bill is at a very early stage. It does not explain, for example, the consequences for tech companies that fail to obtain informed consent before using copyright works and data. It is also unclear whether the Bill goes far enough to protect creators, or whether the UK’s copyright regime will prove more valuable to the writers, artists, musicians, filmmakers, and other creatives whose works are used to train AI.


The UK’s copyright regime


AI models become “intelligent” through a process of deep learning: the machine trains by adjusting the connections between its artificial neurons, loosely mimicking how the human brain learns. Generative AI goes one step further: the machine can itself create “new” content based on what it has learnt.


Much of the content input to train an AI model is copyright-protected work, so the law on copyright is integral to the regulation of AI.


The current exceptions in the UK Copyright Designs and Patents Act 1988 (“CDPA”) do not completely protect tech companies from claims of copyright infringement for training their AI models. The government formally ditched its plans to expand the text and data mining exception at section 29A of the CDPA, which means that commercial platforms (like the large-language-model tools that have become increasingly popular in the last year) are at risk of triggering litigation.


There is hope that cases such as Getty Images v Stability AI will set the record straight for eager onlookers (e.g. tech companies, creatives, potential claimants). In this case, the US-based media company has taken the makers of Stable Diffusion, a text-to-image tool, to court, alleging copyright infringement in the training of their AI model on Getty’s images.


The case will decide important questions such as (i) whether the model has been trained in the UK such that the CDPA applies (particularly given the developer is based in the US), and (ii) whether making the software available on a website counts as infringement under the CDPA.


Where next?


Regulatory changes and new cases are emerging frequently in this area, and across the globe. In many cases, content owners are taking AI developers to court. Recently, the British Phonographic Industry sent a legal notice to an AI company for allegedly using copyright works to train deepfake technology (which creates music in the voices of renowned singers such as Drake and Amy Winehouse).


In other cases, however, rightsholders such as news and media outlets are opting to work with AI developers instead. Axel Springer, the publisher behind Business Insider and Politico, has partnered with OpenAI to allow ChatGPT to use its copyright works to generate answers to user prompts. Such deals are a good way to ensure that publishers and rightsholders get their piece of the pie.


The most important task for any business now is to keep up to speed.


If you want to know more about AI and how you can protect or expand your business, get in touch with our experts today (info@brandsmiths.co.uk).

About Brandsmiths


Brandsmiths is the go-to firm for the world’s leading brands. With a highly skilled team of IP lawyers, the firm specialises in all matters related to trade marks, copyright, patents, designs, confidential information, and database rights.

About the author

Sukanya Wadhwa is an IP litigator based in London. She advises clients on UK litigation proceedings for cases involving a range of intellectual property rights such as trade marks, copyright, design rights, and passing off. Sukanya has experience working on complex, multi-jurisdictional matters.
