

Generative AI tools - can you afford to ignore them?

Everywhere you look at the moment, AI is producing something - a picture for your wall, a new song, a bit of code. And plenty of legal issues are going to arise as a result.

If you are developing AI systems, getting it wrong can be costly and devastating - as shown by last year's decision of the UK's data protection regulator, the Information Commissioner's Office (ICO). The ICO fined facial recognition company Clearview AI over £7.5m for using images of people in the UK, and elsewhere, collected from the web and social media to create a global online database that could be used for facial recognition. The ICO also issued an enforcement notice ordering the company to stop obtaining and using the publicly available personal data of UK residents, and to delete the data of UK residents from its systems. Very costly indeed, both in immediate financial terms and for the future of the company.

The legal issues are new and ever-changing. So, more than ever, it's important to seek legal advice as early in the process as you can.

How to move forward? Developing and refining a generative AI policy or guidelines is an iterative process. It requires input from a range of different stakeholders. It needs to (i) align with the organisation's overall strategy for generative AI and (ii) be kept under review as the legal landscape for generative AI tools develops. Generative AI has burst onto the scene, and there is a lot to think about. Engage with it and the opportunities are huge. And speak to Bird & Bird to help you understand the risks and develop and refine your policy and guidelines.