Artificial intelligence promises to deliver revolutionary changes to our everyday lives. But how will that affect healthcare – and how can doctors prepare?

It's hoped that the introduction of AI into healthcare will lead to higher quality patient care, delivered more efficiently - but NHS organisations are increasingly grappling with the technical, legal and ethical questions that need to be answered before healthcare can really benefit.

Other organisations and individuals might also be considering how to harness the impact and efficiency AI could bring to their work and to patients. There are many ways to get involved, and a good place to start is the NHS AI Lab.

Implementing artificial intelligence

Many factors lie behind the dramatic increase in the use of AI, but a key one is the accessibility of state-of-the-art transformer models to professionals outside IT and data science; anyone can implement an AI solution using available third-party applications in a matter of hours.
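To illustrate just how accessible these tools have become, here's a minimal sketch that summarises a passage of text using the freely available Hugging Face 'transformers' library. The library, model and text are our own illustrative choices, not a recommendation - and as the rest of this article makes clear, no real patient data should be handled this way without appropriate governance.

```python
# A minimal sketch of how accessible modern transformer models are.
# Assumes the third-party 'transformers' library (Hugging Face) is installed:
#   pip install transformers torch
from transformers import pipeline

# Downloads a pre-trained summarisation model on first use.
summariser = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

# Illustrative, non-clinical text only; real patient data would first need
# the data protection and governance safeguards discussed in this article.
text = (
    "The clinic reviewed its appointment system and found that reminder "
    "messages reduced missed appointments, while longer slots improved "
    "patient satisfaction without reducing overall capacity."
)

summary = summariser(text, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```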

To be clear, we are not technical experts in the use or implementation of artificial intelligence itself. But from a practical and medico-legal standpoint, there are nonetheless several things you may want to consider before diving in.


If you want to get started with AI, the first thing is to be clear about what it's going to do. Do you need a model that will classify and summarise text? Will it analyse streams of data for anomalies, or identify pathological images? Is it going to help make appointments, organise staff rotas or listen to patients and make a diagnosis?

Be as clear as you can about what decision your AI is going to make with the data you give it. You can then consider whether it benefits you or your patients for that decision to be made by AI.

The next question concerns the model of AI to be used. Are you going to adopt a fully tested technology that has already been developed, or are you going to provide data to train an adaptable model?

If you are adopting a system that's been developed by someone else, you may wish to ask them whether it complies with the relevant British Standards guidance.

Alternatively, are you an IT expert who can design your own system? Whichever choice you start with, you'll need to know how the model works - and ideally, what data it was trained on.

One challenge of adopting AI is the governance needed to ensure this new technology is implemented safely. Just like a medical test, an AI system needs an evidence base; it needs to show that its results are statistically sound. That means you need to understand the metrics the developers use to establish its effectiveness. And because the model can learn, you need a plan for how the system will be monitored.
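As a hedged illustration of the kind of metrics you might be quoted, the sketch below computes sensitivity, specificity and AUC for a hypothetical diagnostic classifier using the scikit-learn library. The labels and scores are invented purely for illustration.

```python
# A minimal sketch of common evaluation metrics for a diagnostic AI model.
# Assumes scikit-learn is installed; all data here is invented for illustration.
from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical ground truth (1 = disease present) and model risk scores.
y_true = [0, 0, 0, 1, 1, 0, 1, 0, 1, 1]
y_score = [0.1, 0.3, 0.2, 0.8, 0.7, 0.4, 0.9, 0.2, 0.6, 0.3]

# Apply a decision threshold to turn risk scores into predicted labels.
threshold = 0.5
y_pred = [1 if s >= threshold else 0 for s in y_score]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # how often disease is correctly detected
specificity = tn / (tn + fp)  # how often absence is correctly confirmed
auc = roc_auc_score(y_true, y_score)  # threshold-independent discrimination

print(f"Sensitivity: {sensitivity:.2f}")
print(f"Specificity: {specificity:.2f}")
print(f"AUC: {auc:.2f}")
```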

If you are the end user, you might want to ask the developer to design a governance plan for you, but you would still need to evaluate their claims. This can be a detailed process and could feel overwhelming. Thankfully, NICE has produced guidance on the adoption of AI in healthcare, which includes an 11-point checklist and other resources.

Data protection and discrimination

To discriminate against someone, directly or indirectly, on the basis of a protected characteristic would be a breach of the Equality Act 2010. Whether an end user has been discriminated against is a question of fact based on their experience, not your intentions. Because AI risks introducing biases that might not be immediately obvious, these systems must be tested and monitored for bias.
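What might such monitoring look like in practice? As a simple illustrative sketch - the data and groups are invented, and a real audit would need far larger samples plus statistical and legal input - the example below compares a model's false positive rate across two groups defined by a protected characteristic.

```python
# A minimal sketch of monitoring a model's outputs for bias across groups.
# All records here are invented for illustration.
from collections import defaultdict

# Each record: (group label, true outcome, model prediction).
records = [
    ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

# Count false positives among the true negatives for each group.
counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, truth, pred in records:
    if truth == 0:
        counts[group]["negatives"] += 1
        if pred == 1:
            counts[group]["fp"] += 1

for group, c in counts.items():
    fpr = c["fp"] / c["negatives"]
    print(f"{group}: false positive rate = {fpr:.2f}")

# A large gap in false positive rates between groups would warrant
# investigation before and during deployment.
```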

Developing and using AI is fundamentally about the use of data, so complying with the Data Protection Act 2018 is a key concern. The ICO has produced a data protection risk toolkit to assist organisations that are developing or adopting AI, and while it's not a legal requirement to complete the toolkit, it is a helpful tool to navigate a potentially complex area of law.

Copyright and ownership

If you're designing or training an AI system with documents written by others - for example, NICE guidance - you may be breaching their copyright. It's possible that future legislation will establish the process of 'knowledge mining' as always permissible, but at present, using other people's intellectual property in training data is likely to be a copyright infringement.

There is a potential exemption for non-commercial research in section 29A of the Copyright, Designs and Patents Act 1988, but you should check with the author before using any documents in a commercial product. Some third-party providers offer insurance for intellectual property infringements arising from the use of their models, but only if you have adequate testing in place.


Guidance and regulations

At the time of writing, the GMC does not have specific guidance on the use of AI. However, many aspects of 'Good medical practice' (2024) are applicable to the adoption and use of AI technology, including providing a good standard of care, documenting decisions, treating patients fairly and protecting patients' data.

If the AI you create provides healthcare, you may need to register with the CQC. Established healthcare organisations may need to update the list of regulated activities they undertake if the new service expands it.

If an AI system is intended to diagnose, prevent, monitor or treat a disease, investigate physiology, or control contraception, it's likely to meet the definition of a medical device in regulation 2 of the Medical Devices Regulations 2002.

There are some subtleties to this depending on the claims the developer makes about its intended purpose, but the MHRA has created a flowchart to help identify whether software is a medical device. If it is a device, it must be UKCA-marked and comply with classification, design standards and post-market surveillance requirements.

To provide AI services within NHS primary and secondary care in England, your organisation has a responsibility to comply with NHS Digital information standard DCB0160. This standard - published by the Secretary of State for Health under section 250 of the Health and Social Care Act 2012 - covers all clinical IT services, not just AI, and requires a clinical risk analysis and risk management system.
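Part of that clinical risk management process is typically a hazard log. As a simplified sketch only - the fields and scoring below are our own illustrative assumptions, not the standard's template - an entry might be recorded like this:

```python
# A simplified sketch of a hazard log entry of the kind a clinical risk
# management process might maintain. The fields and scoring scheme are
# illustrative assumptions, not the template defined by the standard.
from dataclasses import dataclass

@dataclass
class HazardLogEntry:
    hazard_id: str
    description: str      # what could go wrong
    clinical_impact: str  # potential harm to patients
    likelihood: int       # e.g. 1 (very low) to 5 (very high)
    severity: int         # e.g. 1 (minor) to 5 (catastrophic)
    mitigations: str      # controls that reduce the risk

    @property
    def risk_score(self) -> int:
        # A simple likelihood x severity matrix, for illustration only.
        return self.likelihood * self.severity

entry = HazardLogEntry(
    hazard_id="HAZ-001",
    description="AI triage tool under-prioritises an urgent referral",
    clinical_impact="Delayed diagnosis and treatment",
    likelihood=2,
    severity=4,
    mitigations="Clinician review of low-priority outputs; audit of overrides",
)
print(f"{entry.hazard_id}: risk score {entry.risk_score}")
```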


Legal status

An AI system is not a legal entity and cannot be sued or convicted, which means any claim, regulatory action or prosecution would either fall against the developer, the user or both.

In practical terms, this means the user must be satisfied that the suggestions the AI provides are correct. You should check you have the appropriate indemnity in place to cover potential claims or other complaints arising from AI systems you adopt.

Different approaches to AI regulation are being developed across the world, and other jurisdictions may have significantly different rules on training and use of AI. Any processing of data in the EU - for example, in the Republic of Ireland - will need to comply with the EU AI Act in the near future.

AI has the potential to achieve great things for healthcare in the UK. But to do so, it needs effective governance and an understanding of the risks as well as the benefits.

This page was correct at publication on 04/04/2024. Any guidance is intended as general guidance for members only. If you are a member and need specific advice relating to your own circumstances, please contact one of our advisers.