Artificial intelligence has the potential to transform aspects of clinical practice but we must think carefully about its adoption and regulation, says Dr Farzana Rahman.

Technology enthusiasts believe we are on the verge of a new era in healthcare, driven by the power of intelligent, data-crunching machines. Artificial intelligence (AI) used to be the stuff of science fiction novels and futuristic films, where it was often portrayed as a sinister development. But times have changed. The speed with which virtual assistants and other smart applications have become a fixture in our homes suggests that many of us are ready to embrace machines that can learn like humans and perform useful tasks.

It remains to be seen whether this tolerance will extend to the adoption of AI in hospitals and general practices. There is no doubt that the technology will have a significant impact on the delivery of care in the coming years, with the government recently announcing the investment of £250 million to set up a new Artificial Intelligence Lab for the NHS. However, public acceptance is one of several practical, legal and ethical challenges that still confront policy-makers, health organisations, technology companies and clinicians.

Dr Farzana Rahman is ideally placed to consider these issues. As well as being a co-author of Artificial Intelligence in Healthcare, a report by the Academy of Medical Royal Colleges (AoMRC) focusing on the likely impact of AI for doctors and patients, Farzana is a consultant radiologist, a speciality which is at the centre of several diagnostic AI projects.

'There is a huge range of tools in development that are capable of diagnosing and risk stratifying medical images, although they are not yet in widespread clinical use,' she says. 'In the UK, for example, there is the Moorfields-DeepMind Health collaboration which uses ophthalmology data to develop an AI algorithm that can assess retinal OCT scans for signs of disease. And new AI tools are being demonstrated each year at the Radiological Society of North America Conference in Chicago.'

Other UK-based AI projects include chatbots such as 'Ask Oli', which has been designed to reassure and interact with young patients at Alder Hey Children's Hospital in Liverpool, and the Babylon GP at Hand service currently being trialled in the NHS. Meanwhile Great Ormond Street Hospital has been trialling the use of an AI-enabled bodysuit to learn more about movement decline in patients with Duchenne muscular dystrophy.


For Farzana, one of the great potential benefits of AI will be the opportunity to streamline the diagnostic process so over-stretched doctors have more time to spend with the patients who are most in need. 'There is an AI project in the US that focuses on the mental health of veterans,' she explains. 'Based on the data in medical records, they have created an algorithm that predicts those at highest statistical risk of suicide. And they found that by doing the job at scale, psychiatrists could spend more time with those who were most vulnerable, instead of doing risk assessments for everybody.'

Initiatives like this are a reminder that there is a global dimension to the development of AI tools in healthcare. 'The technology shouldn't just be seen within the microcosm of the UK, Europe and the US,' Farzana insists. 'There are huge inequalities when it comes to accessing healthcare resources around the world because a lot of the technology and skill is concentrated in a small number of countries. AI tools that help do things at scale will have a hugely important role in improving global access to healthcare.'


At the same time, she stresses that AI-led healthcare is not a panacea. While broadly supportive, the AoMRC report identified 12 aspects of AI in healthcare that required further discussion, including patient safety, accountability for decisions, information governance and the need for safeguards to prevent existing inequalities in society being learned and propagated by machines. It also made seven recommendations for policy-makers and service providers, including the need for more clinicians 'who are as well versed in data science as they are in medicine.'

Work in progress

With most AI tools still in the trial and testing phases, Farzana believes it is the right time to consider a regulatory framework. 'I think there needs to be a lot of careful thought about what we use AI for and how we use it', she reflects.

This work is still in its early stages. For example, last year the government published a Code of Conduct for IT companies setting out the principles for developing data-driven technology for the NHS, which covered areas such as data security and the need to establish an evidence base. More recently, a joint unit called NHSX has been set up to oversee the implementation of new technology and set national policy and standards. And there are already strong data protection laws governing the use of sensitive personal data for research which would apply to AI development because machines 'learn' by processing vast numbers of health records.

One of the biggest regulatory challenges for AI applications is the opaqueness of deep learning models (often described as 'black boxes'), which makes their decisions much harder to understand. As Farzana puts it, 'Deep learning networks evolve, which means the algorithm used on a Monday might not be the same algorithm by Friday.'


Accountability and ethics

Without a clear understanding of the decision-making process it would be difficult to assign accountability for errors: would the fault lie with the designer of the algorithm or the clinician who acted on the results? Ambiguity on this point could compromise the doctor-patient relationship and undermine public trust.

'It's very difficult to say on balance where liability should fall because each tool could be very, very different,' Farzana says. 'There would need to be an understanding of what the tool did, how it was supposed to be used and any education doctors were to have.' She recognises that AI companies will be hugely reluctant to accept liability for the results, which may deter them from getting involved, but equally, doctors will bridle at the idea of being held liable for something that might be completely outside their control.

The best model she has seen is one where healthcare organisations, manufacturers and doctors agree to share liability. While the Law Commission is currently looking at the question of liability in relation to self-driving cars, the question of liability in healthcare settings is yet to be addressed. As the AoMRC report concludes, 'there is too much uncertainty about accountability, responsibility and the wider legal implications of the use of this technology'.

There will also be a need for new ethical standards for doctors to cover their use of emerging technologies, although Farzana believes the GMC is considering this. 'It's a very complex area, so guidance is definitely needed, but at the same time it's really important that these things are thought through.'

Profiles of the future?

Overall, Farzana is optimistic about the potential of AI to make a positive difference to an NHS under pressure from staff shortages and growing patient demand, but she thinks we should be realistic about how it will shape the profile of healthcare in the future. 'I think it's about making sure the solutions being developed are addressing our most pressing problems. Some of that may be diagnostic but often there are tools that can help streamline workflow or data collection.

'Bear in mind that before a patient is diagnosed, they have to go through a series of other steps, from being registered with a GP to getting a hospital appointment. We may find there are machine learning tools that can help with some of those areas and wouldn't necessarily have the same risks that diagnostic AI would have.'

She concludes on a hopeful note. 'I don't think that AI will replace the important role of clinicians in patient care. It will change how we work, but that's just part of what it is to be a doctor. The nature of clinical practice at the end of your medical career will always be different from when you qualified from medical school.'

Interview by Susan Field.

This page was correct at publication on 07/01/2020. Any guidance is intended as general guidance for members only. If you are a member and need specific advice relating to your own circumstances, please contact one of our advisers.