At the same time, she stresses that AI-led healthcare is not a panacea. While broadly supportive, the AoMRC report identified 12 aspects of AI in healthcare that required further discussion, including patient safety, accountability for decisions, information governance and the need for safeguards to prevent existing inequalities in society from being learned and propagated by machines. It also made seven recommendations for policy-makers and service providers, including the need for more clinicians 'who are as well versed in data science as they are in medicine.'
Work in progress
With most AI tools still in the trial and testing phases, Farzana believes it is the right time to consider a regulatory framework. 'I think there needs to be a lot of careful thought about what we use AI for and how we use it', she reflects.
This work is still in its early stages. For example, last year the government published a Code of Conduct for IT companies setting out the principles for developing data-driven technology for the NHS, which covered areas such as data security and the need to establish an evidence base. More recently, a joint unit called NHSX has been set up to oversee the implementation of new technology and set national policy and standards. And there are already strong data protection laws governing the use of sensitive personal data for research, which would apply to AI development because machines 'learn' by processing vast numbers of health records.
One of the biggest regulatory challenges for AI applications is the opacity of deep learning models (often described as 'black boxes'), which makes it much harder to understand how they reach their conclusions. As Farzana puts it, 'Deep learning networks evolve, which means the algorithm used on a Monday might not be the same algorithm by Friday.'
AI tools that help do things at scale will have a hugely important role in improving global access to healthcare.
Accountability and ethics
Without a clear understanding of the decision-making process, it would be difficult to assign accountability for errors: would the fault lie with the designer of the algorithm or the clinician who acted on the results? Ambiguity on this point could compromise the doctor-patient relationship and undermine public trust.
'It's very difficult to say on balance where liability should fall because each tool could be very, very different,' Farzana says. 'There would need to be an understanding of what the tool did, how it was supposed to be used and any education doctors were to have.' She recognises that AI companies will be hugely reluctant to accept liability for the results, which may deter them from getting involved, but equally, doctors will bridle at the idea of being held liable for something that might be completely outside their control.
The best model she has seen is one where healthcare organisations, manufacturers and doctors agree to share liability. While the Law Commission is currently examining liability in relation to self-driving cars, the equivalent question for healthcare settings has yet to be addressed. As the AoMRC report concludes, 'there is too much uncertainty about accountability, responsibility and the wider legal implications of the use of this technology'.
There will also be a need for new ethical standards for doctors to cover their use of emerging technologies, although Farzana believes the GMC is considering this. 'It's a very complex area so guidance is definitely needed, but at the same time it's really important that these things are thought through.'
Profiles of the future?
Overall, Farzana is optimistic about the potential of AI to make a positive difference to an NHS under pressure from staff shortages and growing patient demand, but she thinks we should be realistic about how it will shape the profile of healthcare in the future. 'I think it's about making sure the solutions being developed are addressing our most pressing problems. Some of that may be diagnostic but often there are tools that can help streamline workflow or data collection.
'Bear in mind that before a patient is diagnosed, they have to go through a series of other steps, from being registered with a GP to getting a hospital appointment. We may find there are machine learning tools that can help with some of those areas and wouldn't necessarily have the same risks that diagnostic AI would have.'
She concludes on a hopeful note. 'I don't think that AI will replace the important role of clinicians in patient care. It will change how we work, but that's just part of what it is to be a doctor. The nature of clinical practice at the end of your medical career will always be different from when you qualified from medical school.'
Interview by Susan Field.