With AI purporting to solve increasingly complex tasks and the health system overstretched, it's understandable that some clinicians want to find ways to reduce their admin load. Responding to a patient complaint can be burdensome, especially with an ever-expanding to-do list, and it is often hard to know where to start.
As such, some doctors are turning to programs like ChatGPT to draft complaint responses for them. Below, we'll consider some risks involved in this approach.
AI and accuracy
The text generated by an AI system might sound plausible and eloquent, and may even flow better than anything you could write yourself, but therein lies the danger. Don't let confident, articulate wording seduce you into believing the content is correct.
Even if you use AI to generate a broad outline for your response to a complaint, you will need to check the draft very carefully to avoid incorporating errors into the final version.
AI is known to 'hallucinate' - to generate text that purports to be factual yet has no factual basis. For example, there are reports of AI inaccurately citing legal precedents.
AI programmes may also use language or law from their country of origin (often the USA), rather than the UK. Examples include the use of the word 'plaintiff' rather than 'claimant', or saying 'contact our office' rather than 'practice' or 'surgery'.
Confidentiality
It goes without saying that including patient-identifiable information in your prompt is inappropriate. That includes any combination of details that, taken together, could identify a patient. The medical history set out in a complaint response is likely to be unique to that patient.
The response to a complaint is often based on medical records that you cannot upload to an AI system. Even when data is anonymised, data protection legislation requires you to tell patients how their data is processed.
Many AI systems are based overseas and log the prompts entered and the content generated. There are already strong data protection laws governing the use of sensitive personal data for research and its transfer outside the UK/EU.
NICE has also produced guidance for the adoption of AI in healthcare, which includes an 11-point checklist and other resources.
Vague phrases and false apologies
AI-written content is often deliberately vague so that it applies to a wide range of situations. This can lead to phrasing that doesn't directly address the issues being complained about.
If the complainant has specified how they have been affected, an apology that reads "We are sorry for any distress this has caused you" might seem insincere and may inflame the situation. Apologies need to be specific and genuine to resonate with complainants - such as, "I was sorry to read about your long hospital stay and the pain you experienced during your recovery."
We would also recommend avoiding so-called 'false apologies'. This refers to apologies that say things like, "I am sorry you feel your care was poor," or, "I am sorry that we did not meet your expectations."
If phrases like these are used, a complainant might think you are suggesting the problem lies with their unrealistic expectations or perceptions.
The Parliamentary and Health Service Ombudsman's (PHSO's) Listening and Learning publication expands on this on page 11, under Getting it wrong.
Obvious AI content
While people cannot always tell whether text was written by a human or by AI, there have been cases where sections of a letter were reproduced verbatim when the recipient asked AI to draft a similar letter, making the use of AI obvious.
Think about how you would respond if a patient challenged you about this. Would you feel comfortable admitting you had used AI to draft the response? Would it indicate to the patient that you hadn't taken their complaint seriously?
Omitting information
Key aspects of a complaint response include things like an offer to meet, what organisational and individual reflection and learning has occurred, and the right to refer the matter to the PHSO.
None of these are likely to be included in an AI-generated response unless they are specified in the prompt.
Reflection
Reflection on concerns raised is a necessary part of a complaint response and therefore outsourcing it to AI defeats the purpose.
Even if you ask AI to draft a reflection on why you missed, for example, a diagnosis of appendicitis, it will simply provide a long list of all the reasons a doctor might miss that diagnosis. This is no substitute for individualised reflection.
In summary
In the face of increased complaints and immense pressure on the health service, it's only natural to want to find ways to work smarter. AI may act as a prompt to get you started, but it is no substitute for the human touch when responding to complaints in a suitably authentic and reflective manner.
A letter mainly drafted by AI can undermine the authenticity of a genuine apology and reflection, and it's also not without risk to the clinician using it to write the response.
Dr Ellie Mein
Medico-legal adviser
MB ChB MRCOphth GDL LLM
Ellie joined the MDU as a medico-legal adviser in 2013. Prior to this she worked as an ophthalmologist before completing her Graduate Diploma in Law in Birmingham.