In the latest report in the HTN Health Tech Trends Series sponsored by InterSystems, we asked healthcare professionals about machine learning.
Two striking details emerged from this particular survey question. Firstly, despite respondents to last week’s survey article thinking overwhelmingly positively about the potential for AI in healthcare, 70% of respondents to this survey answered ‘no’ when asked whether machine learning is used in their organisation.
Secondly, this particular survey question saw the greatest number of respondents ‘skipping’ it, perhaps indicating unfamiliarity with the term ‘machine learning’, or suggesting that the question simply did not apply to that respondent’s organisation.
As mentioned in last week’s article, ‘artificial intelligence’ can be an ambiguous term and can be applied in varying contexts, with different methods, for different outcomes, although the general consensus in healthcare favours AI taking on administrative tasks.
‘Machine learning’ proves to be an equally ambiguous term, if not more so, and there is logic to this.
Machine learning (ML) is in fact considered a subset of AI and, as a result, is a method that allows AI to display its ‘intelligence’. Last week’s article talked briefly about complex algorithms within a system, and this, in a nutshell, is what ML is.
ML should be viewed as a tool for prediction; it is not explicitly instructed to perform a specific function. Instead, it relies on patterns and inferences. The more interactions it has, the more it appears to ‘learn’, or rather, to make predictions based on previous data inputs.
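As a loose illustration of what ‘making predictions based on previous data inputs’ can mean, the toy Python sketch below predicts an outcome by finding the most similar previous example. The features, labels and nearest-neighbour rule here are all invented purely for illustration; real healthcare systems use far richer data and far more sophisticated models.

```python
# A minimal illustration of "learning" from previous data inputs:
# a toy 1-nearest-neighbour predictor. All data is invented.

def distance(a, b):
    # squared Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(history, features):
    # return the outcome of the most similar previous example
    closest = min(history, key=lambda example: distance(example[0], features))
    return closest[1]

# previous data inputs: (features, outcome) pairs
# hypothetical features: (age in decades, prior missed appointments)
history = [
    ((3, 0), "attended"),
    ((7, 2), "missed"),
    ((4, 1), "attended"),
    ((8, 3), "missed"),
]

# a new case is classified by its nearest previous example
print(predict(history, (6, 2)))  # closest to (7, 2) -> "missed"
```

Adding more examples to `history` is the sense in which such a system ‘learns’: no rule is ever written down explicitly, yet its predictions shift as the accumulated data grows.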
The vast majority of us are now familiar with devices such as virtual personal assistants, and there are other common applications for ML too – email filtering for example. What is not so clear is the term ‘machine learning’ itself – something which our survey picked up on:
“Not sure,” said two respondents; “to some degree in research,” claimed another; “might be but not in my area” and simply “don’t know” were further responses.
There were, however, some responses which showed the application of ML techniques in certain organisations.
A clinical informatician wrote “ML is used as part of our in-house development – patient flow”, and a transformation officer stated “ML is currently in the pilot stage for estimating those patients who are medically fit for discharge”.
It seems from this that ML has begun to creep into some healthcare organisations, albeit in the early stages of development for predicting patient movement and improving efficiency.
One respondent wrote about their collaboration with Intel, which has teamed up with Lumiata, a ‘healthcare AI Platform to Drive Affordability and Quality of Care’. The platform helps manage risk and cost with the aim of making healthcare more efficient.
Another respondent mentioned “ML within radiology”, where machine learning aims to identify a condition more accurately than the radiologists it assists. A study conducted at Stanford found that an ML algorithm could detect pneumonia more accurately, on average, than the healthcare professionals involved in the trial.
This may cause some to feel apprehensive about the wholesale application of machine learning; in the past, programmed hardware replaced the human worker, whereas now it may be programmed software that does so.
Eric Topol, physician, scientist and author, took to Twitter earlier this week to pose the very question ‘Will Artificial Intelligence Replace Radiologists?’ At this stage in its application, that seems very unlikely. Indeed, one response to his post suggested that machine learning and AI will act more as an assistant than as the actual ‘clinician’: “radiologists who use AI will replace radiologists who don’t”.
Undoubtedly, there are many of us who are not ready to see the potential shift to ‘unshackled AI’ or rather AI which has full control over serious matters such as procedures in healthcare, but there does seem to be a creeping in of AI and ML into clinician assistant roles.
Eleonora Harwich and Kate Laycock, in their report ‘Thinking on its own: AI in the NHS’, describe the current level of patient trust in AI as ‘not particularly high’, with only 47% of their sample willing to use an intelligent healthcare assistant via a smartphone, tablet or PC.
There is something in our existential identity that rejects placing trust in anything other than a human being, though Harwich and Laycock go on to say that “attitudes change, however”.
Whilst ML and AI seem to be at the development and initial implementation stage in the NHS, large private companies such as Intel, as mentioned previously, and Google are researching and trialling AI projects in order to improve the diagnostic capabilities and efficiency of healthcare organisations.
Google’s DeepMind Health has agreements with NHS organisations including Royal Free London NHS Foundation Trust, Imperial College Healthcare NHS Trust, Taunton and Somerset NHS Foundation Trust, Moorfields Eye Hospital NHS Foundation Trust and University College London Hospitals NHS Foundation Trust.
DeepMind commented on this work: “We joined forces with Google in 2014 to accelerate our work, while continuing to set our own research agenda. Our programs have learned to diagnose eye diseases as effectively as the world’s top doctors, to save 30% of the energy used to keep data centres cool, and to predict the complex 3D shapes of proteins – which could one day transform how drugs are invented.”
We asked Bryn Sage, chief executive at Inhealthcare, how the company is adopting machine learning techniques. Bryn said:
“We have embraced the machine learning capabilities of cloud-based computing through our long-standing partnership with Amazon Web Services. Such technologies used to be expensive both in terms of the licences and the skilled workforce needed. Powerful health and care applications have been made possible through this democratisation of machine learning. What used to be a specialised field has now become virtually mainstream to the extent that if healthcare technologists are not using machine learning now, they will get left behind.”
What are the use cases for Machine Learning in healthcare?
“Much of the hype around machine learning has surrounded its alleged ability to make better clinical decisions. A lot is made about claims that an AI robot can make better decisions than a clinician. This is definitely not an area where Inhealthcare uses machine learning. We use machine learning to help plan with capacity and improve efficiencies.”
“Through our digital pathway engine, NHS organisations are able to automate many of the processes and patient-healthcare professional interactions that would typically be paper-based. In doing this, large quantities of data are generated. We use machine learning against this process-level data in order to generate many useful insights. These help to predict where bottlenecks might occur and identify better ways of delivering care. One live example is in identifying patients who are at risk of breaching the 62-day cancer targets. This is clearly beneficial to both the patient and the NHS organisation. We are developing this as a service in partnership with a trust in the North of England to support patients and clinicians.”
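The kind of process-level prediction described above, flagging patients at risk of breaching a 62-day target, could be sketched along the following lines. This is a hypothetical illustration only: the step names, average durations and threshold logic are our own assumptions, not Inhealthcare’s actual method, which would learn such estimates from historical pathway data rather than hard-coding them.

```python
# Hypothetical sketch: flag patients at risk of breaching a 62-day
# target using process-level data. Step names and durations are
# invented; a real system would estimate them from historical data.

# assumed average historical duration (days) of each pathway step
AVERAGE_STEP_DAYS = {
    "diagnostic_test": 10,
    "mdt_review": 7,
    "treatment_planning": 9,
    "first_treatment": 14,
}

def projected_total_days(days_elapsed, remaining_steps):
    # project completion: days so far plus the expected duration
    # of each step still outstanding on the pathway
    return days_elapsed + sum(AVERAGE_STEP_DAYS[s] for s in remaining_steps)

def at_risk(days_elapsed, remaining_steps, target_days=62):
    # flag the patient if the projected pathway exceeds the target
    return projected_total_days(days_elapsed, remaining_steps) > target_days

# 30 days in, two steps left: 30 + 9 + 14 = 53 days -> not at risk
print(at_risk(30, ["treatment_planning", "first_treatment"]))
# 45 days in, three steps left: 45 + 7 + 9 + 14 = 75 days -> at risk
print(at_risk(45, ["mdt_review", "treatment_planning", "first_treatment"]))
```

Even a simple projection like this shows why process-level data matters: the flag is raised weeks before the breach occurs, while there is still time to intervene.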
What must be taken into consideration when adopting Machine Learning technology?
“There are obvious complexities in testing a system that is not deterministic. By this, we mean it is not guaranteed the same test conditions will always yield the same results. The rules that form the basis of clinical safety, around identifying and mitigating risks, become much less watertight as a result. Who accepts these risks and responsibilities, and ultimately who is to blame if things go wrong? These are difficult questions and explain why, at this time, Inhealthcare is focused on only using machine learning and artificial intelligence to support non-clinical decision making.”
Given the pace at which such technologies are growing, we will launch our 2020 survey early next year to further benchmark machine learning techniques in healthcare.