HTN AI and Data: Peter Thomas, CCIO at Moorfields Eye Hospital, on building foundations for scaled AI implementation

At HTN AI and Data, we were joined by Peter Thomas, chief clinical information officer and director of digital medicine at Moorfields Eye Hospital, for a discussion on building the foundations of scaled AI implementation. Peter discussed the applications of AI in eye care, and considered how the broader healthcare system might have to change in order to ensure that implementations are successful and safe.

AI, machine learning, computer vision and computer natural language processing

To begin, Peter clarified the difference between the four key terms in this area.

  • AI: the ability of machines or computers to perform tasks that would otherwise require human intelligence
  • Machine learning (ML): a large domain of AI, allowing machines or computers to learn how to solve a task without being explicitly programmed
  • Computer vision (CV): the ability of computers to have a visual understanding of the world, and perform tasks we would normally associate with organic visual systems
  • Computer natural language processing (CNLP): the application of computational techniques to the analysis and synthesis of natural language and speech

From 03:43, Peter used a real-life example to illustrate the differences between them.

Early hopes

Peter shared a quotation from J. H. Mitchell in the British Medical Journal in 1969, to illustrate how long technology like this has been on the horizon: “Attempts are now being made to design computerised ‘total information systems’ to replace conventional paper records, and the possibility of automated diagnosis is being seriously discussed.”

Peter said: “It’s pretty accurate for where we are now – automated diagnosis is just beginning to come through, if not in huge amounts… against all of our excitement for implementing AI and digitising and automating healthcare, we have to be aware that these same beliefs and hopes have been in place for at least 50 years. So we need to take a good hard look at why we haven’t made more progress in that time.”

In 1977, Peter explained, “one of the really early, landmark pieces of work in artificial intelligence and computational techniques to making diagnoses came out of Stanford. It was called Mycin and it allowed you to input lots of facts about a patient into a computer and the output would be a diagnosis or perhaps a recommended treatment plan. It was expert at diagnosing and recommending treatments for different infectious diseases and it performed exceptionally well – so well, that when you put all the right data into it, it was more accurate than individual clinicians. The only way you could match it was by gathering clinicians together and coming to a consensus opinion.”


Although the technology was out-performing humans, it never actually made its way into clinical practice. “A paper was published in the 90s looking at the subsequent versions of Mycin, which failed to have a clinical impact. There were many different reasons for that – distrust is one of them, but the reasons can be practical too. If you’re a clinician in 1977, the chances are that the computer capable of running Mycin isn’t going to be sitting on your ward. If you need to travel to a different building to access the computer, then input all of your patient data which you’ve got written down on paper, that’s not practical for a decision that is statistically only slightly better than the decision you would come up with independently.”

Mycin was a capable technology, doing what it was designed to do, said Peter; but it didn’t fit well into standard workflows and so became non-implementable. “That’s a theme we see with a lot of new AI technologies coming into clinical care.”

After having a look through PubMed and other databases, Peter shared the earliest example he could find of a study on AI in ophthalmology: a visual field analysis using artificial neural networks from 1993.

“Again, that has not led through to anything that is being used in regular clinics,” Peter said. “30 years of applying artificial intelligence to a problem that faces thousands of doctors every day around the world hasn’t yet led to something that is better than the human ability to look at visual fields and interpret them, or simply look at plots on a graph over time.”

Peter moved on to share the Hype Cycle for Artificial Intelligence from Gartner in 2021, available to view in more detail at 14:35.

“This tracks the development towards maturity of different technologies with time, starting with the innovation trigger,” he explained. “Everyone gets excited and you get this peak of inflated expectation – but then you realise that it doesn’t meet those expectations and that leads to a trough of disillusionment. Then you get the slope of enlightenment and the plateau of productivity, where technologies are refined and it becomes something useful.”


Next, Peter turned the discussion onto chatbots: their history, the opportunities, and his own experiences.

Noting the recent levels of interest in ChatGPT, he commented that the concept of a chatbot is nothing new; ELIZA, a basic chatbot based on the Rogerian approach to psychotherapy, was developed in the 60s.

Peter described how he developed an interest in chatbots around five or six years ago. He built a prototype for an internal chatbot designed to automate answers to the most common patient queries coming in to the service. “It was capable of understanding the questions even if they were being asked in different ways, and it had quite a lot of functionality that allowed you to go down branching decision trees and cover different eventualities. It didn’t get fazed when I tried some abrupt language – when it detected that the input was a bit angry, it tried to give a more understanding answer.”

Moorfields released a public-facing chatbot as part of their new hospital programme, he said, as part of efforts to gather feedback from local residents and users. But implementing the AI in a real-life scenario proved different from the testing scenarios.

“We found that people interacted with it in very different ways from our initial testers,” Peter shared. “To be honest, it probably wasn’t quite as capable or as effective as we thought it would be.”

AI and extracting clinical data

Looking ahead to future possibilities, Peter shared a fictional example of a letter about a patient requiring cataract surgery.

“One of the problems that many of us will face with using our electronic medical record systems is that clinical colleagues are often very good at writing free text and sometimes not as good at entering content as structured data,” he commented. “We all work in very busy clinics and it takes more time to fill out drop-down boxes and fields than it does to digitally dictate your clinical correspondence.”

Peter sought to experiment using ChatGPT, to see if the AI could extract clinical structured data from the free text example.

From 20:46, Peter showed how ChatGPT handled the experiment.

“Previously, with these technologies, they haven’t always been that useful in front of real world patients. But the technology has got to the point where ChatGPT, which wasn’t even designed for the function of reading clinical letters, is able to extract structured data accurately and tag it with the date.

“That’s what we need to bear in mind. A few years ago, the lack of progress in AI in healthcare could feel a bit frustrating. But now it really does seem like there are some far-reaching capabilities that are coming in.”
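Although the article does not reproduce Peter’s actual prompt, the shape of the experiment can be sketched as follows. The field names, prompt wording and sample model reply are all invented for illustration; a real deployment would send the prompt to a model API and validate the returned JSON before trusting it.

```python
import json

# Hypothetical prompt for asking a general-purpose LLM to pull structured
# fields out of a free-text clinic letter, in the spirit of the experiment
# described above. Field names are assumptions, not a clinical standard.
PROMPT_TEMPLATE = """Extract the following fields from the clinical letter
and answer with JSON only: diagnosis, planned_procedure, letter_date.

Letter:
{letter}
"""

def build_prompt(letter: str) -> str:
    return PROMPT_TEMPLATE.format(letter=letter)

def parse_model_reply(reply: str) -> dict:
    # A real system would validate this against a schema (and check the
    # extracted values against the source letter) before storing anything.
    return json.loads(reply)

# Invented example of what a well-behaved model reply might look like.
sample_reply = ('{"diagnosis": "cataract, right eye", '
                '"planned_procedure": "phacoemulsification with IOL", '
                '"letter_date": "2023-01-12"}')
record = parse_model_reply(sample_reply)
```

The point of the sketch is the division of labour: the model handles the messy free text, while ordinary code handles the structured record, including the date tagging Peter mentions.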

AI and children’s eye care

Demonstrating another possibility with AI technology, Peter shared a video of his baby son at 23:11. “One of the common clinical challenges we face in paediatric ophthalmology is working out what young children can see,” he said. “They can’t speak yet, so we can’t get them to read out a letter chart. We end up employing people to watch their visual behaviour – showing them cards on the left and the right and watching to see which they look at, and inferring whether they are able to see it or not.”

In the demonstration, the baby looks into a webcam which is overlaying facial landmarking software, adapted by the Moorfields team for clinical purposes in collaboration with the William Gates Computing Lab. Peter explained how the algorithm constantly tracks the baby’s visual behaviours and provides data about his eyesight by showing him a cartoon which occasionally disappears and is replaced with a stimulus. If the baby moves his eyes to the stimulus quickly, then the cartoon can continue.

Peter noted that the algorithm was originally intended for use with mobile phones, so that mobiles could tell whether the user was looking at the screen or elsewhere. “Again, it illustrates that some of the impactful AIs that come into clinical care may not be ones that make immediate diagnoses for us, but rather ones that replace qualitative work with fast, quantitative alternatives.”

AI and macular disease diagnosis

Next, Peter shared another example of a new technology in use at Moorfields. He explained how his colleague was part of a team producing, in collaboration with Google, an AI algorithm capable of diagnosing different macular diseases; the study was published in 2018.

“The algorithm is now being taken forwards towards something that could be used in clinic, out-performing even our best ophthalmologists in interpreting scans of the back of the eye,” he said. “That could potentially have a huge impact in automating care and providing expertise to optometrists in the community.”

Moorfields Reading and Clinical AI Centre

Peter’s next example focused on the work of Dr Konstantinos Balaskas, director of the Reading and Clinical AI Centre at Moorfields. Dr Balaskas and his team developed an algorithm in-house that can analyse the back of the eye to assist in the handling of geographic atrophy.

On the fact that the algorithm was developed internally, Peter commented on the democratisation of access to these tools. “Their capabilities become so great that together with academic collaborators we can develop these algorithms and perform in the same ballpark as those that are produced at great cost by very large organisations.”

More information on the Reading and Clinical AI Centre can be found here.

Implementation and scaling

“We’re definitely in the early part of our development at Moorfields,” Peter acknowledged, “as are other healthcare organisations. We implemented some AI algorithms under research and development wrappers, some that help us improve clinical triage of patients presenting at our emergency clinics. We have more implementations planned for the near future.”

Peter continued: “There has been a lot of thinking about how you would implement AI into hospitals at scale. One interesting idea that came out a couple of years ago highlighted how we probably don’t have a structure in hospitals that is capable of implementing and maintaining clinical AI. It’s sufficiently different in comparison to anything else we do in terms of its need for specific governance skills, technical skills, understanding and expertise in the area. We need a new organisational structure.”

There is no formal part of the existing organisation that can take ownership of clinical AI, Peter pointed out.

When considering the concept of a clinical AI department within organisations, it’s possible that the concept is too narrow, since AI solutions could also support non-clinical processes and whole-system change. “10 years ago, we looked at AI and thought it would help us with automating diagnosis and triage. I think we will actually see AI delivering benefit much sooner than that and it will help to run the hospital more smoothly, identifying patients who are likely not to attend appointments, for example.”

The clinical AI department would require close interaction with three key informatic domains: traditional (with involvement from traditional roles such as the chief information officer); clinical (with involvement of the chief clinical information officer on the application of data and technology to improve clinical care); and research (bringing in new skills and a deeper understanding of the underlying theories of these technologies).

The traditional domain would provide the infrastructure needed to collect the data that feeds the AI, Peter explained, along with connecting it up, managing environments, and providing technical support and oversight.

The clinical domain would provide digital clinical safety governance for AI deployment, help to identify valid applications of AI, support the clinical pathway transformation, engage the clinical workforce, and translate requirements into behaviour. For anyone interested in exploring this area further, Peter recommended reading the Wachter Report and shared some of his own experiences in developing the team from 36:18.

The research domain would focus on providing in-house algorithms, supporting validation pre- and post-deployment, providing specialist skills, supporting the specification and planning of datasets, developing insights, and identifying opportunities.

“I think one of the really critical things to do is to appoint a chief research information officer, to provide the third angle of the triangle,” Peter said. “Some of our local trusts have done this and I’m keen to follow in their footsteps. The CRIO works with the CIO and CCIO to formalise each part of the domain, and they work as a trio to assist in complex implementation.”

Is a hospital the right place to implement AI?

Peter concluded his session by drawing attention to a broader question around whether hospitals are in fact the right place to implement AI in the first place.

“Hospitals are designed around traditional needs – face-to-face appointments, inpatient stays, surgery, investigations,” he said. “An organisation built around providing those traditional models of care doesn’t necessarily look like the type of organisation that is going to be really good at innovating and scaling new digital models of business.”

At 40:59, Peter shared a diagram illustrating the relationship between synchronous and asynchronous models of care, whether they happen on hospital premises or non-hospital premises. He pointed out that the areas of care in which AI might become involved “all look a bit different to a traditional hospital.”

He also shared the example of an organisation in the USA where the decision has been made to centralise all virtual care into a single hospital. “My personal feeling is that it is probably this kind of organisation that is going to be more successful at deploying AI,” Peter commented.

“I think the idea of how hospitals will look in the future tends to be a bit confused. There are two conceptual types of hospital that we can get from digital transformation: a smart hospital, which looks like a traditional hospital but is technologically advanced, and a digital hospital, which is an organisation that doesn’t need a physical location with patients in it, and which delivers care by virtual channels. The digital hospital is best understood as a telemedicine organisation.” This setting, Peter suggested, is likely to be friendlier towards the deployment of AI due to factors such as a greater focus on the IT team, the clinical tools deployed, and data management.

“We’ve done some work with NHS England in north central London to develop a variation of that digital hospital concept called a telemedicine support unit,” Peter shared. “It’s nearing the end of the pilot now, we’ve triaged a hundred patients through it. What we are trying to build there is a model that can scale a lot, but can also start to embed the basic approach to collecting clinical-led data in a consistent way to ensure that it’s in a good place for the implementation of AI. As it’s not attached to a single specific hospital, it’s an organisation that could serve an entire ICS or region or even country.”

Ultimately, Peter concluded, the ‘best’ approach to implementing AI in healthcare is as yet an unanswered question.

Many thanks to Peter for taking the time to join us and share his thoughts.