As the senior population around the world grows, dementia — one of the most common aging-related conditions — will force healthcare systems to adapt. Already on the case are Frank Rudzicz, a computer scientist at the University of Toronto and the Toronto Rehabilitation Institute, and his two-foot-tall robotic boy Ludwig, which is powered by machine learning.
Long passionate about robotics, Rudzicz determined during college that he could have the biggest impact on robotics and artificial intelligence by focusing on natural language processing. The field touches many areas of AI, from human interaction and learning to developing knowledge of the world.
While working on his doctoral project on speech recognition for people with cerebral palsy, Rudzicz learned what machine learning could accomplish in the clinical domain. He then began looking for opportunities to apply his work to other healthcare scenarios as well.
He jumped into the cause, assembling a team at the University of Toronto and the Toronto Rehabilitation Institute that began by developing diagnostic software for dementia, using speech as its input.
“Language can provide a very deep and accurate lens as to the speaker’s cognitive and emotional state, so we started there,” Rudzicz said. “But to be engaging, and to assist people when nurses or caregivers aren’t present, I wanted to develop something a bit more personal.”
What’s Important Is What’s Inside
So Rudzicz and his team decided to build a robot. The result of their efforts is Ludwig, which is being pilot-tested at a long-term care facility in Toronto. While Ludwig isn’t the most physically impressive robot, the machine learning algorithms running inside him, which were trained on NVIDIA TITAN X GPUs, enable him to engage patients in conversation and analyse their speech patterns to help assess each patient’s state.
“We are focusing almost entirely on the software, and have had some success in showing that breakdowns in communication, which are very common in dementia, can be identified by using speech input and neural network models,” Rudzicz said.
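As a rough illustration of the kind of pipeline that quote describes, the sketch below feeds per-utterance speech and language features into a small feed-forward classifier in PyTorch. It is a minimal sketch, not the team’s actual model: the feature count, layer sizes and “breakdown” label are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class BreakdownClassifier(nn.Module):
    """Toy feed-forward classifier over per-utterance speech/language features."""
    def __init__(self, n_features: int = 40, n_hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, n_hidden),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(n_hidden, 2),  # two classes: typical speech vs. communication breakdown
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Training and inference can run on a GPU when one is available (e.g. a TITAN X).
device = "cuda" if torch.cuda.is_available() else "cpu"
model = BreakdownClassifier().to(device)

# A batch of 8 utterances, each summarised as a 40-dimensional feature vector
# (pause rate, speech rate, vocabulary richness, acoustic statistics, ...).
features = torch.randn(8, 40, device=device)
probs = torch.softmax(model(features), dim=-1)  # per-utterance breakdown probability
```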
Patients have been signing up to participate in the pilot, and data collection is underway.
Advancing Human-Computer Interaction
Once the pilot is completed, Rudzicz said the next steps will be to refine the software further and put GPUs to work again, this time building a deep learning model that would enable Ludwig to refocus conversations that drift off track. He eventually has his eye on commercialising Ludwig, but said that’s probably a few years away.
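As a loose illustration of that refocusing idea, the sketch below flags conversational drift by comparing each new utterance to a representation of the current topic. The embed() function is a toy stand-in for a learned sentence encoder, and the threshold is arbitrary; none of this is Ludwig’s actual logic.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy hashed bag-of-words embedding, standing in for a learned sentence encoder."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def is_off_topic(utterance: str, topic_vec: np.ndarray, threshold: float = 0.3) -> bool:
    """Flag the utterance as drift if its similarity to the current topic is low."""
    return float(np.dot(embed(utterance), topic_vec)) < threshold

topic_vec = embed("breakfast this morning eggs toast tea")
if is_off_topic("my brother moved away when I was ten", topic_vec):
    # A real agent would respond with a gentle prompt steering back to the topic.
    print("Let's get back to what you had for breakfast this morning.")
```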
In the meantime, he’s zeroed in on establishing the software inside Ludwig as a driving force behind human-computer interaction, particularly in the dementia-treatment realm.
“We hope that a key outcome of our current pilot will be a measure of how people with dementia feel about interacting with robots, which itself will guide our thinking going forward,” he said. “If the whole community can get interaction right, then products like Google Home and Amazon Echo will become much more entrenched.”