Researcher hopes to teach infants with cochlear implants to speak – with an app

Kids who've never heard need 'habilitation' – they've never had a skill to rehabilitate

Smashing pumpkins

Barnet elaborated on other ways child development interplays with what the app and the AI need.

“When a child has not heard any sound, they don't understand that a noise has an effect on the environment. So the first thing has to be a visual reward for an articulation.”

At 12 months, she continued, children respond well to visual rewards – and even an “ahhh” or “ohhh” should get a response from the app, if (a big if even for machine learning) it's a deliberate articulation.

There "has to be a visual reward for an articulation"

So after distinguishing between speech and “the kid threw a bit of pumpkin at the screen”, the app has to respond at a second stage, called “word approximation”. Here, the system will have to both recognise that “da” might be an approximation of “daddy” (and reward it), and support the child's development from approximation to whole words.

“That's quite difficult. That needs to be cross-matched with thousands of articulations from normally-speaking babies,” Barnet explained.

Sterling added another layer the system has to learn: “Is 'da' today the same 'da' as the same child said the other day?”
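The GetTalking team hasn't published how its matcher will work, and a real system would operate on audio rather than text, but the word-approximation idea can be illustrated with a toy sketch. Everything here, function names, the scoring rule, and the threshold, is a hypothetical illustration, not the project's actual method:

```python
# Hypothetical illustration only - not GetTalking's actual algorithm.
# A real matcher would score acoustic features; this toy version scores
# how much of a typed utterance lines up with the start of a target word,
# so "da" counts as an approximation of "daddy".

def approximation_score(utterance: str, target: str) -> float:
    """Fraction of the target word covered by a matching opening of the utterance."""
    utterance, target = utterance.lower(), target.lower()
    matched = 0
    for u, t in zip(utterance, target):
        if u != t:
            break
        matched += 1
    return matched / len(target)

def best_match(utterance: str, vocabulary: list[str], threshold: float = 0.2):
    """Return the vocabulary word the utterance most plausibly approximates,
    or None if nothing clears the threshold (i.e. no reward is triggered)."""
    score, word = max((approximation_score(utterance, w), w) for w in vocabulary)
    return word if score >= threshold else None
```

With a vocabulary of `["daddy", "ball", "milk"]`, `best_match("da", ...)` picks out "daddy", while babble that resembles nothing in the vocabulary returns `None`. The hard part Barnet and Sterling describe, deciding whether today's "da" is the same "da" as last week's, is exactly what this sketch glosses over.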

Swinburne's BabyLab will help here, by supporting the collection of speech samples the GetTalking team needs.

Those samples will help GetTalking respond to the word-approximation by re-articulating the correct word, “and show the baby a picture of what they're saying”.

AI not ready to replace people

As both Barnet and Sterling emphasised, it's impossible to replace the role of the speech therapist or parent.

“I've been working in AI research for 35 years,” Sterling said. “People have consistently overestimated what they expect.”

Rather than outright automation, Sterling says, most of the time what matters is to provide AI as an aid for people – “how to make a richer experience for people, to help people with their environment”.

In the case of GetTalking, one thing he reckons the AI behind the app will do well is diagnose whether or not the child is making progress.

“It's a co-design problem; you work with speech therapists, parents, kids – and see what works”, Sterling said.

GetTalking is in its early stages, with support from the National Acoustic Laboratories (the research arm of Hearing Australia). After the app development stages, GetTalking will need a clinical trial to demonstrate its effectiveness. Trials aren't cheap, but Barnet said she hopes to secure federal funding at that point.

Since disadvantage is so strongly associated with poorer outcomes for children who receive the implants, Barnet's hope is that GetTalking could be free to those who need it.

The full team is Swinburne's Associate Professor Rachael McDonald, Dr Belinda Barnet, Professor Leon Sterling, Associate Professor Jordy Kaufman, Associate Professor Simone Taffe, and Dr Carolyn Barnes; and National Acoustic Laboratories' Dr Teresa Ching and Dr Laura Button. ®
