
AI Helps a Stroke Patient Speak Again, a Milestone for Tech and Neuroscience

by Universalwellnesssystems

At her wedding reception 20 years ago, Ann Johnson showed off her gift for speech. In an enthusiastic 15-minute toast, she joked that she had practically run down the aisle, wondered whether the ceremony program should have said “flautist” or “flutist,” and admitted that she was “hogging the mic.”

Just two years later, Mrs. Johnson, then a 30-year-old teacher, volleyball coach and mother of a toddler, suffered a major stroke that left her paralyzed and unable to speak.

Scientists on Wednesday reported remarkable progress toward helping her, and other patients, speak again. In a milestone for neuroscience and artificial intelligence, implanted electrodes decoded Mrs. Johnson’s brain signals as she silently tried to say sentences. The technology converted her brain signals into written and spoken language, and allowed an avatar on a computer screen to speak the words and display smiles, pursed lips and other expressions.

The research, published in Nature, marks the first time that spoken words and facial expressions have been synthesized directly from brain signals, experts say. Mrs. Johnson chose an avatar that resembled her, and the researchers used recordings of her wedding toast to develop the avatar’s voice.

“We’re just trying to get back to being human,” said Dr. Edward Chang, the team’s leader and chair of neurosurgery at the University of California, San Francisco.

Mrs. Johnson, now 48, wrote to me, “It made me feel whole again.”

The goal is to help people who cannot speak because of conditions such as stroke, cerebral palsy and amyotrophic lateral sclerosis. For Mrs. Johnson’s implant to work, it must be connected by a cable from her head to a computer, but her team and others are developing wireless versions. Researchers hope that people who have lost the ability to speak may eventually converse in real time through computerized likenesses of themselves that convey tone, inflection and emotions such as joy and anger.

“What’s really interesting is that the researchers were able to extract pretty good information about different aspects of communication just from the surface of the brain,” said Dr. Parag Patil, a neurosurgeon and biomedical engineer at the University of Michigan, whom Nature asked to review the study before publication.

Mrs. Johnson’s experience reflects the field’s rapid progress. Just two years ago, the same team published a study in which a paralyzed man nicknamed Pancho used simpler implants and algorithms to produce 50 basic words, such as “hello” and “I’m hungry,” which were displayed as text on a computer when he tried to say them.

Mrs. Johnson’s implant has almost twice as many electrodes, improving its ability to detect brain signals from speech-related sensory and motor processes associated with the mouth, lips, jaw, tongue and larynx. The researchers trained a sophisticated artificial intelligence to recognize not individual words, but phonemes — phonetic units like “oh” and “ah” — that could eventually form any word.
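To make the phoneme idea concrete, here is a minimal sketch in Python. It is purely illustrative and is not the study’s code: the phoneme symbols, the tiny pronunciation lexicon and the greedy matching are assumptions invented for the example. The point it demonstrates is that a decoder producing a few dozen phoneme classes can, in principle, spell out any word, while a classifier trained on 50 whole words can only ever output those 50.

```python
# Hypothetical sketch: assembling words from decoded phoneme units
# instead of classifying whole words. Not the study's code.

# A tiny, made-up pronunciation lexicon (ARPAbet-style symbols).
LEXICON = {
    ("HH", "AH", "L", "OW"): "hello",
    ("HH", "AW"): "how",
    ("AA", "R"): "are",
    ("Y", "UW"): "you",
}

def words_from_phonemes(phonemes):
    """Greedily match the longest known phoneme sequence at each position."""
    words, i = [], 0
    while i < len(phonemes):
        for length in range(len(phonemes) - i, 0, -1):
            chunk = tuple(phonemes[i:i + length])
            if chunk in LEXICON:
                words.append(LEXICON[chunk])
                i += length
                break
        else:
            i += 1  # skip a unit the lexicon doesn't cover
    return " ".join(words)

# A decoder that emits phoneme classes can spell out any word in the lexicon,
# and the lexicon can grow without retraining the brain-signal decoder.
decoded = ["HH", "AH", "L", "OW", "HH", "AW", "AA", "R", "Y", "UW"]
print(words_from_phonemes(decoded))  # -> "hello how are you"
```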

“It’s like a phonetic alphabet,” said project manager David Moses.

Pancho’s system produced 15 to 18 words per minute; Mrs. Johnson’s rate was 78 words per minute, using a much larger vocabulary. Typical conversational speech runs about 160 words per minute.

When the researchers began working with her, they didn’t expect to attempt avatars or audio. But the promising results were “a big green light, like, ‘OK, let’s try something harder, let’s just do it,’” Dr. Moses said.

They programmed an algorithm to decode brain activity into speech waveforms, producing audible vocalizations, said Kaylo Littlejohn, a graduate student at the University of California, Berkeley, and one of the study’s lead authors, along with Dr. Moses, Sean Metzger, Alex Silva and Margaret Seton.

“Speech contains a lot of information that isn’t well preserved in text alone, such as intonation, pitch and facial expressions,” Mr. Littlejohn said.

The researchers worked with a facial animation company to program the avatar with muscle movement data. Mrs. Johnson then tried to make expressions of joy, sadness and surprise, each at high, medium and low intensity. She also tried various jaw, tongue and lip movements. Her decoded brain signals were conveyed to the avatar’s face.

Through her avatar, she said, “I think you’re great” and “What do you think of my artificial voice?”

“It’s emotional when you hear a voice that sounds like you,” Mrs. Johnson told the researchers.

She and her husband, William, a postal worker, even had a conversation. “Don’t make me laugh,” she said through her avatar. He asked how she felt about the Toronto Blue Jays’ chances. “Anything is possible,” she replied.

Advances in this area are so rapid that experts believe a federally approved wireless version could be available within the next decade. Different methods may be best for certain patients.

On Wednesday, Nature also published research from another team, which used electrodes implanted deeper in the brain to detect the activity of individual neurons, said Dr. Jaimie Henderson, a professor of neurosurgery at Stanford University and the team’s leader, who was motivated by his childhood experience of watching his father lose his speech after an accident. He said their method may be more precise but less stable, because the firing patterns of specific neurons can shift.

Their system decoded what the study participant, Pat Bennett, a 68-year-old with ALS, tried to say from a large vocabulary, at 62 words per minute. That study did not include an avatar or sound decoding.

Both studies used predictive language models to help guess the words in sentences. The systems don’t just match words but are “finding new language patterns” as they improve their recognition of participants’ neural activity, said Melanie Fried-Oken, an expert in speech-language assistive technology at Oregon Health & Science University, who consulted on the Stanford study.
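For readers curious what a predictive language model contributes, here is a small hedged sketch. The candidate words, probabilities and bigram scores are invented for illustration, and the real systems use far larger neural language models; the sketch only shows the general idea that each position’s decoder guesses are rescored by how plausible each word is given the words before it.

```python
# Hypothetical sketch: rescoring a noisy decoder's word candidates with a
# toy bigram language model. Not the actual model used in either study.
import math

# Decoder's candidate words and (made-up) probabilities at each position.
decoder_candidates = [
    {"I": 0.6, "eye": 0.4},
    {"am": 0.5, "ham": 0.5},
    {"hungry": 0.7, "angry": 0.3},
]

# Toy bigram prior: P(word | previous word); unseen pairs get a small floor.
BIGRAM = {
    ("<s>", "I"): 0.30, ("<s>", "eye"): 0.01,
    ("I", "am"): 0.40, ("eye", "am"): 0.001,
    ("am", "hungry"): 0.20, ("am", "angry"): 0.10,
}

def best_sentence(candidates, beam_width=5):
    """Beam search over decoder scores plus language-model scores."""
    beams = [(0.0, ["<s>"])]
    for position in candidates:
        expanded = []
        for score, words in beams:
            for word, p_decoder in position.items():
                p_lm = BIGRAM.get((words[-1], word), 1e-6)
                expanded.append(
                    (score + math.log(p_decoder) + math.log(p_lm), words + [word])
                )
        beams = sorted(expanded, reverse=True)[:beam_width]
    best_score, best_words = max(beams)
    return " ".join(best_words[1:])  # drop the <s> start token

# The language model pulls the output toward a plausible sentence even though
# the decoder on its own was unsure between "am"/"ham" and "I"/"eye".
print(best_sentence(decoder_candidates))  # -> "I am hungry"
```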

Neither approach was entirely accurate. Using a large vocabulary set, individual words were decoded incorrectly about 1 in 4 times.

For example, when Mrs. Johnson tried to say, “We may have lost them,” the system decoded, “That name could be us.” But in almost half of her sentences, every word was decoded correctly.

The researchers found that people recruited through crowdsourcing platforms interpreted the avatar’s facial expressions correctly most of the time. Interpreting what the voice said was harder, so the team is developing prediction algorithms to improve it. “Our talking avatar is just a starting point,” Dr. Chang said.

Experts stress that these systems do not read people’s minds or thoughts. Rather, Dr. Patil said, they are like baseball hitters, who predict pitches “not by reading the pitcher’s mind, but by interpreting what they see the pitcher doing.”

Still, mind reading could eventually become possible, raising ethical and privacy issues, Dr. Fried-Oken said.

Mrs. Johnson contacted Dr. Chang in 2021, the day after her husband showed her my article about Pancho, the paralyzed man the researchers had helped. Dr. Chang said he was discouraging at first because she lived in Saskatchewan, Canada, far from his lab in San Francisco, but “she was persistent.”

Mr. Johnson, 48, arranged to work part time. “Ann always supported me in doing what I wanted to do,” he said, including leading the local postal union. “So I thought it was important to be able to support her in this.”

She began participating last September. The trip to California takes three days in a van packed with equipment, including a lift for transferring her between wheelchair and bed. They rent an apartment there, where researchers sometimes run experiments to make things easier for her. The Johnsons, who raise money online and in their community to pay for travel and rent, spend weeks in California for the multiyear study, returning home between sessions.

“If she could do it 10 hours a day, seven days a week, she would,” Mr. Johnson said.

Determination has always been part of her nature. When the two began dating, Mrs. Johnson gave Mr. Johnson 18 months to propose. He said he went ahead and picked out an engagement ring, and proposed on the very day the 18 months were up.

Mrs. Johnson communicated with me by email, composed with the more rudimentary assistive system she uses at home. She wears glasses with a reflective dot affixed to them, which she points toward letters and words on a computer screen.

It is slow, generating only 14 words per minute. But it is faster than the only other way she can communicate at home: a plastic letter board, a method Mr. Johnson described as “she’s just trying to show me which letters she’s looking at, and I’m trying to figure out what she’s trying to say.”

The inability to converse freely frustrates them. When they discuss detailed matters, Mr. Johnson may say something and not receive her emailed response until the next day.

“Ann has always been a talkative person, a sociable person who loves to talk, but I’m not,” he said, but her stroke “reversed the roles, and now I’m supposed to be the talker.”

Mrs. Johnson, who taught high school math, health and physical education and coached volleyball and basketball, suffered the brain-stem stroke while warming up for a volleyball game. After a year in hospitals and rehab facilities, she came home to her 10-year-old stepson and her 23-month-old daughter; her daughter has grown up with no memory of hearing her mother speak, Mr. Johnson said.

“It hurt so much not being able to hug and kiss my kids, but that was my reality,” Mrs. Johnson wrote. “The nail in the coffin was when I was told I couldn’t have any more children.”

For five years after her stroke, she was terrified. “I thought I was about to die,” she wrote, adding, “I knew the part of my brain that wasn’t frozen needed help, but how would I communicate?”

Gradually, her stubbornness resurfaced. At first, she wrote, she could not move any of the muscles in her face, but after about five years she could laugh at will.

She was fed entirely through a tube for about 10 years, but decided she wanted to taste solid food again. “It’s okay if I die,” she told herself. “I started sucking on chocolate.” She underwent swallowing therapy and now eats finely chopped or soft foods. “My daughter and I both love cupcakes,” she wrote.

After learning that trauma counselors were needed following a fatal bus crash in Saskatchewan in 2018, Mrs. Johnson decided to take a college counseling course online.

“I had minimal computer skills, and as a math and science person, the thought of writing a paper was terrifying,” she wrote in a class report. “Around the same time, my daughter was in ninth grade and had been diagnosed with a processing disorder.” She decided to push through anyway.

Helping trauma survivors remains her goal. “My goal is to become a counselor and use this technology to talk to my clients,” she told Dr. Chang’s team.

When she first started making expressions through her avatar, “I thought it was silly, but I like how I feel like I have an expressive face again,” she wrote, adding that the exercises also allowed her to move the left side of her forehead for the first time.

She has gained something else, too. After her stroke, “I was so hurt when I lost everything,” she wrote. “I told myself I would never experience that disappointment again.”

Now, “I feel like I’m working again,” she wrote.

The technology also makes her imagine she is in “Star Wars”: “I’ve gotten a little used to having my mind blown.”
