Fifty-three million U.S. adults, or 21 percent of the population, own at least one smart speaker, according to a 2019 survey conducted by National Public Radio and Edison Research. A March 2019 report predicts that the global health virtual assistant market, which includes smart speakers and chatbots, will reach $3.5 billion by 2025.

The promise of voice technology goes well beyond delivering general health information to consumers. It could serve as a bridge between healthcare providers and patients, facilitating remote diagnostics, care, and monitoring of therapeutic compliance.

Everyday Health recently spoke with Sandhya Pruthi, MD, the chief medical editor and associate medical director for content management and delivery for Mayo Clinic Global Business Solutions, about the present and potential impact of voice technology on personalized healthcare and self-care solutions. On January 7, Dr. Pruthi will speak at the 2020 Consumer Electronics Show (CES) in Las Vegas about “The Surging Currency of Voice Healthcare.”

When we built the Mayo First Aid skill, it covered 50-plus everyday health situations, things like minor burns. It was an app that users would download to get access to the skill on Alexa, and then they would get advice.

EH: So how is the content Mayo is offering on Alexa different today?

Today, if someone asks Alexa a question such as “What are the symptoms of lung cancer?” they get a direct response that says “According to Mayo Clinic…” So you’ll get that kind of content today. The user can ask a question directly without having to open a skill.

EH: The amount of health information Mayo makes available via Alexa devices must have grown exponentially since the early days of the First Aid skill.

It’s huge. Our comprehensive health information library on our website today covers over 8,000 conditions, procedures, and symptoms. We were able to take that entire library and build it out for Alexa-enabled devices. Obviously, we may not have touched on every topic. But to be able to deliver trusted, accurate, concise information on Alexa-enabled devices was exciting. It took a lot of work, but it’s been very well received.

EH: How would you describe month-to-month consumer usage and overall reach?

We’re seeing usage increase every month, especially in the United States and Canada. Our Alexa content is also out in Australia, Mexico, the United Kingdom, and India.

EH: Another fascinating potential use of voice in healthcare is as a biomarker to detect patient health risks early on. What has Mayo been doing in that area?

In a study published in July 2018 in the journal Mayo Clinic Proceedings, the authors hypothesized that voice characteristics could potentially be used to detect coronary artery disease. They recorded the voices of 138 patients who were scheduled to have a coronary angiogram. They were trying to capture patients’ emotional states while they were being recorded, to see if there was any connection between the voice characteristics (by that I mean the intensity and frequency of the voice) and the presence of heart disease. What they found was that the voice biomarker could potentially detect a risk of having coronary artery disease. We’re still learning a lot from this early work, but there was a correlation with heart disease findings on the angiogram.
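The acoustic characteristics Dr. Pruthi mentions, intensity and frequency, can be computed with standard open-source audio tools. As a rough illustration only (this is not the pipeline used in the Mayo Clinic Proceedings study, and the file name is hypothetical), here is a short Python sketch using the librosa library:

# Illustrative sketch only, not the Mayo study's method: extracts two of the
# voice characteristics mentioned above (intensity and fundamental frequency).
import librosa
import numpy as np

def voice_features(path):
    # Load the recording as mono audio at 16 kHz.
    y, sr = librosa.load(path, sr=16000)
    # Frame-level intensity (root-mean-square energy).
    rms = librosa.feature.rms(y=y)[0]
    # Frame-level fundamental frequency (pitch); NaN for unvoiced frames.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    return {
        "mean_intensity": float(np.mean(rms)),
        "mean_f0_hz": float(np.nanmean(f0)),
        "f0_variability_hz": float(np.nanstd(f0)),
    }

print(voice_features("patient_recording.wav"))  # hypothetical file name

In a research setting, features like these would feed a statistical model evaluated against angiogram findings; the sketch is only meant to make the term “voice biomarker” concrete.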
EH: What are the risks and challenges of this kind of biomarker-diagnostic approach?

Obviously, this is going to be a huge area of further research: trying to understand, when you make this type of correlation, how accurate you are. You don’t want to make any mistakes. This is also an area where we have to look at large data sets to see how well you can pick up these voice parameters and make the correlation with disease.

EH: Presumably voice biomarkers could be applied to more than cardiovascular health, yes?

There are so many areas that could be further investigated in terms of voice as a biomarker for other disease states: for instance, using voice to detect depression or Parkinson’s or even autism. But there’s a lot of work to be done in that area.

EH: What then is the next step in terms of how patients and providers use that info?

The other challenge is this: If there is a detection of a risk of, say, hypertension or stroke, how do you then use voice to access healthcare? You need to triage. Can voice as a technology take us to that next step? It’s very complicated. If you’re going to identify a disease using voice changes, what do you do with that information next?

EH: What will it take?

It will require population-level validation studies. How much of this information do people feel comfortable with? Are patients satisfied, and will they continue to use it?

EH: What’s next for Mayo and voice technology in general?

Where we want to go next is to utilize algorithms. Think of a telephone triage nurse. Today you can call a nurse line at Mayo. We get a thousand patient calls a day to our call center saying things like “I have a cough,” and the nurse asks questions and tries to triage: Is this something you can manage with self-care, or do you need to be seen at a clinic? If we take these algorithms and make them voice-enabled, I think that is where we can really bridge healthcare from home to the clinic. Patients could ask the same questions, maybe of a chatbot, and get the responses they need. In essence, this is the future of the personal health assistant concept.

Today’s voice interface is: You ask a question, you get an answer. The next level is more of a conversational interface. To do that, what are the challenges you’re going to have to deal with? One is that you don’t want to make mistakes. You want it to be a contextual knowledge base. I’ll give you an example: [A patient asks,] “Why does my shoulder hurt?” You need more question-and-answer exchanges to diagnose, just as I would today in a clinic. If a patient were to ask me a question, I’d want to have more of an interaction with them.

How can you solve a problem using voice, and how can we really support the health needs of our consumers? Even more exciting is how we can embed this type of technology into our care teams. We’re looking for ways to bring voice technology into our usual day-to-day experiences and workflows so that we can do a better job of bringing the clinic to the home. Another challenge is: How do you overcome those systems to make this more user-friendly?
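To make the triage idea concrete, here is a minimal sketch of how a rule-based, multi-turn triage flow could be wired to any question-asking front end, whether a voice assistant or a chatbot. The symptoms, follow-up questions, and dispositions are purely illustrative assumptions, not Mayo Clinic’s nurse-line algorithms:

# Hypothetical sketch of a multi-turn triage flow; the questions and
# dispositions below are illustrative, not actual clinical protocols.
TRIAGE_FLOW = {
    "cough": [
        ("Are you having trouble breathing or chest pain?", "go to the emergency department"),
        ("Have you had a fever above 100.4 F for more than three days?", "schedule a clinic visit"),
    ],
    "shoulder pain": [
        ("Is the pain accompanied by chest pressure or shortness of breath?", "go to the emergency department"),
        ("Did the pain start after a fall or injury?", "schedule a clinic visit"),
    ],
}

def triage(symptom, ask):
    """Ask follow-up questions for a symptom and return a disposition.

    `ask` is any callable that poses a yes/no question and returns True or
    False, so the same flow could sit behind a voice assistant or a chatbot.
    """
    for question, disposition in TRIAGE_FLOW.get(symptom, []):
        if ask(question):
            return disposition
    return "self-care guidance"  # default when no red flags are reported

# Console stand-in for a voice front end.
def console_ask(question):
    return input(question + " (yes/no) ").strip().lower().startswith("y")

print("Recommended next step:", triage("cough", console_ask))

In practice the follow-up questions would come from clinically validated protocols, and the question-asking callable would be backed by the voice platform’s dialogue manager, which is where the contextual, multi-exchange interface described above comes in.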
EH: Can you share any personal experiences about how voice technology has (or has not) met your expectations so far?

I loved having my Alexa until my 26-year-old millennial said, “I’m taking it.” When I had it at home, it was so much fun using it to ask every question, not just about healthcare but also about navigating transportation or the weather or whatever we were doing, like asking Alexa the score of the most recent Vikings football game.

But what the user is really looking for is an engaging interaction. How does Alexa know me? How does the voice device know me so it’s empathic to my needs? So the future of the intelligent voice interface is not just giving me a single answer to a question; I think it should be more of an exchange of information. That’s where I would like to see voice technology enhance the patient experience.

EH: Do you see this technology serving younger, more tech-savvy users as well as older consumers who aren’t typically early adopters?

I think voice technology is going to be increasingly well received by older people who have trouble typing on a mobile device or desktop computer because of arthritis or a visual disability. I tested this with my own parents. I was just visiting them in Winnipeg, Manitoba, and I asked them to speak to their mobile device and ask a question about the shingles vaccine: “Why don’t you ask Alexa a question, rather than trying to type it in the search box? Ask Alexa who should get the vaccine.”

I think it’s going to change the demographics of who is using this technology. I imagine voice technology usage spanning all age groups: millennials, children, and the older population.