The race to implement artificial intelligence (AI) technology in meaningful ways is fiercer than ever. Specifically, generative AI has recently taken the world by storm, creating entire domains of applications, technologies, and potential value.
JP Morgan Insights recently published an article titled “Is generative AI a game changer?”, which observes: “Generative AI — the category of artificial intelligence algorithms that can generate new content based on existing data — has been hailed as the next frontier in industries ranging from technology to banking to media.” Gokul Hariharan, co-head of JP Morgan’s Asia Pacific Technology, Media and Telecom Research, described it as “paving the way for disruptive innovation.”
Undoubtedly, tech companies want to be at the forefront of this innovation.
Earlier this week, Google announced its long-awaited next step in generative AI. On Google’s official blog, The Keyword, Sissie Hsiao, VP of Product, and Eli Collins, VP of Research, introduced open access to Bard, an experiment that allows users to interact directly with Google’s generative AI platform and share feedback accordingly.
The authors explain: “[…] You can use Bard to boost your productivity, accelerate your ideas, and spark your curiosity. You might ask Bard for tips on reaching your goal of reading more books this year, ask it to explain quantum physics in simple terms, or have it outline a blog post to spark your creativity. We’ve learned a lot from testing Bard so far, and the next critical step in improving it is getting feedback from more people.”
The article also explains the concepts behind large language models (LLMs), the technology that powers the system; Bard will be updated with newer, more capable models over time, and it is grounded in Google’s understanding of quality information. An LLM can be thought of as a prediction engine: given a prompt, it generates a response by selecting one word at a time from among the likely next words. Some flexibility is built in, because always choosing the most likely option would not produce very creative responses. And the more people use LLMs, the better they become at predicting which responses will be helpful.
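To make the “prediction engine” description concrete, here is a minimal Python sketch of one word being chosen via temperature-based sampling. The candidate words and scores below are invented purely for illustration; the article does not describe Bard’s actual decoding method, which is certainly far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng()

# Hypothetical next-word candidates and model scores (logits) for some
# prompt -- illustrative values only, not output from a real model.
candidates = ["helpful", "large", "chatbot", "banana"]
logits = np.array([2.0, 1.5, 1.0, -3.0])

def sample_next_word(logits, temperature=0.8):
    # Softmax with temperature: low values favor the likeliest word,
    # higher values flatten the distribution and allow more varied picks.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

# Greedy decoding always returns the single most likely word...
print(candidates[int(np.argmax(logits))])
# ...while sampling occasionally picks an alternative, which is the
# "flexibility" that keeps generated text from being repetitive.
print(candidates[sample_next_word(logits)])
```

Repeating this choose-one-word step, each time feeding the chosen word back into the prompt, is how such a model builds up an entire response.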
LaMDA, which stands for “Language Model for Dialogue Applications”, is Google’s breakthrough in building adaptive conversational language models trained on dialogue and the nuances of human language. Now Google is iterating on this breakthrough with Bard, hopefully shaping the technology into something useful and valuable to users.
Undoubtedly, this technology could have a huge impact on healthcare. The most obvious application is that with a properly trained and tested model, patients may start seeking medical advice and recommendations from the system, especially if the conversational interface is robust. Of course, this should be approached with caution. Models are only as good as the data they were trained on, and they can still make mistakes.
The authors of the announcement explain that because these models learn from a wide range of information that reflects real-world biases and stereotypes, those can show up in their output, and the models can provide inaccurate, misleading, or false information while presenting it confidently. For example, when asked to share suggestions for easy houseplants, Bard presented convincing ideas but got some details wrong, such as the scientific name for the ZZ plant (Zamioculcas zamiifolia).
But if done right, this technology could enable medically literate conversation as a way to help doctors and other professionals develop diagnostic plans and bridge gaps in patient care.
At scale, the ability to train such intuitive models presents a great opportunity to extract robust insights from data. Healthcare is a trillion-dollar industry that generates terabytes of data annually. By layering advanced artificial intelligence and machine learning models on top of this data, we may have great opportunities to understand this information better and use it for greater benefit.
Indeed, there are many ethical and safety issues to consider about AI in general and generative AI in particular. Such products present a number of risks that technology companies must address, ranging from hateful or exploitable language to the generation of misleading information, which is especially dangerous in medical settings. Without a doubt, patients should receive medical care only from trained and licensed medical professionals.
Nevertheless, Google and other companies creating such sophisticated tools have great potential to solve some of the world’s toughest problems. Therefore, they also have a significant responsibility to create these products in a safe, ethical, and consumer-centric manner. But if done right, this technology could change healthcare for generations to come.
The content of this article is not by any means implied to be, and should not be relied upon or substituted for, professional medical advice, diagnosis, or treatment. This content is for informational purposes only. For medical advice, consult a trained medical professional.