Pods of Science | Episode 4 | How to Predict Your Next Doctor's Appointment

Intro: Welcome. I'm your host, Jess Wisse. On today's episode we'll be talking about how artificial intelligence could take your doctor's care to the next level. Stay tuned to learn more.

Music

JW: PNNL scientists have found a way to improve the accuracy of patient diagnosis by up to 20 percent! How? By using artificial intelligence. A PNNL project, called DeepCare, looked at ways to use AI to improve medical outcomes for patients. Meet the project lead, Robert Rallo.

RR: I joined the lab three years ago, coming from Barcelona. My background is in chemistry, but I was a professor in computer science for more than 20 years before joining the lab. My main area of expertise is machine learning and the application of machine learning in different areas, one of them being computational toxicology. The team working on this includes Khushbu Agarwal and Sutanay Choudhury, two computer scientists in the data sciences group at PNNL. We also have strong collaborations with Virginia Tech and Stanford, and some of the students who have been summer students here at PNNL have been involved in this type of biomedical work as well.

JW: We asked Robert why he got into the field of computer science. Here's what he had to say:

RR: The fact that a computer is able to learn by itself from data is something that is really interesting to me, really intriguing, and that's what triggered my interest in this field.

JW: Robert and his team at PNNL created a new embedding approach. The approach seeks to capture and re-create the types of connections physicians make naturally, in their heads, when they apply a lifetime of learning and knowledge to the patient standing before them in the exam room. What's an embedding? Basically, it's translation for computers. Using embeddings, computer scientists can take a piece of information that only humans can understand and transform it into something a computer can use.

RR: A medical concept is, for instance, a specific diagnosis. You have fever, you have high blood pressure; these are concepts. The way a machine learning algorithm, or a computer, processes these concepts requires them to be codified numerically. So one of the ways we do this coding is by developing a continuous numeric representation of these concepts that captures the similarities, the relationships, between each of these individual concepts. This idea of transforming a textual set of concepts into a representation that is suitable for machine learning is the embedding process. And what we want is for this numeric representation to convey the same semantics, the same information, as the original concepts.

JW: One of the hardest parts about using AI in the medical field is combining multiple types of data. Think of all the information that's captured when you go to the doctor. Now think of all the different forms it comes in. Computer-friendly data like blood work numbers or diagnosis codes are easier to work with than unstructured data like chart notes or images from X-rays and MRIs.

RR: Well, everybody knows it's a known fact that understanding handwritten doctors' notes is, like, impossible. (laughing) And I say this because my sister is a medical doctor. But no, I'm joking now.
But essentially, if we are looking at different types of information, you have structured information in which everything is well classified, well cataloged, and it's very easy to use. And then you have all this unstructured information, where you may have recordings of patients in an interview for something related to mental health. You can have doctors' notes that are written in different narrative styles. You can have different types of imaging data, from X-rays to MRIs. And each one of these modalities...
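As an aside for readers who want a concrete picture of the embedding idea Robert describes, here is a minimal sketch in Python: medical concepts are mapped to numeric vectors so that related concepts land close together, and similarity between vectors stands in for similarity between concepts. The concept names, vector values, and dimensionality below are invented for illustration; they are not DeepCare's actual representation or data.

import numpy as np

# Hypothetical low-dimensional embeddings for a few medical concepts.
# In practice these vectors are learned from patient data, not written by hand.
embeddings = {
    "fever":               np.array([0.9, 0.1, 0.3, 0.0]),
    "high_blood_pressure": np.array([0.1, 0.8, 0.2, 0.4]),
    "hypertension_dx":     np.array([0.2, 0.9, 0.1, 0.5]),
}

def cosine_similarity(a, b):
    # Values near 1.0 mean the concepts point in the same direction;
    # values near 0 mean they are unrelated in the embedding space.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A hypertension diagnosis should sit closer to "high blood pressure"
# than to "fever" if the embedding captures the right relationships.
print(cosine_similarity(embeddings["hypertension_dx"], embeddings["high_blood_pressure"]))
print(cosine_similarity(embeddings["hypertension_dx"], embeddings["fever"]))

Running this prints a higher similarity for the first pair than for the second, which is the kind of relationship a continuous numeric representation of medical concepts is meant to preserve.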