Digital Pathology Podcast
191: Hallucinations, Agents, and AI in Pathology
Clinical Artificial Intelligence in 2026: Accuracy, Education, and Guardrails
Artificial intelligence is evolving fast in medicine. But how accurate is it? And are we building it safely?
In this episode of DigiPath Digest, I review five new studies shaping digital pathology, radiology, burn diagnostics, and agent-based large language model systems. We discuss accuracy gains, hallucination filtering, education challenges, and why safeguards are essential before clinical deployment.
Clear. Practical. Evidence-based.
⏱ Topics & Timestamps
[00:02] Introduction
Weekly journal club on digital pathology and artificial intelligence.
[05:13] Hallucination Filtering in Radiology
Using Discrete Semantic Entropy to detect hallucination-prone responses in Vision Language Models.
Accuracy improved from 51.7 percent to 76.3 percent after filtering high-entropy answers.
[15:04] Artificial Intelligence in Pathology Training
Supervised use during residency.
Balancing artificial intelligence adoption with preservation of morphological analysis and critical thinking.
[20:12] Colorectal Cancer Lymph Node Detection
Two-stage classification and segmentation model in Whole Slide Imaging.
Recall 1.0. Specificity 0.935. Dice coefficient 0.818.
Artificial intelligence as a second opinion.
[25:04] Burn Depth Prediction with Artificial Intelligence
Tissue Doppler Elastography and Harmonic B-mode ultrasound combined with artificial intelligence.
90 to 95 percent accuracy in human subjects.
[31:20] Agent-Based Large Language Model Systems
OpenManus and Manus evaluated in clinical simulations.
Up to 60.3 percent accuracy. High computational cost.
89.9 percent of hallucinations filtered by safeguards.
[40:08] Patient Access to Pathology Images
Why viewing pathology slides can empower patients and improve communication.
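The discrete-semantic-entropy filtering summarized above can be sketched in a few lines. This is a minimal illustration, not the authors' code: answers are clustered here by normalized string equality (the study groups answers by semantic meaning, which would need an NLI model or an LLM judge), and the 0.3 threshold mirrors the cutoff discussed in the episode.

```python
import math
from collections import Counter

def discrete_semantic_entropy(answers):
    """Entropy over clusters of equivalent answers.

    Toy clustering: normalized string equality. The study discussed in
    the episode clusters answers by meaning instead.
    """
    clusters = Counter(a.strip().lower() for a in answers)
    total = sum(clusters.values())
    probs = [n / total for n in clusters.values()]
    return -sum(p * math.log2(p) for p in probs)

def filter_questions(responses_by_question, threshold=0.3):
    """Keep questions whose repeated answers are consistent (entropy at
    or below the threshold); flag the rest as hallucination-prone."""
    kept, rejected = {}, {}
    for q, answers in responses_by_question.items():
        h = discrete_semantic_entropy(answers)
        (kept if h <= threshold else rejected)[q] = h
    return kept, rejected

# 15 sampled answers per question, as in the study design
consistent = ["Pneumonia"] * 14 + ["pneumonia"]
scattered = ["Pneumonia"] * 5 + ["Lung carcinoma"] * 5 + ["Normal"] * 5

kept, rejected = filter_questions(
    {"q1": consistent, "q2": scattered}, threshold=0.3
)
print(kept)      # q1 survives: all answers collapse to one cluster
print(rejected)  # q2 is rejected: three equally likely clusters
```

Excluding the rejected questions and scoring only the kept ones is what produced the 51.7 to 76.3 percent accuracy jump reported in the paper.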
Resources
00:00:02
Aleks: Good morning, my trailblazers. Welcome to DigiPath Digest, our weekly journal club where we discuss digital pathology and AI-in-medicine abstracts. Whenever you join me on this live stream, let me know in the chat that you're here and where you're tuning in from. Okay, I see you joining. I always love to see that you're here, because when I join there's this little lag and I'm like,
00:00:43
are they going to come? They come. Every week you come. Thank you so much. It's 6:00 a.m. in Pennsylvania and I have my Trailblazer water and coffee. Before we dive into the papers, let me give you a few updates. The next conference coming up is USCAP. I never figured out how to pronounce it; I should ask the organization itself. It's the United States and Canadian Academy of Pathology. Now I have to Google that, or you can let me know in
00:01:27
the chat, but I don't know if it's "US-CAP" or "USCAP." There is going to be a collaboration with a sponsor, and today I'm not going to tell you who that's going to be, but I'm going to tell you what we're going to do together. There is going to be a book giveaway, or if you already have the book, you can bring it and I'm going to sign it. If you want the digital copy, let me share the QR code with you. This book, Digital Pathology
00:02:04
101, is the book you need to start or continue your digital pathology journey, especially if you come from one specific area of expertise. When I was starting, my expertise was image analysis, and I wanted to learn about all the other things, like scanners. Let me tell you what's in this book: the introduction, obviously; what is digital pathology, which is for you if you're brand new; new trends, challenges, and benefits; then image analysis and AI
00:02:40
applications of digital pathology; and the role of whole slide imaging in toxicological pathology, which is my field. This book is going to be there at the booth. We're going to be with our sponsor at USCAP, streaming and doing a lot of cool stuff, and I'm going to reveal who that's going to be and which booth it's going to be at next week, or probably in an email before. So, if you are not on my email list yet, grab the book through the QR code and
00:03:12
you're going to be on this list, and you're going to know whom I'm going to be working with at USCAP and what else we're going to do, because we may have some beautiful stickers. The book giveaway is going to be every day, I think; we're ironing out the details, and I'll let you know next week. So now the question is: is the second edition going to be ready before USCAP? You can vote: is she going to make it or not? She's on chapter three out
00:03:49
of five right now, so half of chapter three is already updated. USCAP starts very soon, in about three weeks I think: the 21st, 22nd, and 23rd, the week of the 22nd of March, so Monday, Tuesday, and Wednesday of that week. Is she going to make it in time to update the version? Is she going to bring the old version or the new version to the booth? Obviously I want to bring the new version, but if I don't, the old version is super valuable as well, and I'm going to let you know
00:04:32
when the new version is available. Let me know where you're tuning in from; I don't see anything in the chat. Say hi. I know my regulars are here for sure, but they're not saying hi. Come on, guys. Okay, cheer for me to be ready with the update. That's an external deadline to push against, and I can do it. Let's do the first abstract, and then I'll give you a few more updates, because several of you are already waiting here.
00:05:13
So let's not wait any longer; we have a pretty cool paper to start with. Oh, Maryland, hello! Greetings to Silver Spring. Silver Spring is super close to me, less than an hour and a half away. Whoever is here live, leave me a comment on whichever platform you're watching. Let's go to the papers. The first one is pretty cool. It's about radiology, but that doesn't really matter, because it's about hallucination filtering
00:05:56
in radiology vision language models using discrete semantic entropy. There are a couple of concepts we need to explain, but we have these vision language models; that's important. >> Does this work as well? >> Yes. Okay. So, what is this discrete semantic entropy? They use this method to reject questions likely to generate hallucinations, and they were asking: can this improve the accuracy of black-box vision language models in radiologic image-based visual question answering, VQA? So
00:06:39
these are concepts that people are using in this space. Let me know where you're tuning in from; I see some comments. Hi Amina, fantastic to see you. Let's start with discrete semantic entropy. They evaluated different data sets: the VQA-Med 2019 benchmark, which is 500 images with clinical questions and short text answers, and a diagnostic radiology data set with 60 computed tomography scans, 60 magnetic
00:07:27
resonance images, 60 radiographs, and 26 angiograms. I'm going to put a star here, because I want to tell you something after we review this one about a different book that I'm writing. They had ground truth corresponding diagnoses, and they evaluated GPT-4o and GPT-4.1. And this is the important thing: they had these question data sets and images, and they let the model answer each question 15 times
00:08:09
using a temperature of 1.0. Temperature is how much variability you allow the model to give you in its answers. Baseline accuracy was determined using low-temperature answers, at 0.1. Low temperature is when the answers all cluster together and are essentially the same, so 15 times you would get the same answer; temperature 1.0 is when you allow variability. Then they grouped responses that had equivalent meaning, and they recalculated
00:08:55
accuracy excluding questions with DSE over 0.6 or over 0.3. And what happened? Questions with high discrete semantic entropy were excluded, along with their answers. Across the 76 image-question pairs, baseline accuracy was 51.67% for GPT-4o and 54.8% for GPT-4.1. Not that great, right? Accuracy a little over 50%; we would like more than that. But after filtering out high-entropy questions, over 0.3, accuracy on the remaining questions was
00:09:52
76.3% for GPT-4o and 63.8% for GPT-4.1. Basically, when you eliminated the questions that could give you all these different answers, you increased the accuracy. The accuracy gains were observed across both data sets and largely remained statistically significant. The conclusion they are drawing is that discrete semantic entropy enables reliable hallucination detection in black-box vision language models by quantifying semantic inconsistency. So,
00:10:36
can the DSE identify hallucination-prone questions and improve the reliability of black-box vision language models in radiologic image-based VQA, visual question answering? That was the stated question, but it's not exactly what they were looking at, because they did not evaluate the questions a priori; they evaluated the answers. I had a little trouble understanding this and had to dive a little deeper. What they did was this: they had a set of questions, and they let the model
00:11:22
answer, and then, based on how variable the answers were, they could say: okay, the answers are too variable, so don't trust the model on this question. But they didn't really check how to reframe the question; they were not checking whether better prompting could help. They do say that integrating DSE as a black-box uncertainty filter enables selective answering and explicit uncertainty display for radiology vision
00:12:02
language tools, supporting safer diagnostic use, mitigating hallucinations, and improving clinicians' trust in AI-assisted image interpretation. That is the case: if we see 15 different answers, and one says pneumonia, another says lung carcinoma, non-small cell lung cancer, and another says nothing, then you're not going to believe the answer to that question. But do you need to look at 15 different answers? That's a little bit of overkill. But still,
00:12:35
they figured out a method to check whether the model is hallucinating. Probably downstream you can then have a specific prompt engineering strategy, or a set of questions built with this low entropy in mind. I think this work that people are doing is really cool. Oh, we have guests from Zambia! I don't think we've had guests from Zambia so far; we did have other African representatives, but not from Zambia yet. Another thing that I wanted to
00:13:15
give you an update on: after each of these papers I'll give you a little update, so I don't bombard you with updates all at once. I was diving a little deeper to understand this abstract, and there is an option for you to dive deeper as well that does not replace reading the paper but can help; let me give you a code for that. These are AI-powered paper summaries that I started posting on the podcast. This is a members-only thing, a paid subscription, but if you're
00:13:50
interested in listening to the summaries of all the papers that we cover in DigiPath Digest, they're all there; I started doing this last week. I'm going to post all the ones that are available. Some papers are gated, so it's not always going to be 100%. This is a super affordable subscription; you can check it out. We're going to be doing this on YouTube as well. This is AI-generated but vetted by me, meaning I listen to all these summaries and I
00:14:21
make sure that there's no nonsense. I listen to them also to go deeper into the papers myself; I do not always read the full paper, though sometimes I do. That's something I created for you as well, and once I have it on YouTube, I'll let you know. In the meantime, let's proceed to our next paper, which is something we're probably all thinking about, because it's about the education of future pathologists, of future healthcare professionals, in a world that is powered by AI in
00:15:04
one way or another. This is an article in French, so let's see whether I'll have a summary of this one. They are discussing the impact of digital technology and artificial intelligence on the training of pathology residents: benefits, limitations, and educational prospects. Anytime you go to a conference, anytime you meet a professional pathology society, or now probably any healthcare society, this is going to be a topic that people discuss. This is going to be a topic that people
00:15:38
have opinions on, a topic that students and those in training have questions about and want specific resources for. What they say here is that young doctors are highly exposed to digital technology in their daily lives, which is something I have said several times: the people now in training are digitally native, digitally native for viewing images, digitally native for chatting with chatbots. They come with
00:16:14
a lot of baseline knowledge of digital technologies, and digital pathology as such provides an interactive and collaborative training medium, facilitating learning and knowledge sharing. We know digital pathology, we love digital pathology. We trained on digital pathology, because the Digital Pathology Association has grand rounds on digital images, and there are YouTube videos that show histology and pathology on digital images. People know that, right? They already know how
00:16:50
to navigate these images. But there is AI. The use of artificial intelligence during residency seems to raise more questions about the balance between benefits and risks in terms of skill acquisition. That is also because the AI landscape expands so fast. When I was writing the book, the cool new thing was convolutional neural networks in image analysis for pathology. That's still part of the AI we're using, but now, as we've seen in the
00:17:32
first abstract, we have vision language models, we have foundation models, the transformers have entered the scene, and multimodality is entering the scene. So how do you learn about it? I have courses, I have a concept of how this should be taught; that doesn't mean it's the only way of acquiring this knowledge, it's something that worked for me. But what they say in this particular paper is that supervised use by senior staff,
00:18:06
supported by theoretical teaching, would enable future pathologists to master artificial intelligence tools without compromising the acquisition of morphological analysis and critical thinking skills. That is a challenge. I guess I would be senior staff as well if I were working at a university; I don't think of myself as senior staff, but I probably would be. So that would be somebody like me, who already has some expertise, guiding people in a way that enables them to
00:18:48
later guide themselves when new technologies enter the scene, while also figuring out how they keep acquiring the morphological analysis and critical thinking skills that a pathologist needs. There have been publications showing that the use of AI will lead to deskilling in certain areas. On the other hand, AI-aided diagnostics is going to show you so many examples that you acquire this knowledge faster. So it's a balance, kind of a philosophical
00:19:32
discussion. If you have any opinions on that, on what is missing or what should be there, put a comment in the chat, even if you're watching the replay, because this is an ongoing discussion, and people are looking for input from everybody involved in this field. It's going to be dynamic: there will be curricula and courses that need to be updated probably every other year. But let's go to our next paper, which is region-based segmentation of
00:20:12
lymph node metastasis in whole slide images of colorectal cancer, a pilot clinical study. Let me put a heart next to this publication, because this is a no-brainer, low-hanging-fruit use of digital pathology and AI that can improve any diagnostic workflow where you need to evaluate lymph node metastasis. You have probably heard about the CAMELYON challenges in 2016 and 2017, in the computer vision
00:20:56
medical space. These competitions let different computer scientists and companies do something on medical images, and in the CAMELYON challenge the epithelial cells, like chameleons, are hiding in lymph nodes and you need to detect them, because that defines whether there is a lymph node metastasis or not, and AI is very good at doing that. So let's see what happened in this particular study. I love this abstract because it's
00:21:38
so straightforward: we did this. The study presents a two-stage computer vision model designed to detect colorectal cancer metastasis in whole slide images of lymph nodes. I love this study design and the methods. It was a classification and segmentation pipeline optimized for both accuracy and efficiency. The model was trained on 108 whole slide images and evaluated on 554 whole slide images collected from two institutions, with two scanners, an Aperio AT2 and a Hamamatsu NanoZoomer S360. So we have
00:22:16
different institutions and different scanners, trying to account for domain shift, which is basically variability stemming from different sources like an institution or a scanner. Really nice. The classification model achieved a recall of 1.0 and a specificity of 0.935, super high, while the segmentation model reported a Dice coefficient of 0.818, which is also super high. Pathologists appreciated the model's precision in distinguishing
00:22:58
solitary cancer cells from histiocytes, reducing the need for peer consultation. Histiocytes, which are the tissue macrophages, may often look similar to epithelial cells, and there's this question: is this a metastasis, an epithelial cell, or not? AI was pretty good at distinguishing this, so the pathologists didn't need to consult as much; they still could, and they consider it a valuable second opinion enhancing diagnostic confidence. Super straightforward use. And
00:23:39
what they conclude is that AI can streamline workflows, improve diagnostic accuracy, and support personalized treatment planning, and that this integration of AI into pathology workflows can redefine diagnostic standards while maintaining the critical role of pathologists. We talked about that last week. If you have not seen the live stream or listened to the podcast last week, we covered a publication about what patients in Europe think about the use of AI in medicine and pathology, and they said: if a
00:24:14
doctor looks at it and if it's validated, then it's fine. If it makes the diagnosis faster, better, more reliable, and leads to more personalized treatment, it's okay to use AI, but keep the pathologist, the doctor, in the loop. That was the critical message of that other publication, and we want to keep it that way; I want to keep it that way. Let me know your thoughts and any questions, and let's see what else we have today. These references are super long. Okay,
00:25:04
this is an interesting one: an automated non-invasive burn diagnostic system for healthcare using artificial intelligence. I love the acronym, AMBUSH. Where does this come from? AMBUSH AI, automated non-invasive burn diagnostic... I don't know. Let me know in the comments if you see how this AMBUSH acronym stems from the title; maybe I'm just missing something very obvious. And I have a few longer comments that I'm going to show after we look at this one.
00:25:52
I like this one for a couple of reasons. First of all, it's a combination of modalities; that's one of the things I like. They wanted to develop technology to predict burn wound depth using a combination of FDA-approved ultrasound modalities and interpretation of those images using artificial intelligence. So we have imaging, ultrasound with FDA-cleared devices and approved modalities, and then we have AI using these images, but not only these images. How does this
00:26:30
usually work? Physical examination by a burn surgeon is the diagnostic gold standard to determine the need for burn surgery, and the problem is distinguishing between deep partial-thickness and third-degree burns to determine the need for surgery. This is their problem, the ultimate diagnostic challenge, and the accuracy for this process is 76% for burn experts and 50% for non-experts. I don't know how many burn experts we have in this field, but usually you have fewer
00:27:10
experts than non-experts in any discipline. If non-experts are looking at it with 50% accuracy, that's a coin flip. So let's see if AI can do better; spoiler alert, it can. Another thing I like about this paper is that they worked with a pig burn model first. I'm a veterinary pathologist, and veterinary pathologists are very much translational scientists, so I see the translational aspect of the research here. They had a pig burn model with 12 pigs, which was used to develop the
00:27:49
initial AI framework. Here we have how veterinary pathology, veterinary medicine, supports something that can later be used in patient treatment. I got this question when I was recently interviewed for The Pathologist, a publication that supports pathology and pathology laboratories. I was speaking at a conference in London, the digital pathology and AI conference by Global Engage, and they asked me: how can veterinary pathology support anything that later happens with patients? And
00:28:27
there are multiple aspects of that, but this translational aspect is one of them. So you had an animal model of a human disease, you developed an initial framework, and then it was subsequently tested in a non-randomized prospective study of human subjects with thermal burns. There were 30 subjects in this study, and they used images from tissue Doppler elastography imaging, TDI, to measure tissue stiffness, and harmonic B-mode ultrasound to identify anatomic landmarks; also, digital
00:29:01
photographs were collected. Not only this: biopsies were obtained from five subjects, so not all of them, the ones who went to the OR for debridement, and this served as ground truth for the AI image interpretation. When you look at the numbers here, they're not that high, but the results show that the AI algorithm identified third-degree burns in pigs with 100% accuracy. Obviously there are only 12 pigs, so with a bigger number maybe we would not have 100% accuracy. But it's still pretty high.
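The caution about the 12-pig sample can be made concrete with a confidence interval: 12 of 12 correct still leaves a wide range for the true accuracy. A minimal sketch using the Wilson score interval (my choice of method; the paper is not described as reporting one):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion
    (z=1.96 gives a ~95% interval)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return max(0.0, center - half), min(1.0, center + half)

# All 12 pigs classified correctly, as reported in the study
lo, hi = wilson_interval(12, 12)
print(f"12/12 correct -> 95% CI [{lo:.3f}, {hi:.3f}]")
```

With 12 of 12 correct, the interval stretches down to roughly 0.76, so the observed 100% is consistent with a true accuracy well below that, which is exactly the point made in the episode.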
00:29:45
Well, it's the highest; it cannot go higher than that. The AI method achieved 90 to 95% accuracy in identifying third-degree burns in humans. The conclusion: these results indicate that the strategy of using AI interpretation of B-mode ultrasound and TDI images to increase diagnostic accuracy in predicting burn depth is feasible. How cool would it be to give this tool to the non-experts who are trying to do their best job, to increase their accuracy, and also have a human in the loop, like we are always
00:30:25
saying that this is what we want, right? Let me see, I think this is the last one for today. No, we have something else, something that introduces doubts: benchmarking large language model based agent systems for clinical decision tasks. But first, thank you so much, Sean, for explaining AMBUSH, automated non-invasive burn diagnostic system for healthcare. That was too complex for me. Thank you so much. And
00:31:20
the comment about introducing AI into pathology training: yes, it is coming, and these high-level institutions need to get ahead of it. It needs to be part of training, because if it's part of training, then pathologists, healthcare experts, doctors, are involved in how it develops for the future. If doctors come in without any knowledge of this, they are not part of the conversation. So regardless of the fact that it needs to be updated every year or
00:31:57
every other year, it needs to be introduced already, at whatever level it can be introduced, and I always have this philosophy that something is better than nothing. Reading a book that introduces you to these concepts is better than doing nothing, or waiting for the perfect course. By the way, whoever does not have the book, just get the PDF for free through this link that I'm showing you right now on the screen. If you don't have the
00:32:35
book yet, get it, because this can be your first step, and then you decide what your next step is going to be, which resource you're going to use next. We have some, and there are others, so you choose once you start. Okay, where did my paper go? Yes, here it is. Not all is roses with AI, my friends, my trailblazers, and you probably already
00:33:21
know that there is no black and white. There is a technology, and we're trying to figure out how it can benefit patients, from basic science through translational research to the clinic. In this last paper for today, there was a benchmarking exercise on large language model based agent systems for clinical decision tasks. Agentic AI, I think, is still the new kid on the block. What are these agents? Agents, in short, are AI programs that can coordinate other AI programs. So
00:34:03
agentic artificial intelligence systems, designed to autonomously reason, plan, and invoke tools, have shown promise in healthcare. An agent is going to be something that can, say, retrieve your emails in consumer AI and then prioritize them for you; in healthcare, maybe that's going to be something that prioritizes cases for you. But of course we need systematic benchmarking, and it's limited for real
00:34:36
world applications. They evaluated two such systems: the open-source OpenManus, built on Meta Llama 4 and extended with medically customized agents, and Manus, a proprietary agent system employing a multi-step planner-executor-verifier architecture. When I hear "planner," I think of agents that can plan your trips or book your Airbnb and all these things. So here there was a medicine-specific planner-executor-verifier
00:35:16
architecture, which is like an established architecture in this space. Both systems were assessed across three benchmark families: AgentClinic, a stepwise dialogue-based diagnostic simulation; MedAgentBench, a knowledge-intensive medical Q&A data set; and then, I like the name, Humanity's Last Exam, HLE, a suite of challenging text-only and multimodal questions. Despite access to advanced tools like web browsing, code
00:35:57
development and execution, and text file editing, agent systems yielded only modest accuracy gains over baseline LLMs, large language models: up to 60.3% on AgentClinic-MedQA, 30% on MedAgentBench, and 8.6% on Humanity's Last Exam. So I guess AI is not going to be taking our last exam, but how low is that? Very low. Multimodal accuracy remained low,
00:36:59
below 30%: 15.5% on the multimodal Humanity's Last Exam and 29.2% on AgentClinic. Resource demands increased substantially, with over 10 times the token usage and two times the latency, and 89.9% of hallucinations were filtered by in-agent safeguards. I want to highlight that one, and I'm going to tell you a story about it. Hallucinations remain prevalent, and these findings reveal that current agentic designs offer modest performance benefits at significant computational and workflow
00:37:44
cost. So here's a lesson for Dr. Aleks, meaning yours truly. I always get super excited about these new technologies, and when I learned about agentic AI I thought: oh my goodness, now you don't have to do all these mundane things, opening windows, looking through databases, let's just replace everything with agents. But in this decade in digital pathology I have already learned that I need to keep my own enthusiasm in check, because this already started when deep
00:38:23
learning entered the scene and I thought: oh, classical computer vision, you can throw it in the garbage. Please don't. So I already know myself. The safeguards, I wanted to tell you a story about the safeguards. These safeguards were built into this agentic system, and this was something that was mentioned at the NCCN AI in Cancer Care Summit last year: these tools need to have safeguards. These are basically guardrails, rules such that whatever the answer of the
00:39:03
model or the behavior of the model is, if it's outside of these guardrails, it's filtered out: it's not going to be used as an answer to anything, and it's not going to be used for decisions. It has to be a built-in feature of the AI tools that we're using for medicine. For example, when I use these tools for content creation, or for paper writing, where you now always have to disclose that you used AI, and I use a lot of it, you
00:39:38
can start with: hey, this is what I'm writing. This is what you are allowed to do, and this is what you're not allowed to do. You're not allowed to invent. You're not allowed to go outside of verified web sources. You always need to cite, and things like that. So you can build this into your own AI use as well. But yeah, agents are not there yet; let's keep working on better agents. So the technology, the capabilities, are there, but there is a little bit of a gap between the technology and the
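The guardrail idea described above can be sketched as a simple output filter. This is a hypothetical minimal example, not the actual system from the paper: the rule functions and names here are my own illustration. The point is only the mechanism — any model answer that violates a rule is discarded rather than passed on for decisions.

```python
# Minimal sketch of an output guardrail (hypothetical rules, not the
# paper's implementation): answers outside the guardrails are filtered
# out and never used for decisions.

def within_guardrails(answer: str, rules) -> bool:
    """Return True only if the answer passes every rule."""
    return all(rule(answer) for rule in rules)

# Hypothetical rules: require a cited source, forbid speculative wording.
rules = [
    lambda a: "source:" in a.lower(),      # must cite a source
    lambda a: "i guess" not in a.lower(),  # no hedged guessing
]

def filtered_answer(answer: str):
    # Outside the guardrails -> discard (None), do not use for decisions.
    return answer if within_guardrails(answer, rules) else None

print(filtered_answer("Diagnosis X. Source: WHO classification."))  # passes
print(filtered_answer("Probably benign, I guess."))  # filtered -> None
```

Real clinical guardrails would of course be far richer (grounding checks, confidence thresholds, human review), but the filter-don't-repair pattern is the same.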
00:40:08
practice. Let me see if I have any more updates for you. Oh, yes. I wanted to tell you about a conversation I had recently with Michelle Mitchell, a patient advocate. She was also speaking at Path Visions last year. Path Visions is the annual conference of the Digital Pathology Association, and last year's was very much focused on patients. And I'm writing this book, it's available for pre-sale, and I want to share a post with you. I'm softly starting to announce it. It's not out there yet
00:40:48
in full blow, but let me share a post I posted on LinkedIn. Okay, here: you got the report, but did anyone show you the images? So she was a breast cancer patient and then was involved in cancer care and in advocacy for other patients. She knew a lot about the treatment, went through treatment, and has been involved in patient advocacy for 11 or 12 years, and I'm going to invite her to the podcast. After her diagnosis, after being done with
00:41:43
the treatment, she saw her pathology images. She saw the slides. Even though she already had the knowledge, had the report, could explain the report to herself, she says that looking at the images and realizing what was happening in her tissue was a very powerful stimulus to basically change her life. She lost weight. She started taking care of her health in a totally different way, even though it was already over a decade after she was diagnosed and treated. Seeing
00:42:26
these images gave her a different level of understanding. So this is what this book is going to be about: how to get access to your images, how to talk to your pathologist, and how to develop a relationship and a conversation with your pathologist, with the images, about what's happening in your body. And something else I wanted to show you, because we had a discussion about it on LinkedIn, and obviously one of the objections is: oh well, this is
00:42:55
specialized imaging, patients may not understand it. But giving them the report is already the case: you are entitled to your pathology report, and it's a mandate to release it in the patient portal, often even before you talk to your clinician. And yet there's always this hesitancy. These are specialized images. When you break a wrist, which I had the honor of doing some time ago, you look at the X-ray and you see the break. It's intuitive.
00:43:32
But a pathology image is not intuitive. Well, neither is a magnetic resonance image, nor are ultrasound images, which are not so intuitive either. And yet we get our radiology images, or access to those images. So we have the right to also see our pathology images, and this is what this book is going to be about. If you're interested, follow me on social media and check on the development. Obviously, before USCAP we are focusing on this book, Digital Pathology 101: all you need to know to start and
00:44:09
continue your digital pathology journey. One last time I'm going to give you the code to scan, if you have not yet. The PDF of this book is free, and at USCAP there is going to be an option to get a copy: there will be a limited number of copies in a book giveaway. If you already have the book, I'm going to be there signing books. And if you are planning on going to USCAP, just send me a message wherever. Leave me a
00:44:38
comment under this live stream, send me a message on LinkedIn (mostly on LinkedIn), or send me an email if you're on my email list. I would love to see you there and say hi in person. I hope to see many of you there.