
Digital Pathology Podcast
120: DigPath Digest #21 | AI's Role in Prostate & Breast Cancer Diagnosis and Collaborative Annotation Tools
Welcome to the 21st edition of DigiPath Digest!
In this episode, together with Dr. Aleksandra Zuraw you will review the latest digital pathology abstracts and gain insights into emerging trends in the field.
Discover the promising results of the PSMA PET study for prostate cancer imaging, explore the collaborative open-source platform HistioColAI for enhancing histology image annotation, and learn about AI's role in improving breast cancer detection.
Dive into topics such as the role of AI in renal histology classification, the innovative TrueCam framework for trustworthy AI in pathology, and the latest advancements in digital tools like QuPath for nephropathology.
Stay tuned to elevate your digital pathology game with cutting-edge research and practical applications.
00:00 Introduction to DigiPath Digest #21
01:22 PSMA PET in Prostate Cancer
06:49 HistoColAI: Collaborative Digital Histology
12:34 AI in Mammogram Analysis
17:21 Blood-Brain Barrier Organoids for Drug Testing
22:02 Trustworthy AI in Lung Cancer Diagnosis
30:09 QuPath for Nephropathology
35:30 AI Predicts Endocrine Response in Breast Cancer
40:04 Comprehensive Classification of Renal Histologic Types
45:02 Conclusion and Viewer Engagement
Links and Resources:
- Subscribe to Digital Pathology Podcast on YouTube
- Free E-book "Pathology 101"
- YouTube (unedited) version of this episode
- Try Perplexity with my referral link
- My new page built with Perplexity
- HistoColAI Github Page
Publications Discussed Today:
📰 Can PSMA PET detect intratumour heterogeneity in histological PSMA expression of primary prostate cancer? Analysis of [68Ga]Ga-PSMA-11 and [18F]PSMA-1007
📰 Leveraging explainable AI and large-scale datasets for comprehensiv
Become a Digital Pathology Trailblazer: get the "Digital Pathology 101" FREE E-book and join us!
DigiPath Digest number 21, already 21 weeks of reviewing digital pathology abstracts. Thank you so much for being here for the 21st time. If there is nothing else you want to do to step up your digital pathology game this year, just join these live streams and you're going to see progress. So let's dive into it.
Learn about the newest digital pathology trends in science and industry. Meet the most interesting people in the niche, and gain insights relevant to your own projects. Here is where pathology meets computer science. You are listening to the Digital Pathology Podcast with your host, Dr. Aleksandra Zuraw.
 Welcome, welcome my digital pathology trailblazers.
Conference Recap and Upcoming Content
If you have not been to Path Visions, that was November 2024, you can get the recordings if you want to learn what happened there.
Also, if you want to check out the vibe of the conference, there are going to be a few vlogs and videos coming out on my channel. So if you are on my list, which I assume you are because you're here for DigiPath Digest number 21, welcome everybody! So, the first paper today is:
PSMA PET Study on Prostate Cancer
Can PSMA PET detect intratumor heterogeneity in histological PSMA expression of primary prostate cancer? There were two tracers: radioactive gallium PSMA ([68Ga]Ga-PSMA-11) and radioactive fluorine ([18F]PSMA-1007). These are tracers that you put into the body, like a contrast agent, something that's supposed to attach and then be
easy to image. This is a group from Freiburg, Germany. So what did they do here? PSMA is prostate-specific membrane antigen, and this PET technology, positron emission tomography, is a promising candidate for non-invasive characterization of prostate cancer. This study evaluated whether PSMA PET with these two tracers is capable of depicting intratumor heterogeneity of histological PSMA expression.
Why is this important? Because cancers that express a lot of PSMA react better to treatment, so of course they were looking for a non-invasive way to image this. Pathology, even if it's slide-free, or unless maybe you can do it intraoperatively, still requires tissue, and surgery is invasive, right?
So what did they do? 35 patients with biopsy-proven primary prostate cancer, no evidence of metastatic disease or any prior interventions, were enrolled into this study. Then they did the PSMA PET with computed tomography, using either the gallium tracer for 20 patients or the fluorine tracer for 15 patients.
And then they did the radical prostatectomy. So they had the whole prostate and could do all the IHC they wanted. In addition to that, they did ex vivo CT, and then histopathological sections were prepared.
Then they manually defined areas with different morphologies and determined an H-score of PSMA IHC. What is an H-score? It's a score where you say how much of the tumor is affected and how strong the expression of the marker is; I think 300 is the maximum score. Anyway, there is a way of calculating this. It's a classical way of evaluating immunohistochemistry: how much of the marker is there. And this was calculated with the assistance of AI, which I'm very grateful for, because you get much more accurate results that way than by visually estimating the H-score and all these other numeric scores without actually counting the cells, even though the score is based on the number or percentage of cells.
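To make the H-score concrete, here is a minimal sketch of the classic formula; the staining percentages are made-up illustrative numbers, not the study's data:

```python
def h_score(pct_weak, pct_moderate, pct_strong):
    """Classic IHC H-score: 1*(% weakly stained cells) +
    2*(% moderately stained) + 3*(% strongly stained); range 0-300."""
    return 1 * pct_weak + 2 * pct_moderate + 3 * pct_strong

# e.g. 20% weak, 30% moderate, 10% strong PSMA staining
score = h_score(20, 30, 10)  # 20 + 60 + 30 = 110
```

Those per-cell intensity percentages are exactly what AI can count instead of a pathologist eyeballing them.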
They used AI for the H-score. They had these prostate cancer areas with similar H-scores, and then they unified them in the segmentation on ex vivo CT.
So they checked, because with CT you can do CT sections, how that matches the IHC of the real sections. And they calculated different metrics, right? One of them being the standardized uptake value, SUV. I love it. So the SUV and different other metrics were calculated.
They also checked the agreement of the co-registered tumor areas with the gross tumor volume in PSMA PET. And what were the results here? For the histological prostate cancer areas, IHC PSMA expression correlated significantly with the SUV (standardized uptake value) mean and max, and there was an approximately linear correlation between the H-score and the SUV mean value in tumor areas larger than 400 micrometers in histology. Tumor areas with strong PSMA expression showed larger overlap with the gross tumor volume in PSMA PET after coregistration than tumor areas with very low PSMA expression.
And there was no difference between the two tracers. The conclusion: PSMA PET with both tracers, gallium and fluorine, is able to detect changes in histological PSMA expression within prostate cancer lesions, allowing biologically targeted radiotherapy.
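An "approximately linear correlation" between H-score and SUVmean is the kind of relationship you would check with a simple Pearson correlation; here is a sketch with hypothetical H-score/SUVmean pairs, not the study's data:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient: covariance divided by the
    product of the standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# hypothetical, roughly linear H-score vs. SUVmean pairs
h_scores = [30, 90, 150, 210, 280]
suv_mean = [2.1, 4.0, 6.2, 8.5, 10.9]
r = pearson_r(h_scores, suv_mean)  # close to 1 for near-linear data
```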
Yes, but note they don't say that you don't need to do IHC anymore. We have a lot of papers today, so I'm just going to keep going.
Okay.
HistoColAI: Collaborative Digital Histology
HistoColAI, called that because it's collaborative. This is a group from Valencia, with some members from Quebec, Canada. HistoColAI is an open-source web platform for collaborative digital histology image annotation with AI-driven predictive integration. Sounds super cool. Why does it sound super cool to me? Because doing annotations and providing pathology input for image analysis projects is notoriously challenging, since there is not really any infrastructure that can be used.
It's notoriously challenging in the image analysis world, even in institutions where this is something you do every day. It's basically a string of workarounds: how to give pathologists, who don't always do the image analysis themselves, access to those images, to review the markup, to provide annotations for the image analysis scientists.
I have seen very few good workflows. That's why I'm excited about this HistoColAI, and it's open source. As we said, people from Valencia and Quebec did this. They are very optimistic, because they say digital pathology is now a standard component of the pathology workflow.
That would be so cool if it were standard everywhere. In this group it is, and we're getting there; every single one of you is helping with it, because you guys are here and want to learn about it. Digital pathology is supposed to be standard, but there is a significant challenge in developing computer-aided diagnostic systems for pathology: a lack of intuitive open-source web applications for data annotation. And I couldn't agree more.
And there's another problem with data annotations: they disappear after the project. Although some companies basically offer this as a service, and I believe they keep their annotations to reuse them, most of the time there is no interoperability of annotations between different software packages.
So basically hundreds or thousands of hours of work by pathologists and tissue specialists who annotate for model development just disappear because of the lack of interoperability. This paper proposes a web service, and the next step for me will be to check out this web service, that effectively provides a tool to visualize and annotate digitized histology images.
It can use different formats, but they have it working best on the TIFF format. And this integration, they say, not only revolutionizes accessibility but also democratizes the utilization of complex deep learning models for pathologists unfamiliar with such tools. Okay, I don't know; I would need to read the paper, because they're talking about providing annotations and collaborating on annotations, not deploying models.
But maybe they do deploy models. So they present a use case centered on the diagnosis of spindle cell skin neoplasm involving multiple annotators. So that's what's going to be described in this paper. And they also conduct a usability study showing the feasibility of the developed tool.
Interesting. And this is open source. My problem as a non-computer scientist with these open-source things is that when I go to GitHub, I never know what to do with the application. QuPath, at least, I can download. Maybe this one is easy too. So I'll check, and you can check as well and let me know in an email or in a comment on this video.
Because that would be promising. You know what my video editor suggested I do? A virtual annotation session. So if you're interested in joining me for a live stream of annotations: like the gamers who just play the game while you watch, and they're there in the corner with their gaming screen.
Something like that, but for annotations. That would require a tool for annotations, and also images, right? So if you want to take part in something like that, let me know in the comments. Just write "annotations" and I will know that you're interested and would like to join. You can also write the organ that you would like to have annotated. If you have a specific annotation question you're struggling with right now, let me know in the chat: "annotations" and the organ you want annotated, and I'll figure out how to organize a live annotating session. Going back to our papers, the next one is interesting.
Very interesting. It's from European Journal of Radiology Open, and it's called "Enhancing Detection..." Okay, perfect, we already have an annotation livestream request: cancer. Okay. And if you guys have datasets and would just want me to access the datasets and annotate with you live...
If that's allowed for the dataset you have, obviously, let me know and we can figure it out. That would be super cool, and then you can just have the link and always know how to annotate.
AI in Mammogram Analysis
Okay, this is a group from Egypt, and they did this study: enhancing detection of previously missed non-palpable breast carcinomas through AI.
And my main question when I was reading it was: what about the patients? But I'll tell you why in a second. The aim was to investigate the impact of AI reading digital mammograms on increasing the chance to detect missed breast cancers. So they did a study: they had AI flag early morphology indicators overlooked by the radiologists.
And then they correlated those with the missed cancer pathology types. So, mammograms done between 2020 and 2023, almost 2,000 of them, were analyzed in concordance with the prior years' results (2019 to 2022), which had been assumed negative or benign.
The results: in the prior mammograms, AI marked 555 carcinomas (54%), and in the present mammograms, AI targeted 904 (88%) of carcinomas. The descriptor "asymmetry" was the most common presentation of missed breast carcinomas.
64% in the prior mammograms. The highest detection rate for AI was for distortion, followed by grouped microcalcifications. So these are radiologic features of malignancy that AI was able to detect, and for predicting malignancy in previously assigned negative or benign mammograms it showed a sensitivity of 73%, a specificity of 89%, and an accuracy of 78%.
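As a refresher on what those three numbers mean, here is a minimal sketch computing them from a confusion matrix; the counts are illustrative, chosen to reproduce the reported sensitivity and specificity, and are not the study's actual numbers:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard diagnostic test metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# illustrative counts, not the study's confusion matrix
sens, spec, acc = diagnostic_metrics(tp=73, fn=27, tn=89, fp=11)
```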
The conclusion here is that reading mammograms with AI significantly enhances the detection of early cancerous changes in dense breast tissues, and the AI detection rate does not correlate with specific pathology types of breast cancer, highlighting its broad utility. The AI just detects cancer, not a specific type; basically, it flags cancer.
Then you can do whatever diagnostic workup is necessary later. And subtle mammographic changes not corroborated by ultrasound but marked by AI warrant further evaluation. So if AI marks something, you try to evaluate it. And my question from before: what about the patients? This was a retrospective study, so my question was: okay, if they suspected cancer because it was detected with AI,
what did they do with those people? Because this is recent, right? 2020 to 2023, 2019 to 2022. Did they go to the people and say, hey, actually, without AI this was missed, you have to come back? In the paper they didn't; it was retrospective.
It's one of the ethical questions that comes up when using AI, and feel free to comment on this. This is an abstract, so we would need to dive deeper into the paper.
But that's going to be something for the people who deal with AI ethics, and for the boards; not ethics boards, there is a special board that allows people to do studies, I don't remember the name. That's going to be a question they'll be asking. And also: if you don't have this tool available, or if you didn't use the tool, are you then responsible for missing something you didn't really have the tools to visualize?
I don't have the answer, but it's definitely something that comes up with all those AI-enhanced workflows that are not evenly distributed across the world. And that's why ethics, and bioethics in particular, is a separate discipline in life sciences. That was basically something that came to my mind when I was reviewing this retrospective study.
So if you have any thoughts or comments, let me know, and in the meantime let's go to the next paper.
Blood-Brain Barrier Organoids in Drug Development
Advanced tissue technologies of blood-brain barrier organoids as high-throughput toxicity readouts in drug development. I like it because I'm a toxicologic pathologist. This was Roche pharma research, and shout out to Roche, because Roche just became a Digital Pathology Place sponsor, so whenever my sponsor publishes, I do want to give them a shout-out. This is a group from Basel, Switzerland.
And what did they do? They made those organoids. Recent advancements in engineering complex in vitro models (CIVMs); oh my goodness, by the end of the abstract I don't remember the abbreviations anymore, but this one we will remember because it's just BBB: blood-brain barrier, BBB organoids.
So they did these BBB organoids, which offer a promising platform for preclinical drug testing. However, their application in drug development, and especially for regulatory purposes of toxicity assessment, requires robust and reproducible techniques. Here they have an adapted set of orthogonal image-based tissue methods, including H&E, IHC (immunohistochemistry), multiplex IF, and even MALDI, which is actually the only instance where the abbreviation is easier than the full name, matrix-assisted laser desorption/ionization mass spectrometry imaging, to validate these complex in vitro models for drug toxicity assessment.
Just a step back: what are these orthogonal methods? Basically, you take different methods and show the same thing with each of them. That demonstrates to the regulators that a method which is already approved, accepted, or cleared, something already widely accepted out there, shows the same thing that the novel method shows.
So they had a bunch of them, H&E, IHC, multiplex IF, and even MALDI, and they used AI. Thank you so much for doing this, because when I hear about multiple markers evaluated visually, I cringe; it's difficult, if not impossible. They used AI to increase the throughput, first of all, and probably the reproducibility and reliability of histomorphologic evaluation of apoptosis for in vitro toxicity studies. So apoptosis was the feature they used AI for; they quantified apoptosis with AI. Their data highlight the potential to integrate advanced morphology-based readouts, such as histological techniques and digital pathology algorithms, for use on complex in vitro models as part of standard preclinical drug development assessment.
And why is this important? There is the rule of the three Rs for toxicologic studies, and a big part of those studies happens in animals. So you want to refine your study design, replace the animals, and... something else. Let me look up the three Rs for you on my phone.
The three R rule in toxicological studies stands for replacement, reduction, and refinement. Okay: replacement, reduction, refinement. Basically meaning as few of those animals as possible while still having a scientifically valid conclusion. So these complex in vitro models are, or can be, part of this initiative, because the more information you can get in vitro, the more reduction, refinement, and replacement you can do in the next studies, which are the animal studies, right?
Super excited about that cool method, and now let's move on. Okay, I have some comments saying that an annotation session could be fun, and different organs you would like to just hang out and annotate. Perfect, I'll figure out how to do it and what tools to use. And if you are a vendor and would like to sponsor such an annotation session in your tool, I'm totally open to that.
Let's talk about that and let's do some annotations together. There are cool commercially available tools to do that, but the point of the session is going to be, okay, showing you the structures to annotate and if you have something for specific projects, let's do it.
Trustworthy AI in Lung Cancer Diagnosis
And now let's talk about implementing trust in non-small cell lung cancer diagnosis with a conformalized, uncertainty-aware AI framework in whole slide images.
When I read this title, I'm like, what is it, implementing trust? But basically, it's about building trust in those AI models with different computational metrics; you know, I'm a pathologist, but I'll try to explain as best as I can from the abstract. Ensuring trustworthiness is fundamental to the development of AI. I would say not so much for the development, but definitely for the deployment of AI that is considered societally responsible. Yes, particularly in cancer diagnosis. And here we go into the ethics field as well. Okay, how can we do that?
How trustworthy are those algorithms, right? Because misdiagnosis can have dire consequences, of course, but that can happen with or without AI. And current digital pathology AI models lack a systematic solution to address trustworthiness concerns.
Yes, I totally agree with that. And there isn't really one framework for how to validate those.
The awareness of having to do that is a lot bigger now than maybe in 2012, 2013, 2014, when I was starting to work with this. Back then, you would basically publish data where your training set was also your test set, so you would just have a super-overtrained model or image analysis solution, publish that, and say, oh, that's fantastic.
And people who did not know the math and statistics behind it didn't even realize it was a flaw of the study, right? I was only pointed to it by my computer science colleagues telling me, hey, tell your pathology colleagues to stop validating on the training set, and I'm like, what? Then I understood. Anyway, regarding trustworthiness, there are some concerns.
They arise from model limitations and data discrepancies between model development and deployment environments. This is super important: discrepancies between development and deployment. So let me talk a little bit about that.
If you want to build a medical device, let's say an AI model that everybody can use as a medical device, do you have data from everybody to train on? No. So you figure out ways to make this algorithm generalizable. But is there a guarantee that it's so generalizable that it's going to work on everything?
No. Development and deployment, right? Even if you have the most generalizable model on the planet, if there is data from outside your cohort, there is no guarantee it's going to be the same type of data you trained your algorithm or model on. So what can we do to make sure your algorithm is even an appropriate tool for the new data that's coming in? This is what those researchers did. They developed TrueCam, a framework designed to ensure both data and model trustworthiness in non-small cell lung cancer subtyping with whole slide images.
Non-small cell lung cancer was basically their use case; you can probably do it for anything. Within TrueCam there are different computational things they are doing. One of them is a spectral-normalized neural Gaussian process for identifying out-of-scope inputs.
Would you think this is controversial in the image analysis world? It is, because there are two different camps. The controversy is: you have a dataset; is it more consistent, or reliable, to use one algorithm for your whole data cohort, or is it more consistent and reliable to use the appropriate method for different parts of your data?
I am definitely a proponent of the second approach, where you use the tool that is appropriate for your data. And this was a bigger issue before deep learning, because deep learning is definitely more generalizable than the hard-coded, thresholding, handcrafted-feature approach we had before deep learning, where, for example, for IHC detection you would define the size of the object and the threshold of the DAB, the brown stain, and basically manually say: okay, most of my data looks like this, so this is what's going to be applied.
But you will always have data that are out of scope, as they call it. Let's say a different run of IHC from the same lab that was a little fainter. What do you do? Do you run the same algorithm on it, and then have a readout that does not really match the tissue? Or do you exclude it from your data pool and then use different methods across your data cohort? I am a proponent of different methods that give you a good result that matches the tissue. There are people who think differently, but to address this, TrueCam was developed.
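To make the fainter-run failure mode concrete, here is a toy sketch of the handcrafted thresholding approach; the optical-density values and threshold are hypothetical, purely illustrative:

```python
def dab_positive_fraction(od_values, threshold=0.3):
    """Handcrafted-feature style scoring: fraction of pixels whose DAB
    optical density exceeds a fixed, manually chosen threshold."""
    positive = sum(1 for od in od_values if od >= threshold)
    return positive / len(od_values)

normal_run = [0.1, 0.5, 0.6, 0.2, 0.45]
faint_run = [od * 0.5 for od in normal_run]  # same tissue, fainter staining

frac_normal = dab_positive_fraction(normal_run)
frac_faint = dab_positive_fraction(faint_run)
# the fixed threshold under-calls positivity on the fainter run
```

The threshold that fits most of the data silently under-calls the fainter run: exactly the out-of-scope situation a framework like TrueCam is meant to flag.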
So the first thing was: okay, we check which of our data are out of scope. And this is fantastic, because you can already say: I will use this particular tool on part of my data; the rest maybe has to go to a different algorithm, or maybe has to be visually assessed by the pathologist.
The second thing: ambiguity-guided elimination of tiles, to filter out highly ambiguous regions, addressing data trustworthiness. I think that's good; if it's ambiguous, why force a tool you don't know will work? And then, conformal prediction to ensure controlled error rates.
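The abstract doesn't spell out TrueCam's exact procedure, but the generic split conformal prediction recipe it builds on looks roughly like this; the class names, calibration scores, and probabilities below are hypothetical:

```python
import math

def conformal_threshold(cal_scores, alpha=0.1):
    """Split conformal prediction: take nonconformity scores from a held-out
    calibration set (1 - model probability of the true class) and return the
    quantile that guarantees an error rate of at most alpha."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))
    return sorted(cal_scores)[min(k, n) - 1]

def prediction_set(class_probs, qhat):
    """All classes whose nonconformity (1 - p) falls under the threshold.
    Larger sets signal higher uncertainty."""
    return {c for c, p in class_probs.items() if 1 - p <= qhat}

# hypothetical calibration nonconformity scores and subtype probabilities
cal = [0.02, 0.05, 0.10, 0.15, 0.30, 0.40, 0.55, 0.70, 0.80, 0.90]
qhat = conformal_threshold(cal, alpha=0.2)
subtypes = prediction_set({"LUAD": 0.85, "LUSC": 0.10}, qhat)
```

A slide whose prediction set contains both subtypes, or is empty, is a natural candidate to route to the pathologist.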
Then they systematically evaluated this framework across multiple large-scale cancer datasets, leveraging both task-specific and foundation models. So they did it for different models, right? The narrow ones and the foundation models.
They illustrate that an AI model wrapped with TrueCam significantly outperforms models that lack such guidance. So you can wrap your model in TrueCam, basically to flag the data you can already predict your model is not reliable on. I would like to have this discussion with a computer scientist, because that seems to me like a cool way of triaging the data that comes into your pipeline: okay, this has to go to a pathologist, the rest can be done with AI plus pathologist, or something like that.
Let me know what you think about it in the comments, especially my image analysis folks here on the line. So, what did we say? An AI model wrapped in TrueCam significantly outperforms models that lack such guidance in terms of classification accuracy, robustness, interpretability, and data efficiency, while also achieving improvements in fairness.
These findings highlight TrueCam as a versatile wrapper framework for digital pathology models with diverse architecture designs, promoting their responsible and effective application in real-world settings. Cool! So you can basically wrap whatever you develop into this, and you will know whether what you develop is going to work well on certain tiles, certain data types, certain image types.
I think it's cool. And I see more people joining a little bit later. Let me know where you're tuning in from. And we are gonna move on to the next paper that talks about QuPath, which is open source.
QuPath Extension for Nephropathology
Cool. There's a video on the channel about how to annotate in QuPath. It's funny: every now and then I make those videos because it's a super useful tool, but then I don't use the tool, and before the next video I have to reteach myself how to annotate in QuPath. Now I can watch my own videos to teach myself. There's also one for Aperio ImageScope.
So QuPath extension for glomerulosclerosis and glomerulonephritis characterization based on deep learning.
Deep learning plugins or additions to QuPath are a recent development. This is a group from Germany, and also from Spain and Italy; why did I not highlight them? Sorry for that. And what happened here? The digitalization of slides has opened new opportunities for pathology, such as the application of AI.
Let's see, where was it published? The Computational and Structural Biotechnology Journal (Comput Struct Biotechnol J), something like that. And I didn't check the impact factor for you.
But specialized software is necessary to visualize and analyze these images. That is true; you will not open them in the Windows image viewer. One of these applications is QuPath, a bioimage analysis tool, and other than the videos, there's also a podcast episode with the person who started QuPath, Pete Bankhead. This study proposes GNCNN, the first open-source QuPath extension specifically designed for nephropathology.
It integrates deep learning models to provide nephropathologists with an accessible automatic detector and classifier of glomeruli. Glomeruli are the basic filtering units of the kidneys. And I'm reading this and thinking: okay, how is it useful to the nephropathologist? It depends, right? If you just want to show glomeruli to a nephropathologist, that is not that useful.
They know where the glomeruli are, but if you want to quantify or count the ones that are sclerotic and count some other ones like glomerulosclerosis and glomerulonephritis, then it's useful because you don't make them count stuff or guesstimate. You show them the markups and they say, yes, no.
Useful or not useful; basically, when it goes into quantification, that is useful. At least that's what I think as a pathologist, because I know where glomeruli are in the kidney. We could annotate glomeruli; maybe we can get access to some kidney datasets and annotate in QuPath or in that other tool we just discussed. What was it? HistoColAI. Ha, I remembered. Okay, back to QuPath: it lets nephropathologists detect glomeruli with high accuracy and categorize them as either sclerotic or non-sclerotic. Okay, that could be useful.
It achieves a balanced accuracy of 98.46%. Furthermore, it facilitates the classification of non-sclerotic glomeruli into 12 commonly diagnosed types of glomerulonephritis. No kidding, that's super useful. I didn't know there are 12 types, but I'm also not a nephropathologist, so excuse my lack of knowledge in this area. Twelve commonly diagnosed types of glomerulonephritis, with a top-3 balanced accuracy of 84%. And GNCNN provides real-time updates of results, which are available at both the glomerulus and slide level.
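Balanced accuracy, by the way, is just the mean of per-class recall, which matters here because sclerotic glomeruli are usually much rarer than non-sclerotic ones; a minimal sketch with made-up labels:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall; unlike plain accuracy, an imbalanced
    class distribution cannot inflate it."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)

# made-up example: rare "sclerotic" class, common "non-sclerotic" class
y_true = ["scl", "scl", "non", "non", "non", "non"]
y_pred = ["scl", "non", "non", "non", "non", "non"]
ba = balanced_accuracy(y_true, y_pred)  # (0.5 + 1.0) / 2 = 0.75
```

Plain accuracy here would be 5/6, flattered by the majority class; balanced accuracy exposes the missed sclerotic glomerulus.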
Who are the authors? Maybe we can contact them and have them show it to us on the channel. Checking if I know anybody... no, but we can contact them. Cool. I'd like to see it live, I'd like to see it working. So if you know any of the authors, send them this live stream, and maybe they'll be interested in showing it, beyond the publication, to the Digital Pathology Trailblazers.
And, if I have the bandwidth, I might reach out to them. So, this allows users to complete a typical analysis task without leaving the main application of QuPath. Huh, here I want to emphasize something: without leaving the main application. I think we talked about this a little at some point, and I called it a first-world problem:
oh, a pathologist needs to click another window to access some results. But if you multiply it by the number of slides the pathologist needs to look at every day, and by the number of wrist movements they need to make to go click somewhere else, it is a hurdle; a digital workflow hurdle. So here they said you don't need to do that, which is fantastic.
I like it, and this tool is the first to integrate the entire workflow for the assessment of glomerulonephritis directly into the nephropathologist's workspace, accelerating and supporting their diagnosis. I would want to learn about their workflow and workspace.
How do they integrate it? Maybe I can invite them to the podcast. We'll see. We'll see. Let me know if you want that
And then another thing about annotations, a lot of annotations.
AI Predicts Endocrine Response in Breast Cancer
Annotation-free deep learning algorithm trained on H&E images predicts epithelial-to-mesenchymal transition phenotype and endocrine response in estrogen receptor-positive breast cancer. Let's start with epithelial-to-mesenchymal transition. What is that, even? So you have two main types of tissue.
That is, epithelial and mesenchymal. Epithelial is basically everything that's on a surface, from the inside or from the outside. And mesenchymal are muscles, blood, and basically connective tissue. And tumors start as epithelial and then can transition to mesenchymal.
So they basically change their tissue type, because tumors are basically cells out of control. And they do that, and it correlates with the endocrine response. That epithelial-to-mesenchymal transition is something visible; you can see it in the slide. So I'm excited about these types of applications, because if there is something you can see, and you can detect and quantify it, and it actually corresponds to or predicts treatment response or molecular expression of something you cannot see, that's fantastic, because then you can detect it in the image and you have an image biomarker, right?
So this is a group from China. Somewhere else? London, UK. And let's talk about it. So recent evidence indicates that endocrine resistance in estrogen receptor-positive breast cancer is closely correlated with phenotypic characteristics of epithelial-to-mesenchymal transition, EMT. EMT, isn't that emergency medical technician?
Not here. It's epithelial-to-mesenchymal transition. Love them abbreviations, you know that. Nonetheless, identifying tumor tissue with a mesenchymal phenotype remains challenging in clinical practice. So they validated the correlation between epithelial-to-mesenchymal transition status and resistance to endocrine therapy in ER-positive (estrogen receptor-positive) breast cancer from a transcriptomic perspective. To confirm the presence of morphological discrepancies in tumor tissue of ER-positive breast cancer classified as epithelial and mesenchymal phenotypes according to the EMT-related transcriptional features, they trained a deep learning algorithm, with the EfficientNetV2 architecture, to assign the phenotypic status for each patient, utilizing H&E slides. Oh, here they used The Cancer Genome Atlas database, TCGA, and the classifier accurately identified the precise phenotypic status. Furthermore, they evaluated the efficacy of the classifier in predicting endocrine response.
So first they checked: okay, with image analysis, is it epithelial or mesenchymal? And then they checked: does it actually correlate with this resistance or response to endocrine therapy? The classifier achieved a prediction accuracy of 81.25 percent, and 88.7 percent of slides labeled as endocrine-resistant were predicted as the mesenchymal phenotype, while 75.6 percent of slides labeled as sensitive were predicted as epithelial phenotypes. So again, mesenchymal are the resistant ones, and you can see mesenchymal in the image. Epithelial, or at least around 75 percent of them, are the sensitive ones, and you can see that they are epithelial. But sometimes it's difficult to visually assess.
So if you can have an AI assistant, that's great. And so their work introduces an H&E-based framework capable of accurately depicting the epithelial-to-mesenchymal transition phenotype and endocrine response in ER-positive breast cancer, demonstrating potential for clinical application and benefit.
I love these applications where you have something visually identified and then you can relate it to something actionable in the patient care journey.
Okay, my friends, my digital pathology trailblazers, we have one more paper today. That is called, that is titled,
Comprehensive Classification of Renal Histologic Types
Leveraging Explainable AI. That ties into what I just said. If you see it, it gives you a level of explainability, at least to a pathologist, because it's a feature of the tissue that is associated with some specific pathophysiology, right?
And they already know, but now if you can quantify, that's even better.
Leveraging explainable AI and large-scale datasets for comprehensive classification of renal histologic types. Okay, so this is a group from Korea, and this was published in Scientific Reports.
So, what did they do? They also used AI in genitourinary pathology, and recent research focused on classification of renal cell carcinoma subtypes. The most recent research, which we just reviewed with QuPath, focused on classifying glomeruli. But, joking aside, classifying renal cell carcinoma is a big thing in general; classifying all the different types of tumors is a big thing in this image analysis community and in this digital pathology slash AI space.
And that's where the focus is, still, and was for a long time: cancer, right? Not inflammatory disease, not so much some specific metabolic diseases like liver steatosis, which we discussed one DigiPath Digest ago, or two? Anyway, big focus on cancer, right? And the broader categorization into non-neoplastic normal tissue, benign tumor, and malignant tumor remains understudied.
This gap in research can be attributed to the limited availability of extensive datasets that include benign tumor and normal tissue in addition to specific types of renal cell carcinoma. So basically, if you only had tissue from renal carcinoma, that's what your model would be focusing on, right? Here they obviously have more different types of renal tissue, and they have a model that answers a different question, that classifies or figures out different things. So their research introduces a model aimed at classifying renal tissue into three primary categories: normal non-neoplastic tissue, benign tumor, and malignant tumor.
And I'm reading this and I'm like, okay, differentiating between these two, benign and malignant, is going to be tough. Between normal and tumor it should be okay. And that's already very useful if you want to, for example, deploy some specific model just for tumor; you would not deploy it on the normal tissue, right?
Utilizing digital pathology whole slide images from nephrectomy specimens.
2,535 patients. That's a lot; that's a huge dataset compared to the majority of papers. And from multiple institutions; the model provides a foundational approach for distinguishing these key tissue types. The study utilizes a set of over 12,000 whole slide images: about 1,300 normal tissues, 700 benign tumors, and 10,000 malignant tumors.
We have more or less comparable normal and benign, and we have a lot of malignant tumors, so they already have a bias in their data: they have tumors, a lot of tumors. Employing the ResNet-18 architecture and a multiple instance learning approach, the model demonstrated high accuracy, with F1 scores of 0.934 for normal tissue, then we see benign tumors are lower, 0.684, and we're back to higher, 0.878, for malignant tumors. The overall performance was also notable, achieving a weighted-average F1 score of 0.879 and a weighted-average area under the receiver operating characteristic curve of 0.969. So let's talk about that. What do they call the overall performance? The overall performance: you put all those categories together, and if most of them are good, it's going to be high. But clearly the, sorry, not normal, the benign tumors are underperforming. So if it was me, I would lump all the tumors into "tumor" and then give it to a pathologist, because my comment on this one was: cool, but what for? I don't know.
I don't know. Sometimes it's useful to just develop the technology, and then the use case for the technology shows up later, or it can be leveraged in a different context.
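The weighted-average F1 the paper reports weighs each class's F1 by its number of slides, so the roughly 10,000 malignant slides dominate the average and mask the weaker benign class, which is exactly why the overall 0.879 looks better than the 0.684 for benign tumors. A small sketch in plain Python, using the per-class F1 scores and the rounded slide counts quoted above (because the counts are rounded, the weighted result comes out near, not exactly at, the paper's 0.879):

```python
def weighted_average(scores, supports):
    """Support-weighted mean: classes with more slides dominate."""
    total = sum(supports)
    return sum(s * w for s, w in zip(scores, supports)) / total

# Per-class F1 from the abstract, slide counts rounded as quoted above
f1 = {"normal": 0.934, "benign": 0.684, "malignant": 0.878}
n = {"normal": 1300, "benign": 700, "malignant": 10000}

macro = sum(f1.values()) / len(f1)  # every class counts equally
weighted = weighted_average(list(f1.values()), list(n.values()))
print(f"macro F1 = {macro:.3f}, weighted F1 = {weighted:.3f}")
```

The macro average (about 0.83) sits visibly below the weighted one because it gives the underperforming benign class a full one-third vote, which is the number I'd look at given this class imbalance.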
I also, full disclosure, I didn't go deep into the paper. This is my opinion based on the abstract. Let me know your opinion.
Conclusion and Call to Action
Let me know also if you have any questions related to this live stream, or unrelated to this live stream but related to digital pathology. We're focusing on digital pathology here, and whenever there are enough questions, I'll start Q&A live streams.
If you're interested in a live annotation live stream, let me know in the comments as well: write "annotations" and add the organ you're interested in, regardless of whether you're here live or watching the recording. And if you think this DigiPath Digest is useful, give it a thumbs up and subscribe to the channel; that helps it get to more people who have similar interests to you and helps us promote digital pathology.
Thank you so much for joining. Thank you so much for staying till the end. Like, subscribe, and see you next Friday. Bye, take care. Stay warm, if your place is cold.