Digital Pathology Podcast
225: Artificial Intelligence in Oral Oncology: Diagnosis and Therapeutic Integration
Paper Discussed in this Episode: Artificial intelligence in oral oncology: Current advances and future potential in diagnosis, prognosis, and therapeutic decision-making. Annamalai A, Dhanes V, Jayalakshmi L, Shanmugam R, Ravi S. Cancer Treatment and Research Communications 47 (2026) 101193.
Episode Summary: In this journal club deep dive, we explore how AI is fundamentally reshaping the clinical management of Oral Squamous Cell Carcinoma (OSCC). We examine a comprehensive March 2026 study that confronts a frustrating paradox: despite the oral cavity being visible to the naked eye, OSCC survival rates have stagnated due to late-stage diagnosis and complex tumor biology. This episode breaks down how algorithms are moving oncology from a reactive discipline to a highly predictive, personalized science.
In This Episode, We Cover:
• The OSCC Paradox: Why relying on traditional visual inspection and standard TNM staging ignores biological heterogeneity, and how AI steps in where the naked eye and basic anatomy fall short.
• Pocket Pathologists: The revolutionary use of Convolutional Neural Networks (CNNs) in smartphone apps and portable devices, achieving 82% to 92% sensitivity for point-of-care screening in resource-constrained settings.
• The Committee of Algorithms: How AI acts as a "multimodal synthesizer," fusing radiomics (tumor texture), histopathology (tumor-infiltrating lymphocytes), genomics, and Natural Language Processing (NLP) of unstructured clinical notes to predict individualized risk.
• Real-Time Margin Guidance: How AI combined with fluorescent imaging provides surgical margin feedback to surgeons in the operating room in under five minutes with over 85% concordance with expert histopathologists.
• Digital Twins: The sci-fi reality of running virtual clinical trials. We discuss how AI uses reinforcement learning to build simulated patient copies, allowing tumor boards to predict radiotherapy outcomes and drug toxicities before treating the physical person.
• The Black Box, Bias, and the Fix: The major roadblocks preventing immediate clinical rollout. We discuss opaque decision-making and training data bias (which can drop accuracy by over 15% in underrepresented groups). We also explore the solutions: Explainable AI (Grad-CAM heat maps) to visualize decision logic, and Federated Learning (privacy-preserving decentralized training) to eliminate data sharing hurdles.
Key Takeaway: The true value of AI in oral oncology isn't in replacing human clinicians, but in digesting massive multi-omics data that no single human could synthesize alone. By acting as a transparent, explainable support tool, AI is setting the stage for a future where tomorrow's healthcare professional might spend as much time treating a virtual patient as the physical one sitting in the chair.
Welcome back to the Digital Pathology Podcast, to all the trailblazers tuning in today. We've got a really exciting journal club style deep dive lined up for you.
Yeah, we are diving into some pretty incredible territory today. It's uh it's a paper that really caught our eye.
Exactly. We're looking at a paper titled "Artificial intelligence in oral oncology: Current advances and future potential in diagnosis, prognosis, and therapeutic decision-making."
Right. Published by Annamalai and his colleagues.
Yeah, in the journal Cancer Treatment and Research Communications. This just came out in March of 2026.
And you know, the mission of this deep dive is to really explore how AI is just fundamentally reshaping the clinical management of oral squamous cell carcinoma, or, uh, OSCC,
because it's a disease where honestly the 5-year survival rates have just frustratingly stagnated. I mean, despite the oral cavity being right there, visible to the naked eye.
It's a huge paradox in the field. We can see it, but late-stage diagnosis and, well, complex tumor biology mean survival rates just aren't improving the way we'd want.
Right. Okay, let's unpack this because this isn't just about algorithms and like robots taking over the clinic. It is about the future of personalized patient care.
Absolutely. It's about giving clinicians a completely new set of tools.
So, starting at the very beginning of the patient journey, right,
the diagnostic front line. We have to move beyond the naked eye, because catching the disease too late is the direct consequence of relying on traditional visual inspection.
Yeah. The naked eye is, well, subjective, and this paper really digs into how deep learning models are changing that,
specifically. They're talking about convolutional neural networks, or CNNs, things like, uh, MobileNet and ConvNeXt,
right, and these models are achieving really high accuracy. The paper notes an AUROC of up to 0.863 in distinguishing OSCC from oral potentially malignant disorders, or OPMDs,
which is, I mean, 0.863 is incredibly high for this kind of distinction.
It really is. And they aren't just running these in massive server farms. They're integrating these CNNs into smartphone apps and portable autofluorescence devices,
right, for point-of-care screening, especially in resource-constrained settings, which is huge. I think the paper said they were boasting sensitivities between like 82% and 92%.
Yeah. 82 to 92% sensitivity just from a mobile device.
It's almost like having a pocket pathologist, right?
It is. Yeah.
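(A quick aside for anyone following along at home: here's a minimal sketch of how the two screening metrics quoted above, sensitivity and AUROC, are computed from a model's predicted probabilities. The labels and scores below are invented for illustration; only the metric definitions are standard.)

```python
# Illustration only: computing sensitivity and AUROC from hypothetical
# model outputs. 1 = OSCC, 0 = OPMD; all probabilities are made up.
from sklearn.metrics import recall_score, roc_auc_score

y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_prob = [0.90, 0.80, 0.75, 0.60, 0.40,   # model's predicted P(OSCC)
          0.45, 0.30, 0.20, 0.10, 0.05]
y_pred = [int(p >= 0.5) for p in y_prob]  # threshold at 0.5 for a yes/no call

print(recall_score(y_true, y_pred))   # sensitivity (true positive rate): 0.8
print(roc_auc_score(y_true, y_prob))  # AUROC across all thresholds: 0.96
```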
But okay, I do have to push back a little here, because it sounds great on paper. But if you take this into the real world, say the lighting in a rural community clinic is just terrible, or the smartphone camera lens is smudged, doesn't this highly trained AI just kind of fail?
Well, yeah, that is a completely valid concern, and honestly, it's one the authors acknowledge. Real-world image variability is exactly why these models experience performance drops when you take them out of the lab.
Right. Because the lab is perfect.
Exactly. In the lab, you have perfect lighting and curated clinical photos. But this is why algorithms must be trained on diverse real-world conditions.
Messy data,
right? Messy data. If you only train it on pristine photos, it fails when it sees a shadow. It has to learn what a smudge looks like or what bad fluorescent clinic lighting looks like to still find the tumor.
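(In practice, "training on messy data" is often done with aggressive augmentation: synthetically degrading pristine photos during training. Here's a minimal, hypothetical sketch using torchvision-style transforms; the specific transforms, parameters, and choice of MobileNetV2 backbone are our own illustration, not the paper's recipe.)

```python
# Hypothetical sketch: simulating "messy" point-of-care conditions (bad
# lighting, blur, sloppy framing) during training, so the screening CNN
# doesn't only ever see pristine, well-lit clinical photos.
from torchvision import models, transforms

messy_train_transforms = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),      # framing varies at point of care
    transforms.ColorJitter(brightness=0.5, contrast=0.4,
                           saturation=0.3, hue=0.05),          # clinic lighting varies wildly
    transforms.GaussianBlur(kernel_size=9, sigma=(0.1, 3.0)),  # smudged / defocused lens
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],           # standard ImageNet stats
                         std=[0.229, 0.224, 0.225]),
])

# A lightweight backbone of the kind named in the paper, set up for a
# two-class OSCC-vs-OPMD screening task:
model = models.mobilenet_v2(num_classes=2)
```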
That makes total sense. So, moving on from just identifying the cancer, we shift to predicting its behavior.
Right. Prognosis. Because knowing a patient has OSCC is really only step one.
Exactly. Understanding how aggressive that specific tumor is going to be means moving way beyond traditional staging.
Traditional TNM staging, right? Tumor, node, metastasis.
Yeah, TNM staging is anatomically useful. Of course, we need it, but it totally ignores biological heterogeneity. Two tumors might look exactly the same size anatomically, but behave completely differently,
right? So, AI kind of acts as this multimodal synthesizer or like a crystal ball fusing all these different data streams.
It really does. It takes radiomics, you know, the tumor shape and texture from CT or MRI scans,
the stuff the human eye can't even quantify.
Exactly. And it fuses that with histopathology. So it's quantifying things like tumor-infiltrating lymphocytes, or TILs, on the biopsy slides.
Oh wow. And then adding genomics on top of that, right? Like TP53 and EGFR mutations.
Yep. So it's pulling all of those together. But then it's also using natural language processing, NLP, to mine unstructured electronic health records for hidden prognostic flags.
Wait, reading the doctor's actual notes?
Yeah, the unstructured clinical narrative.
Okay, but this is where I'm a bit skeptical because it's like trying to predict the weather by only looking at the clouds, which is traditional TNM staging, versus having satellite data, wind speed, and historical climate patterns all at once. That sounds amazing.
It is.
But the paper notes that these NLP and speech-recognition tools currently show massive variability, like error rates ranging from 8.7% to sometimes over 50%.
Yeah, the error rate can be high because clinical language is messy.
So, with error rates occasionally topping 50%, are we really risking a patient's prognosis on misunderstood clinical notes? Like the AI reads "no evidence of disease" as evidence of disease.
Right. And if we connect this to the bigger picture, this is exactly why AI cannot operate in a vacuum. You would never rely solely on the NLP data.
Okay. So, it's a checks and balances thing.
Precisely. The true power here lies in ensemble approaches. The system cross references that potentially flawed NLP text data with the hard genomic and radiomic data.
Oh, I see.
Yeah. So, if the text says something alarming, but the TILs and the TP53 status say it's low risk, the model weighs the hard data more heavily to create a robust, individualized risk profile.
That makes a lot more sense. It's not just one algorithm making the call. It's a committee of algorithms.
Exactly. An ensemble.
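(Here's a tiny sketch of what that "committee" could look like numerically: each modality contributes its own risk score, and the ensemble deliberately down-weights the noisier NLP signal relative to the harder genomic and radiomic evidence. The weights and scores are invented for illustration; real systems would learn them from outcome data.)

```python
# Hypothetical "committee of algorithms": each modality produces a risk
# score in [0, 1], and the noisy NLP signal gets the least voting power.
# All weights and scores below are illustrative, not from the paper.

def ensemble_risk(radiomics: float, histology: float,
                  genomics: float, nlp: float) -> float:
    weights = {"radiomics": 0.30, "histology": 0.30,
               "genomics": 0.30, "nlp": 0.10}  # NLP gets the least trust
    scores = {"radiomics": radiomics, "histology": histology,
              "genomics": genomics, "nlp": nlp}
    return sum(weights[m] * scores[m] for m in weights)

# Alarming note text, but low-risk TILs and TP53 status:
print(ensemble_risk(radiomics=0.2, histology=0.15, genomics=0.2, nlp=0.9))
# -> 0.255: the hard data outvotes the possibly misread clinical note.
```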
Okay. So, now that the AI has diagnosed the tumor and predicted its path, how does this data actually change what happens in the hospital like in the operating room or the radiotherapy suite? We're talking about treatment precision now,
right? The actual clinical workflow.
Well, in the OR, AI combined with fluorescent imaging is actually guiding surgical margins in real time.
Real time margin guidance.
Yeah. They're using CNN based analysis of frozen sections and it provides feedback to the surgeon in under five minutes.
Under five minutes. That's incredible. Usually the surgeon sends it to pathology and you're just waiting while the patient's open on the table.
Exactly. And this AI analysis has over 85% concordance with expert histopathologists. So, it's fast and it's highly accurate. And what about in radiotherapy?
In radiotherapy, AI is auto-segmenting tumors on CT scans. And it's even correcting dental artifacts.
Oh, right. Because metal fillings and implants create those crazy starburst flares on a CT scan.
Yeah. They scatter the image, making it super hard to see where the tumor actually is or where the healthy tissue is. The AI cleans that up so we can protect the healthy tissue.
That is huge for oral oncology. But, okay, the paper also brings up something that genuinely sounds like sci-fi. Virtual patients. They call them digital twins.
Digital twins. Yes.
Wait, a digital twin. Are we actually running virtual clinical trials on a simulated copy of the patient before treating the physical person?
That is exactly what we are moving toward. Yes.
How much computing power does that even take? That sounds impossible for a regular hospital.
Well, what's fascinating here is how the AI builds it. It uses reinforcement learning to make these digital twins possible. It integrates a patient's longitudinal data and it allows tumor boards to actually simulate different drug toxicities or radiotherapy outcomes.
So they can say, if we give this dose, the twin's kidneys fail, so let's lower it for the real patient.
Essentially, yes. It predicts how the tumor and the healthy tissue will respond, and the predictive accuracy they are seeing exceeds 85%.
Wow.
But to your point about computing power, you are spot-on. While it's incredibly exciting, the real-time data fusion required makes it very difficult to implement in standard hospitals today. It is very much on the cutting edge,
right? It's not something you're going to find in your local community clinic tomorrow. Not yet.
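(To make the digital-twin idea a little less sci-fi, here's a toy sketch: sweep candidate radiotherapy doses through a simulated response model for one virtual patient and keep the best trade-off between tumor control and predicted toxicity. The dose-response curves below are invented placeholders; the systems the paper describes learn them from longitudinal patient data with reinforcement learning.)

```python
# Toy "digital twin": simulate how one virtual patient responds to a range
# of radiotherapy doses, then pick the best control-vs-toxicity trade-off.
# Both response curves are invented placeholders for illustration.
import math

def tumor_control_prob(dose_gy: float) -> float:
    return 1 / (1 + math.exp(-(dose_gy - 60) / 4))  # control rises around 60 Gy

def toxicity_prob(dose_gy: float) -> float:
    return 1 / (1 + math.exp(-(dose_gy - 72) / 3))  # toxicity rises later

def simulate(doses):
    # Utility: reward tumor control, penalize toxicity for this virtual patient.
    scored = [(d, tumor_control_prob(d) - 1.5 * toxicity_prob(d)) for d in doses]
    return max(scored, key=lambda pair: pair[1])

best_dose, utility = simulate(range(50, 80, 2))
print(f"virtual trial suggests ~{best_dose} Gy (utility {utility:.2f})")
```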
Which brings us to a much-needed reality check. Because if we have five-minute surgical margins and digital twins that are over 85% accurate, why isn't every oncology ward in the world using these tools today,
right? The elephant in the room.
There's a catch.
There are a few, actually. The biggest one is the black-box problem.
The black box, meaning we don't know how the AI is thinking?
Exactly. Clinicians hesitate to trust deep learning models, especially CNNs, because they lack transparent decision logic. The AI says this is cancer or cut this margin, but it doesn't show its work.
And as a doctor, you can't just blindly follow a machine if you don't know why it's telling you to cut away someone's tissue.
You'd be risking your license and more importantly, the patient's life. And then there's the issue of bias,
right? The training data.
Yeah. Models trained on skewed data experience accuracy drops of over 15% in underrepresented demographic groups.
over 15%. That is a massive drop.
It is. If your model was mostly trained on one demographic and a patient from an underrepresented group walks in, the AI might just get it wrong. This risks drastically exacerbating healthcare inequities.
So, we have an opaque box that might be biased. How do we fix that? What are the solutions out there?
The field is leaning heavily into explainable AI or XAI.
XAI. Okay. What does that look like in practice?
It looks like tools such as Grad-CAM and SHAP. Basically, these tools create visual heat maps. They overlay a color gradient on the image to show clinicians exactly why the AI made a choice.
Oh, so it highlights the cluster of cells it thinks are malignant.
Exactly. It says, I think this is cancer and here are the specific pixels that led me to that conclusion. It makes it transparent.
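(For the technically curious, here's a minimal Grad-CAM sketch in PyTorch showing the mechanics behind those heat maps: the last convolutional layer's feature maps are weighted by the pooled gradients of the predicted class, then upsampled into an overlay. The model, layer choice, and random input are illustrative stand-ins.)

```python
# Minimal Grad-CAM sketch: weight the last conv layer's feature maps by the
# pooled gradients of the predicted class to get a "why" heat map.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.mobilenet_v2(num_classes=2).eval()
feats, grads = {}, {}
layer = model.features[-1]  # last conv block
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

image = torch.randn(1, 3, 224, 224)    # stand-in for a lesion photo
logits = model(image)
logits[0, logits.argmax()].backward()  # gradient of the predicted class

weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # pool gradients per channel
cam = F.relu((weights * feats["a"]).sum(dim=1))      # weighted sum of feature maps
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224),
                    mode="bilinear", align_corners=False)
heatmap = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # [0,1] overlay
```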
Okay, so that solves the black-box part. But what about the bias? How do we train the AI on a diverse global population without, you know, violating patient privacy laws by sharing medical records everywhere?
That is where federated learning comes in.
Okay, here's where it gets really interesting. Federated learning, because the way I understand it, it's like a bunch of master chefs sharing a recipe, right?
It's a great way to think about it.
Like they want to improve the ultimate recipe together, but none of them want to reveal the secret ingredients in their own kitchens.
So instead of sending their secret ingredients to a central kitchen, they just cook it locally, figure out what works, and only send the updated instructions back to the group.
Exactly. In federated learning, the algorithm goes to the hospital, trains on their secure local patient data, and then only the updated mathematical weights, the learnings, are sent back to the central global model.
So, no patient data ever leaves the hospital.
None. It preserves patient privacy across institutions while improving the global AI model on truly diverse demographics.
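(Here's a minimal sketch of the aggregation step at the heart of this idea, a FedAvg-style weighted average of model parameters. The site counts and the surrounding local-training loop are assumed; the point is that only parameter tensors, never patient records, ever change hands.)

```python
# FedAvg-style aggregation sketch: each hospital trains a local copy of the
# model on its own secure data, then ships back only parameter tensors.
import torch

def federated_average(local_states, n_samples):
    """Average state_dicts, weighting each site by its dataset size."""
    total = sum(n_samples)
    return {
        key: sum((n / total) * sd[key].float()
                 for sd, n in zip(local_states, n_samples))
        for key in local_states[0]
    }

# Toy round with two hospitals' locally trained weights (illustrative):
site_states = [{"w": torch.tensor([1.0, 2.0])}, {"w": torch.tensor([3.0, 4.0])}]
print(federated_average(site_states, [3, 1]))  # {'w': tensor([1.5, 2.5])}
```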
That is brilliant. It completely bypasses the privacy hurdle.
It does. And addressing algorithmic bias head-on like this is just essential for equitable care. AI has to shift from being this opaque, biased oracle to a transparent, explainable tool.
One that actually supports the critical thinking of the multidisciplinary tumor board rather than trying to replace them.
Absolutely. The human element is still at the center of all this
which brings it all back to you, the trailblazers listening today. Everything we've discussed from this paper by Annamalai and his team shows that AI in oral oncology is transitioning the field. We're moving from a purely reactive discipline to a predictive, highly personalized one.
Yeah. The true value of AI isn't in replacing the clinician. It's in digesting the massive multi-omics data that no single human could possibly synthesize alone.
So, what does this all mean for you?
It means a shift in how you practice,
right? And I want to leave you with a final thought to really mull over as we wrap up. Let's say federated learning and digital twins do become the global standard, which looks likely. Tomorrow's healthcare professional might spend as much time treating the virtual patient as the physical one sitting in the chair,
which is a wild paradigm shift.
But imagine this scenario. An AI perfectly predicts a recurrence for your patient. It tells you exactly what's going to happen.
But the explainable AI, the Grad-CAM heat map we talked about, it glitches. It fails to show you why.
Oh wow.
Do you trust the black box to save a life knowing its statistical track record? Or do you ignore it to protect your medical liability because you can't see the logic? It's a heavy question for the future of medicine. Keep blazing those trails everyone. We will catch you on the next deep dive.