Digital Pathology Podcast

198: AI and Multi Omics Upgrade Gastric Biopsies

Subscriber Episode | Aleksandra Zuraw, DVM, PhD | Episode 198

This episode is only available to subscribers.


Paper Discussed in this Episode: "Transforming Gastric Biopsy Diagnostics: Integrating Omics Technologies and Artificial Intelligence" by Nasar Alwahaibi, published in the journal Biomedicines.

Episode Summary: In this episode, we explore how traditional gastric biopsies are getting a massive, sci-fi-level upgrade. For over a century, diagnostic practice has relied heavily on visual pattern recognition via histomorphology—essentially looking at stained tissue under a brightfield microscope. Today, we discuss the paradigm shift toward data-driven "precision gastroenterology," made possible by merging high-resolution multi-omics technologies with the computational power of artificial intelligence (AI).

Key Topics Covered:

The Limits of the Status Quo: Traditional microscopic evaluation is foundational but limited. It suffers from interobserver variability (human disagreement), sampling limitations, and an inability to fully capture a tumor's biological complexity or predict how a disease will progress and respond to treatment.

The Multi-Omics Revolution: Moving beyond basic static genomics to include transcriptomics, epigenomics, proteomics, and metabolomics provides a comprehensive map of cellular activity—what we call the "active construction site". We highlight a pivotal study by Kamio et al., which demonstrated that knowing a patient's specific TP53 mutation profile (such as the R175H mutation) in early-onset gastric cancer can predict a significantly longer time-to-treatment failure (17.3 months vs. 7.0 months) using oxaliplatin chemotherapy.

AI as the Medical Co-Pilot: Deep learning models and convolutional neural networks (CNNs) are transforming both endoscopy and histopathology. For example, an AI-assisted tandem study showed a reduction in gastric neoplasm miss rates from 27.3% to an incredible 6.1%. Furthermore, AI tools have demonstrated the ability to outperform human experts in objectively scoring gastritis severity. However, it is crucial to remember that AI is currently a decision-support tool that still requires human oversight, especially in complex clinical realities.

The "Endo-Histo-Omics" Paradigm: We dive into the future of integrated diagnostics, such as the HTML (Highly Trustworthy Multi-omics Learning) framework. This self-adaptive model dynamically tailors its computational architecture to prioritize the most reliable data from a specific sample's unique multi-omics and visual profile.

Real-World Roadblocks: Before this becomes the standard of care at your local clinic, the medical field must overcome four main pillars of limitations: AI hurdles (data annotation burdens, black-box models), omics constraints (high costs, tiny biopsy sizes), integration complexity (lack of standardized software frameworks), and ethical/regulatory challenges (data privacy, algorithmic bias, and accountability).

Conclusion: The traditional intuition of the pathologist is evolving as we transition toward personalized, multi-omics management. Keep questioning the data, exploring the mechanics of the science, and we will see you on the next episode!

Get the "Digital Pathology 101" FREE E-book and join us!

Um, imagine going in for a routine stomach issue. You know, you go to the clinic and...

Right. Just the standard protocol.

Exactly. You undergo an endoscopy, the doctor takes a gastric biopsy, and then you just wait.

You spend days in this kind of uh suspended animation.

Yeah. Waiting for a pathologist to look at a glass slide and give you a simple yes or no answer about an ulcer or a tumor. But consider a scenario where instead of that basic answer, the doctor hands you a customized molecular blueprint: a comprehensive map of exactly what is happening in your cells,

right? And exactly how to treat it. And uh looking at your visual backdrop right now, it perfectly captures what we're talking about today.

A transition.

Yeah. Seeing that classic medical library blending into this glowing digital data stream. It really sets the scene.

It is a fitting visual, I think, for the current inflection point in diagnostic medicine.

Which brings us to the clear overview for today's deep dive. We are unpacking a fascinating 2026 review paper published in the journal Biomedicines by Nasar Alwahaibi.

It is a massive structural shift we are looking at.

It really is. The mission today is to understand how the traditional old school gastric biopsy is getting a massive sci-fi level upgrade by combining two incredibly powerful forces, multi-omics technologies and artificial intelligence.

Okay, let's unpack this because to measure the trajectory of this tech, we have to establish the baseline of the status quo, right? We are talking about the era of judging a cell by its cover.

The historical standard here is histomorphology. I mean, for over a century, the diagnostic foundation has been remarkably static.

Uh looking at things under a microscope basically.

Exactly. A tissue sample is acquired, formalin-fixed, paraffin-embedded, sliced incredibly thin, stained, usually with a hematoxylin and eosin stain, and then evaluated under a brightfield microscope.

It's an exercise in visual pattern recognition, purely visual. The pathologist is looking at cellular architecture to identify H. pylori, grade gastritis, or spot cancer,

which has worked for a long time, but it's like trying to understand a really complex movie plot just by looking at a few still photographs.

That is a great analogy. You know, you can see the setting, maybe identify the main actors, but you are totally blind to the dialogue, the subtext, the musical score. You have no way to predict how the narrative is going to end.

This raises an important question regarding human limitation. The paper outlines several hard, factual challenges with this old way of doing things.

Starting with overlapping features, right?

Yes. Distinct molecular subtypes of gastric cancer can look virtually identical under a standard microscope. Then you have sampling limitations. A random biopsy might just miss the most aggressive focal point of a tumor because they're just grabbing a tiny pinch of tissue.

Exactly. But the most disruptive variable in standard practice is interobserver variability,

which just means human disagreement. Two experts are looking at the exact same slide and disagreeing.

It happens frequently, especially in borderline cases, like differentiating between severe inflammation and true early-stage cancer. It provides an incomplete view of the biological complexity, it completely misses the dynamic microenvironment, and it fails to predict how the disease will progress. And this directly connects to you, the listener, because diagnostic uncertainty means delayed treatments or, worse, missed early-stage cancers.

Ambiguity translates to inefficiency in patient management,

right? Maybe you get aggressive surgery when a simple endoscopic procedure would have worked or vice versa. And that friction is driving us toward the first major pillar of this upgrade, the multi-omics revolution. We are finally zooming into the molecular blueprint,

moving far beyond baseline genomics.

Yeah, I love the term omics. It's not just genomics and DNA anymore. We are talking transcriptomics for RNA, epigenomics, proteomics, lipidomics, metabolomics, and microbiomics.


Genomics gives you the static blueprint, but the multiomics approach shows you the active construction site.


That is a perfect way to put it. And the paper highlights this specific study, Kamio et al., that really grounds this in reality. Can you break down what they found regarding early- versus late-onset gastric cancer?


Absolutely. The Kamio study is a definitive demonstration of why just looking at morphology isn't enough. They analyzed patients under 39 years old, the early-onset group, versus patients over 65.


And visually, the tumors look the same.


Visually, the cellular architecture might appear identical, but when they ran deep genomic profiling, they found entirely different mutation spectra.


Specifically with the TP53 gene,


right? They found different TP53 mutation hotspots. The really crucial detail here is a specific mutation called R175H.


R175H.


Exactly. That mutation was significantly more common in the early-onset cases. And here's why that matters clinically. Tumors with that specific R175H mutation showed a much greater sensitivity to a chemotherapy drug called oxaliplatin.


Wow. So they could track the actual real world results of knowing that mutation was there.


They could. They measured the time-to-treatment failure for patients with that specific mutation profile on oxaliplatin. The treatment held off the disease for 17.3 months.


17.3 months


versus just 7.0 months for patients without it.


That is an incredible aha moment. Over 10 extra months of disease control just from knowing the molecular vulnerability. It proves a biopsy isn't just about identifying cancer anymore. It's about knowing exactly which drug will fight it best.


It creates a highly predictive pharmacological road map. But we do need to ground this and clarify what is actually feasible today.


Right? What's in the clinic now versus what's still sci-fi.


Currently, we have targeted DNA and RNA panels and immunohistochemistry. Looking for a known, actionable mutation is available today.


But the massive untargeted whole-genome sequencing for every biopsy


that is still research-grade or future tech. The costs are high, and honestly, a standard biopsy is maybe 2 to 3 millimeters wide.


So tiny,


very tiny. There's often just not enough physical material to run every single omics panel at once.


So there's a physical bottleneck. But even if we could sequence all of that instantly, human brains just aren't wired to process millions of interacting data points on a slide. Here's where it gets really interesting.


The computational engine.


Exactly. Artificial intelligence acting as the ultimate medical co-pilot. The paper talks about convolutional neural networks, or CNNs, being trained on thousands of endoscopic images.


They are feeding these deep learning models vast amounts of video feeds and digitized pathology slides.


And the stats are mind-blowing. The review cites a randomized controlled trial, a tandem study. When doctors just used standard white-light endoscopy, their miss rate for gastric neoplasms was 27.3%.


Over a quarter of lesions missed.


But with the AI assisted overlay, that miss rate dropped down to an incredible 6.1%.
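To put those two miss rates in perspective, here is a quick back-of-the-envelope check. This is our own arithmetic on the figures quoted in the episode, not a number reported in the paper itself:

```python
# Relative reduction in gastric neoplasm miss rate with AI assistance,
# computed from the two rates quoted above: 27.3% without AI, 6.1% with it.
baseline = 27.3       # miss rate (%) with standard white-light endoscopy
ai_assisted = 6.1     # miss rate (%) with the AI-assisted overlay

relative_reduction = (baseline - ai_assisted) / baseline * 100
print(f"{relative_reduction:.1f}% relative reduction")  # roughly 78%
```

In other words, AI assistance eliminated close to four out of every five lesions that would otherwise have been missed in that trial.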


What's fascinating here is how AI functions across the entire diagnostic pathway. It isn't just an alarm system during the scoping.


It's analyzing the tissue afterward too.


Precisely. Take the Zang et al. study mentioned in the review. They used an improved Mask R-CNN architecture for early gastric cancer detection.


Mask R-CNN. So, it's doing pixel-level segmentation.


Yes. Outlining the exact shape of the anomaly. And it achieves a 92.9% precision rate and a 94.1% F1 score


which means it's highly sensitive but also highly specific. So, it's not just flagging every random cell as cancer.
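For listeners who want to see how those two numbers fit together: the F1 score is the harmonic mean of precision and recall, so quoting precision and F1 together pins down the implied recall. A minimal sketch, using our own arithmetic on the figures quoted above (the helper name is ours, not from the paper):

```python
def recall_from_precision_f1(precision: float, f1: float) -> float:
    """Solve F1 = 2*P*R / (P + R) for recall R, given precision P and F1."""
    return f1 * precision / (2 * precision - f1)

# Figures quoted above: precision 92.9%, F1 score 94.1%
implied_recall = recall_from_precision_f1(0.929, 0.941)
print(f"{implied_recall:.3f}")  # about 0.953, i.e. roughly 95% sensitivity
```

That implied recall of about 95% is what backs up the "highly sensitive but also highly specific" reading of the model.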


Exactly. Minimizing false positives. And then the Loud study applied an AI model to score gastritis severity, things like atrophy and intestinal metaplasia, using the Kyoto gastritis score.


And how did the AI do against the humans there?


It objectively outperformed the experts in scoring the severity. It introduces reproducible standardization.


That is wild. But there is a crucial caveat in the text we need to hit. AI is a decision support tool.


That is vital to understand. It is an amazing second set of eyes that never gets tired. But prospective validation shows that when these multiclass CNN models hit the chaotic reality of a clinic,


bad lighting, mucus, weird angles.


Exactly. In those complex scenarios, they are still currently inferior to the absolute top human experts. Human oversight is still entirely required.


Okay. So, we have the multiomics zooming in on the molecules and we have AI spotting the visual patterns. What happens when we smash these two technologies together?


You get what the paper calls the endo-histo-omics paradigm. The ultimate mashup.


It is the frontier. And the paper highlights a specific framework for this called HTML.


Highly trustworthy multiomics learning.


Yes, the HTML framework. It completely abandons the one-size-fits-all model. It uses something called self-adaptive dynamic learning,


which means it actually tailors its own architecture to a specific sample's unique multi-omics profile. If the visual image is blurry, but the RNA data is strong, it pivots to rely on the RNA. That is exactly how it works. It dynamically adjusts to prioritize the most reliable data.
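To make that "pivot to the most reliable data" idea concrete, here is a deliberately simplified toy sketch of reliability-weighted fusion: each modality's prediction contributes in proportion to an estimated reliability score. This is our own illustrative assumption, not the actual HTML framework's architecture, and the function name and example numbers are hypothetical:

```python
def fuse_modalities(predictions: dict, reliabilities: dict) -> float:
    """Weighted average of per-modality predictions by estimated reliability."""
    total = sum(reliabilities.values())
    return sum(p * reliabilities[m] / total for m, p in predictions.items())

# Hypothetical sample: the histology image is blurry (low reliability),
# so the fused call leans on the strong RNA signal instead.
preds = {"histology": 0.40, "rna": 0.90}   # per-modality tumor probability
relis = {"histology": 0.20, "rna": 0.80}   # estimated data reliability
print(round(fuse_modalities(preds, relis), 2))  # 0.8, dominated by the RNA evidence
```

The real framework learns and adapts these weightings dynamically per sample, but the intuition is the same: unreliable inputs get discounted rather than averaged in blindly.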


So what does this all mean for the patient waiting for their results?


It drastically reduces the time to diagnosis. By automating the slide quality control, segmenting the cells, and pre-flagging the suspicious regions, the AI does the heavy lifting instantly.


There was one study in the paper where it analyzed images in seconds. Right.


47 seconds.


47 seconds. That accelerates the downstream clinical decisions and massively reduces patient anxiety. You aren't waiting days anymore.


But, and we do have to pump the brakes here.


Yeah. Let's address the real world limitations. Figure two of the paper outlines the roadblocks to the future. Making sure we aren't just blinded by techno optimism.


There are four main pillars of limitations we have to clear. First are the AI limitations.


The heavy burden of annotating the data.


Doctors have to manually label thousands of images to train these models. Plus, the models are black boxes. They lack interpretability and they carry a bias against rare lesions because they just haven't seen enough of them in the training data.


Right? And the second pillar involves omics limitations: high costs, the tiny amount of biopsy material we talked about earlier, and massive infrastructure demands.


Most local clinics do not have the IT infrastructure to process terabytes of genomic data.


Which leads to the third pillar, integration challenges,


the sheer biological complexity of merging visual spatial data with molecular spreadsheets is incredibly difficult, especially without standardized software frameworks.


Different hospitals use completely different file formats. And finally, the fourth pillar, ethical and regulatory hurdles,


data privacy, algorithmic bias, equity of access. Who actually gets this expensive tech?


And the major legal question: who is accountable for an AI-assisted medical decision if it's wrong? Addressing these specific hurdles is the only way this moves from a fancy research tool to a standard of care at your local clinic. If we connect this to the bigger picture, we are fundamentally shifting from descriptive morphology to data-driven precision gastroenterology.


The gastric biopsy is becoming a dynamic platform for personalized management.


A launchpad for your specific therapeutics.


Exactly. Well, thank you for joining us on this deep dive today. Before you go, I want to leave you with a provocative thought that builds on what we've discussed. As AI models eventually learn to instantly link the visual shape of a cell to its hidden molecular mutations, effectively predicting the future of a disease just by looking at a standard digital slide, will the traditional intuition and gut feeling of a veteran doctor become obsolete, or will it evolve into something entirely new as they become managers of machine intelligence?


The evolution of clinical intuition is going to be fascinating to witness.


It really will be. Keep questioning the data. Keep exploring the mechanics of the science and we will see you on the next deep dive.