Digital Pathology Podcast

229: Spatial Omics and AI for Clinically Actionable Cancer Biomarkers

Subscriber Episode | Aleksandra Zuraw, DVM, PhD | Episode 229

This episode is only available to subscribers.

Digital Pathology Podcast +

AI-powered summaries of the newest digital pathology and AI in healthcare papers


Paper Discussed in this Episode:

Spatial omics and AI for clinically actionable cancer biomarkers. Reitsam NG. PLoS Med 2026; 23(4): e1005049.

Episode Summary: In this deep dive, we explore how artificial intelligence and spatial omics are fundamentally rewriting the rules of cancer diagnostics. We break down a 2026 editorial that challenges a deceptively simple question driving modern oncology: Is a tumor "positive" or "negative" for a biomarker? As targeted cancer therapies evolve, this binary thinking is failing us. We discuss why mapping where and how much of a therapeutic target exists is crucial, and how AI is stepping in to solve the reproducibility issues human pathologists face when making borderline diagnostic calls.

In This Episode, We Cover:

The Illusion of "Positive" vs. "Negative": Why the basic premise of modern cancer therapies—like antibody-drug conjugates (ADCs)—often falls apart in reality when we ignore the spatial heterogeneity of a tumor.

The Power of Computational Pathology: How AI is transforming subjective, qualitative estimates into continuous, reproducible data, scaling the quantification of complex biomarkers like PD-L1 and TROP2.

"Virtual" Proteomics: The fascinating concept of using AI models to infer high-dimensional spatial information and immune maps directly from standard, routine H&E stained slides.

The HER2 Bottleneck: A real-world look at the breast cancer drug T-DXd, which now demands pathologists distinguish between "HER2-low" and "HER2-ultralow". While human agreement drops below 70% at these fuzzy decision boundaries, AI steps up with a staggering ~97% sensitivity.

Three Shifts for the Future: Why clinical trials and routine practice must adopt continuous measures (like percentage of expressing cells), demand longitudinal repeat testing at disease progression, and utilize adaptive trial platforms.

Bridging the Gap to Reality: The massive hurdles preventing widespread adoption—such as equipment costs exceeding $250,000 and massive data storage needs. We discuss why a hybrid workflow that bolsters routine pathology with deployable AI is the best path forward to prevent widening global health disparities.

Key Takeaway: The future of precision oncology isn't just about finding new drug targets; it’s about fundamentally changing how we measure them. By moving away from rigid binary thresholds and using AI to map the continuous, spatial reality of tumors, we can unlock the true potential of targeted therapies. However, achieving this diagnostic ecosystem requires overcoming significant financial and systemic hurdles—such as updating reimbursement pathways and proficiency testing—to ensure these life-saving insights are accessible across all healthcare settings.

Get the "Digital Pathology 101" FREE E-book and join us!

If you've ever uh looked at a medical diagnosis, you probably have this, I don't know, this deep-seated expectation of precision.


Oh, absolutely. Like it should be engineering,


right? Exactly like engineering. You break your arm, the X-ray shows that jagged white line, and the doctor just points at it and says, "Well, there it is.


Broken or not broken, it's a clean label."


Yeah. And honestly, that is incredibly comforting. We really like our problems to be visible and neatly categorized. But then you step into the world of modern oncology and pathology and suddenly, um, that comforting X-ray machine feels like it is completely malfunctioning.


Yeah, the landscape gets incredibly murky very fast.


It really does. So, welcome to the Digital Pathology Podcast, Trailblazers. Today, we are throwing out the old rule book. We're exploring how modern oncology is completely rethinking the way we look at tumors.


Moving away from those outdated, comfortable yes or no labels.


Exactly. Moving toward continuous, highly detailed spatial maps. And to guide us today, we are looking at a really brilliant new paper by Nic G. Reitsam. It was recently published in PLoS Medicine.


Yeah. It's titled "Spatial omics and AI for clinically actionable cancer biomarkers," and it's just a fantastic piece of literature because it attacks this core problem we face in the clinic every single day.


Right? So let's set the stage for you. Right now, cancer care relies heavily on this deceptively simple question. Is a tumor positive or negative for a specific biomarker?


Just a single binary label.


Yep. And that label essentially decides if you, the patient, get lifesaving targeted treatments. I'm talking about therapies like antibody-drug conjugates


or ADCs,


which are fascinating. Let's actually break down what an ADC is mechanically because it really highlights why this positive or negative label is so crucial.


Go for it.


So an ADC is essentially a microscopic smart bomb. You have an antibody, which acts like a homing pigeon, right? And it is engineered to seek out a very specific protein flag, an antigen, on the surface of a cancer cell. Okay.


And attached to that homing pigeon via a chemical leash that we call a linker is a highly lethal chemotherapy drug, the payload.


Okay. Let me make sure I'm tracking. So, the homing pigeon finds the flag on the cancer cell. It attaches, the cell absorbs it, and then boom, the chemical leash breaks, detonating the chemo directly inside the cancer cell.


Exactly. That is the entire promise of precision medicine. You get greater destruction exactly where the target is and significantly less collateral damage to your healthy tissue, where the target isn't.


Makes total sense in theory.


In theory, yes. But here is the central issue we're unpacking today. That simple positive-or-negative assumption, the idea that a tumor simply has the flag or doesn't have the flag, is fundamentally failing in clinical reality


because um tumors aren't just uniformly painted walls, right? They don't all fly the exact same flag at the exact same height.


Not at all. They are highly heterogeneous. So when we give a targeted therapy like an ADC, the drug's effectiveness doesn't just rely on the mere presence of a target somewhere in your body,


right?


It relies on exactly where that target is expressed, at what specific volume, and in what surrounding microenvironment.


And traditionally, our diagnostic approach has been well, pretty brutal, honestly. We've essentially been mashing the tissue together to find an average.


The bulk assay approach.


Yeah. Using a traditional bulk assay is like taking a blender to a complex, beautiful mosaic, pouring out the color dust and then trying to guess what the picture was. You know what colors are in the mix.


You know there's red and blue in there somewhere,


right? But you have destroyed the entire structure,


which is a massive problem because in cancer biology, the architecture is the vulnerability. When you use a bulk average-based test, you completely gloss over the underrepresented regions of the tumor,


hiding spots.


Exactly.


And those tiny, specific niches, the little clusters of cells that maybe didn't get ground up perfectly in that metaphorical blender, those are often the exact regions responsible for tissue invasion, metastatic seeding, or a relapse years down the line.


Oh wow. So the blender approach actively hides the most dangerous parts of the cancer.


It really does.


But I mean if we don't average it out, if we don't just blend it up to get a baseline reading, how do we actually map something that complex in a clinical setting? I imagine pathologists don't have all day to stare at one slide and count every single cell.


No, they definitely don't. And this is where spatial omics comes to the rescue. The paper dives into tools like spatial transcriptomics and multiplex spatial proteomics.


Okay, spatial transcriptomics. What's the mechanism there?


So, it basically lays a microscopic grid over an intact slice of tissue. Instead of grinding the tissue up, the machine reads the RNA, which are the genetic instructions the cell is actively using, right where the cell sits on the slide.


Wow.


Yeah. It works at a single cell resolution.


So, it's geography. We are mapping the terrain instead of just taking a census.


Exactly. We can interrogate exactly where target expressing cells live. We can see how they interact with their immediate cellular neighborhood. And crucially, we can see if there are reservoirs of target negative cells hiding out in the tissue.


Just waiting to seed a recurrence after the therapy successfully kills off all the positive cells.


You got it. Reitsam's paper actually highlights some fascinating specifics here regarding recent spatial atlases of brain tumors, both in adults and children.


Right. They looked at antigens that we actually try to target with drugs, right?


Yes. You'll hear the alphabet soup in the literature. Um, things like B7-H3, EGFR, or IL13RA2.


But for our trailblazers listening, just think of these as different types of molecular antennas sitting on the outside of the cancer cells.


A perfect way to visualize it. And the spatial maps reveal that these antennas might only exist in a tiny, isolated subpopulation of the tumor cells,


which you totally miss in a blender


completely. And if we connect this to the bigger picture, it's not just about finding those antennas on the cancer cells themselves. It's about the physical interactions between the cancer cells, the immune cells trying to fight them, and the stromal cells,


which are basically the structural support cells of the tissue. Right.


Yes. And depending on how those cells are physically arranged in space, those interactions can either help your immune system attack the tumor or


or they can build a literal fortress,


right? A physical and chemical fortress that helps the tumor evade the attack entirely.


Okay, so we now have the biological technology to create these incredibly complex, beautiful spatial maps. We can literally see the fortress. But reading the paper, that seems to lead us straight into a massive logistical wall.


The data bottleneck.


Yeah. Generating one beautiful map in a heavily funded research lab is one thing, but how on earth do we make this data reproducible and scalable across different community clinics, different hospitals, and over, you know, different time frames?


That is what we call the pre-analytical problem. And it is a logistical beast. Before a slide even gets placed under a microscope, it has to be physically prepared. The tissue has to be fixed in chemicals to stop it from degrading. It has to be stained with specific dyes so we can see the structures, and it has to be scanned into a digital format.


And every single one of those manual steps introduces variation.


Every single one.


Right. Because a lab in New York might leave the tissue in the fixing chemical


for an hour longer than a lab in London.


Exactly.


Or different digital scanners might interpret the color purple slightly differently.


Like how two different TV screens look different in a living room.


Perfect analogy. Add to that the human element. The interobserver variability is huge.


You mean two doctors disagreeing?


Yeah. Two brilliant, highly trained pathologists might look at the exact same stained slide and give two slightly different categorical scores. It's not a lack of skill. It's just the fundamental limit of human subjectivity when you're trying to quantify thousands of microscopic data points by eye. And this brings us to artificial intelligence and computational pathology.


Yes, the computational cure.


AI isn't just coming in to do the same job a human does, just faster, right? It steps in to estimate continuous measures. Instead of a human squinting and saying, "Ah, this looks like a category 2 positive."


The AI says exactly 43.2% of these cells are positive, and here is the exact gradient of intensity across the entire tissue section.


It replaces subjective, chunky categories with scalable, quantitative data, and we're already seeing this roll out in the clinic.
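To make that "continuous measure" idea concrete, here is a minimal toy sketch, not from the paper or any vendor's product, of how per-cell stain intensities could be turned into a percent-positive score. The intensity values and the 0.5 cutoff are invented purely for illustration.

```python
import random


def percent_positive(cell_intensities, threshold=0.5):
    """Return the percentage of cells whose stain intensity exceeds threshold."""
    if not cell_intensities:
        return 0.0
    positive = sum(1 for i in cell_intensities if i > threshold)
    return 100.0 * positive / len(cell_intensities)


# 1,000 simulated cells with patchy, mostly faint staining (squared to skew faint)
random.seed(0)
cells = [random.random() ** 2 for _ in range(1000)]
score = percent_positive(cells)
print(f"{score:.1f}% of cells positive")  # a continuous readout, not "category 2"
```

The point is the output type: a number on a continuous scale rather than a coarse category, so two labs running the same algorithm on the same slide get the same answer.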


We are. The paper notes how quantitative computational scoring is currently being used for the PD-L1 biomarker in non-small cell lung cancer,


and PD-L1 is that protein that basically acts as a stop sign for the immune system, telling it not to attack.


Exactly. So measuring exactly how many stop signs are present is crucial for deciding if immunotherapy will actually work.


Another great example from the sources is TROP2, which is another target for those ADC smart bombs we talked about earlier.


TROP2 is notoriously tricky.


Yeah, the physical staining for it is highly heterogeneous. It's incredibly patchy. And furthermore, the stain mostly localizes right on the cell membrane, the ultra-thin outer skin of the cell.


Right. Rather than lighting up the whole center of the cell.


Exactly. So using traditional, human-estimated cutoffs to judge a faint ring of color around a patchy cluster of cells, I mean, that's an incredibly fragile diagnostic method.


It's genuinely unfair to ask a human visual system to accurately and consistently quantify thousands of faint membrane stains across a massive tissue slide


day in and day out without fatigue.


Right? AI algorithms simply don't get eye strain. They look at pixel intensity mathematically.


But um, I want to push back, or at least clarify something here, because this is where a lot of professionals get nervous. Are we just using AI as a hyperfast calculator to count cells, or is the AI actually extracting biological meaning that a human pathologist simply cannot see, no matter how much time they have?


What's fascinating here is that it's absolutely the latter. AI is seeing patterns humans fundamentally cannot process. And that brings us to the bleeding edge of this entire field.


Virtual omics.


Virtual omics.


Okay. Break the mechanics of this down for me. What is it?


AI models are now learning to infer high-dimensional, hidden molecular information directly from routine, everyday H&E-stained slides.


H&E, that stands for hematoxylin and eosin. For any trailblazers who haven't looked through a microscope recently, these are the standard pink and purple slides that have been the absolute bedrock of pathology for over a hundred years.


They really have.


The hematoxylin dye binds to DNA and turns the cell nucleus purple. And the eosin dye binds to proteins and turns the rest of the cell body pink.


That's the basic chemistry. Yes. But to a human, it's just pink and purple shapes.


But these AI models are being trained on paired data sets.


Okay. Meaning what?


Meaning the computer looks at the standard pink and purple H&E slide, and then it looks at the highly complex, massively expensive molecular data for that exact same piece of tissue. And over millions of iterations, the AI learns the hidden mathematical associations.


Wait, so the AI realizes that a specific microscopic change in the texture of the purple nucleus, combined with a slight change in the geometry of the pink space between the cells, things a human eye just perceives as a blur, actually mathematically correlates with a specific genetic mutation?


Precisely. It predicts the underlying transcriptomic or proteomic reality just from the physical shapes and textures on the slide. It predicts the unseen.
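The paired-dataset idea can be sketched in miniature. Real virtual-omics systems train deep networks on whole-slide images; the single "nuclear texture" feature, the toy numbers, and the plain least-squares fit below are hypothetical stand-ins, just to show the shape of the learning problem: learn the morphology-to-molecule association on paired samples, then predict the molecular value for H&E-only samples.

```python
def fit_linear(xs, ys):
    """Least-squares slope/intercept for one feature (the paired training step)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return slope, my - slope * mx


# Paired training set: a morphology feature measured on the H&E slide vs. the
# expensive molecular readout for the same region (all numbers invented).
texture = [0.1, 0.3, 0.5, 0.7, 0.9]
protein = [1.2, 2.1, 3.0, 3.9, 4.8]
slope, intercept = fit_linear(texture, protein)

# "Virtual" prediction for a new H&E-only region with texture score 0.6
predicted = slope * 0.6 + intercept
print(f"predicted protein level: {predicted:.2f}")
```

The caveat discussed next in the episode applies exactly here: the prediction is only as good as the morphology-to-molecule correlation the model managed to learn.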


That's wild.


It is. The paper actually highlights an incredible recent study in lung cancer where this was put to the test. The AI combined standard H&E morphology with virtual spatial proteomics to predict whether a patient would respond to immunotherapy.


And the results were staggering. It outperformed the clinically established biomarkers.


It did.


The AI's prediction off a cheap pink and purple slide was actually better at predicting patient survival than doing a costly physical test for PD-L1 expression or measuring the tumor mutational burden,


which involves physically counting how many genetic mutations the tumor has. It is a total paradigm shift. We are extracting $100,000 worth of molecular insight from a $3 piece of glass.


But this raises a massive interpretive dilemma, and I really want to zero in on this because it's a fascinating clinical gray area mentioned in Reitsam's paper. The sources note that the actual statistical correlation between these virtual AI predictions and the true physical protein expression in the tissue is often only moderate or even weak.


Yeah, that's the catch. The morphology-based approaches have ceilings. If a protein marker doesn't physically alter the shape or texture of the cell in a way the AI can mathematically detect, the prediction of the specific protein level falls apart.


So, here is the clinical dilemma for you trailblazers. Let's say an AI looks at a standard H&E slide and accurately predicts that you will survive, or that you will respond beautifully to a specific drug.


The math says it works,


right? But when we actually grind up the tissue and physically test it, the biological mechanism, the protein the AI said was driving this, isn't actually there at the levels the AI predicted.


The prediction is right, but the biological mechanism is wrong.


Exactly. Should we trust it? Do we make life-altering treatment decisions based on a black box that gets the clinical outcome right but the biological mechanism wrong?


It forces the clinical world to ask what we value more, mechanistic truth or predictive power. If the virtual assay accurately predicts clinical benefit, which is what you and your doctor actually care about, should it be integrated into the workflow, even if we don't fully understand how it arrived at the answer?


It's a tough call.


It is. The paper heavily suggests that while this predictive power is incredibly promising, it requires deep critical validation. We cannot blindly abandon biological reality just because an algorithm boasts a high survival prediction rate.


We ultimately need to know why a treatment works. Otherwise, we're flying blind when the AI eventually makes a mistake.


Absolutely.


But, you know, to ground this whole discussion in immediate clinical reality, we don't even have to look far into the future. We can look at breast cancer right now. Therapeutic advances are actively breaking our existing human-driven biomarker frameworks today.


Oh, this is playing out right now with the HER2 reclassification crisis.


Let's look at the DESTINY-Breast06 trial. This is a perfect example of pharmaceutical technology outpacing human diagnostic capability.


It was a massive phase 3 trial involving that ADC smart bomb we discussed earlier, specifically one called trastuzumab deruxtecan, or T-DXd.


Okay.


Historically, the HER2 protein was treated as a strict binary. Your tumor either had a massive amount of HER2 antennas, meaning you were HER2-positive and got the drug, or you didn't, meaning you were HER2-negative and got nothing. But T-DXd is such an effective smart bomb, and its chemical leash is designed so perfectly, that it has something called a bystander effect.


Yes.


When it detonates inside a cell with a HER2 flag, the chemotherapy leaks out and kills the neighboring cancer cells too, even if they don't have the flag. And because of this, the drug worked for patients classified as HER2-low.


And the data went even further. The trial showed exploratory but directionally consistent benefits for patients with HER2-ultralow disease,


meaning they barely had any flags at all, but the drug still found a way to work.


Exactly. Suddenly, the binary is completely destroyed. Pathologists are no longer just looking for a clear positive or negative. They are forced to distinguish between a cell being HER2-zero, meaning absolutely no flags, and HER2-ultralow, meaning maybe a tiny, faint dusting of flags,


which is asking a human eye to do the biologically impossible.


It really is.


Think of it like this. It's like asking you to distinguish between 50 shades of gray with your naked eye under varying lighting conditions in different rooms versus just using a calibrated digital light sensor.


And the clinical data backs up that analogy perfectly. At these critical low-expression thresholds, the boundary between zero and ultralow, human agreement drops to 70% or lower.


Wow.


Yeah. Nearly a third of the time, leading experts disagree on a classification. And that single human disagreement determines whether or not a patient receives a highly effective, life-prolonging drug.


That is a diagnostic coin flip you do not want to be on the wrong side of.


Definitely not.


But what happens when you point AI at this specific physical problem?


When we use AI-based immunohistochemistry analysis,


it has demonstrated a pooled sensitivity of about 97% for identifying patients eligible for T-DXd.


It only misses about 3% of eligible cases.


Exactly. The human visual limit is clear, and AI precision easily surpasses it at these borderline decision boundaries, because it is counting pixels, not making visual estimates.
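To unpack what a pooled sensitivity of roughly 97% means, here is the underlying arithmetic, with patient counts invented purely for illustration.

```python
def sensitivity(true_positives, false_negatives):
    """Fraction of truly eligible patients the test correctly flags."""
    return true_positives / (true_positives + false_negatives)


# Suppose 1,000 patients are truly eligible for the drug and the AI flags 970
# of them (hypothetical counts, not trial data):
s = sensitivity(true_positives=970, false_negatives=30)
print(f"sensitivity: {s:.1%}")  # -> 97.0%, i.e., about 3% of eligible cases missed
```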


So what does this actually mean for the clinicians sitting at the desk trying to treat you? Reitsam paints a picture of the future pathology report, and it is fundamentally different. Instead of a piece of paper that just says HER2-low in bold font, the clinician receives a comprehensive spatial phenotype.


The report of the future reads like a meteorological weather map. Instead of a single word, it says, um, 15% of tumor cells show weak but mathematically detectable expression. Furthermore, these specific cells are clustered in a stroma-rich, immune-excluded region of the tumor.


That is continuous data. It doesn't just tell the doctor the target is physically there. It tells them if the physical drug can actually reach the tumor


or if it's going to be blocked by that physical stromal fortress,


right? It tells them if the immune cells have access. It tells a complete dynamic biological story.


Okay, but wait, if continuous spatial AI tracking is the undisputed future of oncology, we have to talk about the massive elephant in the room. Cost inequity.


Cost inequity. Because right now you're describing a future where we need gigabytes of data and supercomputers just to read a single tissue slide.


Yeah.


How on earth does a rural community hospital afford that?


The equity challenge is brutal. High-cost spatial platforms often exceed a quarter of a million dollars, $250,000 per single instrument. And that makes sense when you look at the mechanics.


Yeah. These machines use complex cyclic immunofluorescence or barcoded molecular probes. It can take days just to run one slide through the machine.


And beyond the hardware, you need massive server infrastructure just to hold the sheer volume of data generated. You need dedicated bioinformaticians on staff to translate the raw data into something a doctor can actually read.


For low- and middle-income countries, or even just regional community hospitals, this is entirely inaccessible. It threatens to widen health disparities on a global scale if we aren't careful.


We can't just invent life-saving diagnostic technology and then shrug and say, "Well, good luck affording it." So, what's the practical solution? Here, Reitsam actually advocates for a hybrid workflow.


He outlines a very pragmatic two-pronged approach. First, we must aggressively deploy robust, low-cost AI assays built on routine H&E pathology for the masses,


right?


If an AI can look at a cheap, $3, widely available slide and give a highly accurate quantitative readout of the tumor microenvironment, that democratizes precision medicine immediately.


We use the AI to pull maximum diagnostic value out of the cheapest, most standard test available.


Exactly. Then, on the other hand, we reserve the incredibly expensive, high-dimensional spatial single-cell omics for specialized academic centers and massive clinical trials.


We use the big $250,000 machines to calibrate the AI, to track tumor evolution deeply over time, and to discover the next generation of drug targets.


It's the only way it scales.


Technology and workflows are one thing, but what has to actually change on an administrative level to make this a reality? Because science moves at the speed of light, but health care policy famously moves at the speed of a glacier.


The policy updates needed are massive. First, we need established reimbursement pathways for these AI assays. Right now, billing codes are built for human labor.


Interesting.


Yeah. If an algorithm provides clinical benefit and saves a patient from receiving the wrong toxic drug, the health care system needs a mechanism to actually pay for that algorithmic analysis.


That makes a lot of sense. What else?


We need regulatory frameworks for repeat testing, because tumors evolve. A molecular test done on the day of diagnosis shouldn't be the final word 3 years later when the cancer relapses.


Totally.


We also need multicenter ring trials, which are essentially massive proficiency tests across different global labs, to ensure that an AI reads a slide the exact same way in Berlin as it does in Boston or Mumbai.


And most importantly, equity guard rails.


We need access benchmarks actively built into the rollout of these technologies to prevent the widening of health disparities.


Ultimately, the goal isn't just to use AI because it's novel and exciting. The goal is to build a diagnostic ecosystem where highly resolved spatial biology informs what we need to measure in the tumor and AI determines how to measure it reliably and continuously.


And that perfectly brings us to our summary of this incredible shift in modern oncology. We are moving away from static binary labels, the simple yes or no, the broken-or-not-broken X-ray mentality, and moving toward continuous, living maps of cancer ecosystems.


It's a fundamental change in how we perceive the disease.


It changes the entire paradigm, and it leaves us with a final provocative thought that builds on everything Reitsam calls for regarding continuous and adaptive trial platforms.


Yeah, let's hear it.


If we accept that a tumor is not a static object but a constantly shifting heterogeneous ecosystem and if we now have the AI tools to track it continuously and non-invasively by analyzing routine slides and virtual omics over time, right?


When does the entire concept of the baseline snapshot biopsy become completely obsolete in modern medicine?


Wow. If the disease is a feature-length movie, why are we basing our entire life-altering treatment plan on a single Polaroid taken on day one?


That is the ultimate question we have to answer next.


It really is.


And that brings us right back to where we started. The jagged white line on the X-ray is comforting, but it's just a snapshot in time. Biology doesn't stand still, and thankfully neither does our technology. Thank you for joining this discussion on the digital pathology podcast, Trailblazers. Keep pushing boundaries, keep questioning the standard protocols, and we'll catch you on the next one.