Digital Pathology Podcast
197: Optical Biopsies in Gynecologic Oncology Surgery
Episode Summary:
In this journal club deep dive, we explore a groundbreaking 2026 systematic review that challenges the traditional intraoperative frozen section. We examine how hyperspectral and multispectral imaging are fundamentally reshaping the operating room by giving surgeons real-time, molecular-level vision. What happens when we can see beyond the visible spectrum, and how do we navigate the philosophical boundary between human surgical intuition and artificial intelligence?
In This Episode, We Cover:
• The End of the "Frozen Section" Waiting Game: Why current intraoperative pathology wastes precious surgical time and how "optical biopsies" provide cellular-level insight without the need for tissue contact, contrast agents, or freezing.
• The Science of the Spectral Fingerprint: Moving beyond standard RGB monitors that limit what surgeons can see. How malignant tissues interact with light—through refraction, scattering, absorption, and fluorescence—to create unique optical signatures that our naked eyes completely miss.
• Entering the Hypercube: How the 3D data sets of spectral imaging are captured:
◦ Spatial & Spectral Scanning: High-resolution methods that unfortunately struggle with breathing patients, making them susceptible to motion artifacts.
◦ Snapshot Technology: The real-time, video-rate method that balances spatial and spectral resolution for live clinical use.
• Clinical Showdowns - Cervical and Ovarian Cancer:
◦ Cervical Neoplasia: How multispectral imaging tracks the dynamic whitening of tissue following acetic acid application, cutting false-diagnostic rates to an astonishing 1.7% compared to the 20-24% error rates of traditional methods.
◦ Ovarian Cancer: The massive hurdle of surgical blood acting as an "optical sink" that confuses sensors by causing spatial heterogeneity, and how mathematical normalization techniques correct these specific errors.
◦ The Falloposcope: A look at miniaturized technology safely navigating the fallopian tubes, combining optical coherence tomography (OCT) and multispectral imaging to detect early-stage cancers right where they originate.
• The "Black Box" and Spurious Correlations: Why feeding complex hypercube data into AI models (like CNNs and Random Forests) can be dangerous if the data sets are unbalanced. If an algorithm learns to diagnose cancer based on a spurious correlation like the glare of an OR light rather than actual biomolecular tumor markers, it will fail in new environments. We discuss the absolute necessity of Explainable AI (XAI) so surgeons can trust the biological plausibility of the machine's decisions.
Key Takeaway: The integration of hyperspectral and multispectral imaging serves as a real-time optical biopsy, offering incredible sensitivity for detecting malignancies. By pairing these tools with transparent, explainable AI, we are standing on the precipice of a new era that will drastically improve patient outcomes and force us to redefine the future of surgical intuition.
Welcome back, trailblazers. You are listening to the Digital Pathology Podcast.
Glad to be here for this one.
Yeah. And we know you are all healthcare professionals, constantly pushing the boundaries of what is actually possible in medicine. So today we're doing something a little bit special.
Right. We are doing a journal club style deep dive.
Exactly. A deep dive into a piece of literature that is quite frankly going to fundamentally reshape the future of the operating room.
It really is.
Our mission today is to unpack a highly relevant systematic review published on February 20th, 2026. This is in the journal Diagnostics (MDPI),
authored by Kiara Enenzi, Matteo Pavone, Barbara Seeliger, and a really incredible international team.
Yeah, it's a massive team effort, and the topic we are exploring today is that leap from standard image-guided surgery to real-time computer-assisted diagnosis using hyperspectral and multispectral imaging, specifically in gynecologic oncology.
And it is a remarkable synthesis of where we currently stand with this technology, because the authors screened hundreds of articles to evaluate this exact transition.
And for the trailblazers listening, the operational why here is just incredibly clear. I mean we are all intimately familiar with the frustrations of the current gold standard,
the intraoperative frozen section.
Yes, the frozen section. You are in the middle of a procedure, you resect a margin, you send the sample down to pathology, and then you just, well, you stand there and wait.
You're just watching the clock.
Literally, it eats up precious operative time. It requires really meticulous specimen preparation. And even then, we know it has suboptimal accuracy,
especially when dealing with really complex margin assessments,
Right? Or sentinel lymph node evaluations. We desperately need something faster, you know, something that provides cellular-level insight while the tissue is actually still inside the patient.
Okay, let's unpack this.
Let's do it.
Let's start with the fundamental science of how we are moving beyond standard surgical vision, because we are all used to the limitations of standard RGB laparoscopic feeds,
the red, green, and blue,
right? They only show us those RGB wavelengths which completely restricts us to what the human eye can already see anyway.
And that limitation is the bottleneck of modern surgery because biological tissue interacts with light in ways that our eyes and our standard OR monitors just completely miss.
They don't capture the full picture at all.
Not even close. When photons hit biological tissue, four distinct optical processes happen simultaneously: refraction, scattering, absorption, and fluorescence.
Okay. So, four different things happening to the light all at once.
Exactly. And these interactions alter the speed, the direction and the spectral composition of the light bouncing back. And here is the key. Malignant tissues possess radically different morphological and biochemical properties compared to healthy tissues
because they're structured differently at a cellular level.
Right? They have disrupted collagen structures, dense and chaotic vascularization, altered cellular metabolism. And because of these structural differences, tumors interact with light differently.
They leave a different signature.
They create a highly specific spectral fingerprint. And this fingerprint extends across the entire electromagnetic spectrum from the ultraviolet range all the way through the near infrared.
So, we capture this spectral fingerprint. But how does that actually translate to a screen in the OR? Because I saw the term hypercube repeatedly referenced in the paper.
Yes, the hypercube.
What exactly are we looking at when we talk about a hypercube?
Think of it as a massive three-dimensional data set. So, you have your two standard spatial dimensions, right? The X and the Y-axis,
like a normal photograph.
Exactly. Representing the physical layout of the tissue. But then you add a third dimension, the Z-axis, which is the spectral dimension. So, instead of just three colors, hyperspectral imaging or HSI captures images in dozens or even hundreds of narrow, discrete bands. We are talking about slicing the light into intervals of maybe 10 nanometers across a huge range.
That is incredibly detailed.
It is. And multispectral imaging or MSI is kind of a cousin to this. It typically captures around 10 specific, slightly wider bands that are targeted for a very particular biochemical marker.
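For listeners who want to make the hypercube concrete, here is a minimal Python sketch. It is purely illustrative and not from the paper; the cube dimensions, wavelength range, and the crude 10-band MSI downsample are all assumptions for the example.

```python
# Illustrative sketch (not from the paper): a hypercube is just a 3D array.
import numpy as np

# Hypothetical HSI cube: 256 x 256 spatial pixels, 100 spectral bands
# sampled every ~10 nm from 400 nm to 1390 nm.
wavelengths = np.arange(400, 1400, 10)                 # (100,)
cube = np.random.rand(256, 256, wavelengths.size)      # stand-in for real data

# The "optical biopsy" primitive: pull the full reflectance spectrum
# of one spatial pixel (x, y) out of the cube.
spectrum = cube[128, 128, :]                           # shape (100,)

# An MSI device would store the same scene in ~10 wider, targeted bands;
# here we crudely mimic that by keeping every tenth band.
msi_cube = cube[:, :, ::10]
print(spectrum.shape, msi_cube.shape)
```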
That sounds like an overwhelming amount of data to process.
It is a lot of data. But what's fascinating here is that this immense data density within the hypercube allows surgeons to effectively look inside the molecular structure of the tissue in real time
without contrast agents.
Zero contrast agents, without physically touching the delicate tissue, and without freezing and slicing it. You can quantify oxyhemoglobin and deoxyhemoglobin concentrations right there. Wow.
You can map out angiogenesis instantly. You can even assess the exact water, lipid, and protein content of a highly specific region of interest. It is, for all intents and purposes, an optical biopsy.
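To show what "quantify hemoglobin" can mean in practice, here is a hedged sketch of Beer-Lambert linear unmixing. The extinction coefficients and absorbance values below are made-up placeholders; a real pipeline would use tabulated molar extinction coefficients for oxy- and deoxyhemoglobin at its device's actual bands.

```python
# Minimal sketch: estimating HbO2/Hb concentrations from absorbance spectra
# via the Beer-Lambert law. All numeric values are placeholders.
import numpy as np

wavelengths = np.array([500, 550, 600, 650, 700])   # nm, assumed bands
eps_hbo2 = np.array([4.0, 5.5, 0.9, 0.4, 0.3])      # placeholder extinctions
eps_hb   = np.array([4.2, 5.0, 2.5, 1.8, 0.9])      # placeholder extinctions
E = np.column_stack([eps_hbo2, eps_hb])             # (5, 2) design matrix

absorbance = np.array([3.1, 4.0, 1.3, 0.8, 0.4])    # "measured" A(lambda)

# Solve A = E @ c for c = [c_HbO2, c_Hb] in the least-squares sense.
conc, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
total_hb = conc.sum()
sto2 = conc[0] / total_hb                           # tissue oxygen saturation
print(f"tHb={total_hb:.2f}, StO2={sto2:.1%}")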
An optical biopsy. That concept alone could completely redefine surgical oncology. But to achieve that optical biopsy, we obviously need the right hardware.
Absolutely.
And the review breaks down three primary methods for capturing that light. And it seems to come down to a pretty brutal physical trade-off between spectral resolution, spatial resolution, and time.
The classic engineering triangle,
right? So, let's look at the first method they highlight, which is spatial scanning. Sometimes called whisk broom or push broom scanning.
Right?
The paper notes this gives us incredibly high resolution spectral information, but it gathers it pixel by pixel or line by line.
And the mechanical reality of that process means it takes time. Gathering that data line by line can take several seconds to build the full hypercube,
which in a laboratory setting is perfectly fine.
Sure, benchtop physics, it's great.
But in an operating room on a living, breathing patient whose organs are shifting with every respiration and heartbeat,
several seconds is an eternity.
It really is. It makes the system highly susceptible to patient motion artifacts.
It sounds kind of like trying to take a high-definition panoramic photo on your phone while the subject is just walking around.
That is a perfect analogy.
One slight movement and the entire image gets distorted.
Exactly. If the tissue moves even a millimeter while the camera is scanning that specific line, the spectral data for that region no longer aligns with the spatial data.
So the hypercube is scrambled.
Your hypercube is essentially useless for precise margin assessment at that point.
So spatial scanning gives us the detail but it is just too slow for a breathing patient. The review then discusses the second method which is spectral scanning. Does this actually solve the time issue or does it just create a new problem?
Oh, it attempts to solve the spatial problem by capturing the high-resolution spatial information of the entire surgical field all at once.
Okay.
And to get the spectral data, it utilizes tunable filters to switch wavelengths one by one. So the camera captures a full 2D image at 400 nm, then another at 410 nm, then 420, and so on.
But wait, wouldn't the time it takes to mechanically or electronically cycle through all those specific wavelengths mean you are still vulnerable to motion artifacts?
It absolutely does. While the spatial alignment of any single frame is perfect, if the patient breathes between the capture of the 400 nm frame and the 700 nm frame,
the Z-axis is messed up.
Exactly. The Z-axis of your hypercube is now corrupted. The algorithm won't be able to accurately compare the spectral signatures of a single pixel because that pixel represents a different piece of tissue in different frames.
Which brings us to the third option they analyzed which are snapshot devices.
Right?
These capture both the spatial and the spectral data simultaneously. It happens at video rate meaning true real-time feedback for the surgeon.
Yes.
But the catch is that to achieve that kind of speed, you have to sacrifice a bit of both the spatial and the spectral resolution. You basically can't have the absolute highest definition in all three dimensions instantly.
It is a balancing act of physics. However, snapshot technology is advancing incredibly fast and for many clinical applications, the resolution they can achieve at video rate is actually proving to be more than sufficient to distinguish between healthy and malignant tissue.
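A back-of-envelope calculation makes that trade-off tangible. The exposure times below are assumed round numbers for illustration, not figures from the review.

```python
# Back-of-envelope comparison (illustrative numbers, not from the paper):
# how long each acquisition strategy needs for one full hypercube.
lines, bands = 512, 100
exposure_per_line = 0.01      # 10 ms per scanned line (assumed)
exposure_per_band = 0.05      # 50 ms per tunable-filter setting (assumed)

push_broom_time = lines * exposure_per_line          # spatial scanning
spectral_scan_time = bands * exposure_per_band       # tunable-filter scanning
snapshot_time = 1 / 30                               # one video frame at 30 fps

print(f"push-broom:    {push_broom_time:.2f} s")     # ~5 s: motion artifacts
print(f"spectral scan: {spectral_scan_time:.2f} s")  # ~5 s: Z-axis corruption
print(f"snapshot:      {snapshot_time:.3f} s")       # real-time, lower resolution
```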
Well, let's look at what actually happens when you get this technology into the clinic because the data here is staggering.
It really is.
The review screened 230 articles and narrowed it down to 29 studies and two clinical trials that met their really rigorous inclusion criteria.
And the bulk of the research right now is heavily focused on cervical neoplasia.
Yeah. Making up almost 59% of the studies and then ovarian cancer made up about 24%.
And when the authors aggregated the overall performance metrics, comparing the spectral imaging readouts against the final histological pathology, the true gold standard,
the results were broad but impressive.
The overall sensitivity ranged from 75 to 100%, and the specificity ranged from 30 to 99%.
Those are pretty wide ranges.
They are. But when you drill down into the specific tissue types and the specific technologies used, the clinical implications become very compelling.
So let's focus on cervical neoplasia first. The standard approach right now is Pap smear cytology, conventional colposcopy, and targeted biopsies.
Right? Standard of care.
And we all know that process relies heavily on the clinician's visual experience, which leads to significant interobserver variability.
Exactly.
In one comparative analysis highlighted in the review, hyperspectral imaging achieved a 97% sensitivity for detecting high-grade lesions, specifically CIN2 or greater,
which is fantastic.
To put that in perspective for you trailblazers, the standard Pap smear in that exact same comparison only managed a 72% sensitivity.
And the true power of this technology emerges when you look at how it tracks dynamic biological reactions. What do you mean by dynamic?
So during a routine colposcopy, clinicians apply acetic acid, which temporarily alters the light-scattering properties of precancerous tissue,
right? It turns white
to the naked human eye. It just looks like the tissue turns white. But with multispectral imaging, you can continuously track the precise temporal pattern of that tissue whitening across the spectrum.
So it's measuring exactly how it whitens over time.
Yes. The review highlighted a fascinating study demonstrating that high-grade CIN3 lesions exhibit significantly prolonged whitening, taking over 500 seconds to relax back to their baseline state.
Over 500 seconds,
Right. But a lower-grade CIN2 lesion takes only about 200 seconds.
So you are not just looking at a static snapshot anymore. You are quantifying a dynamic cellular reaction in real time, which completely removes the subjective guesswork of how white a lesion appears to a specific doctor on a specific Tuesday.
Exactly.
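To illustrate how that temporal pattern could be turned into a grade, here is a hedged sketch that fits an exponential relaxation curve to a synthetic whitening trace. The decay model and the threshold are illustrative assumptions, loosely anchored to the review's reported ~200-second versus >500-second relaxation times.

```python
# Hedged sketch: grading a lesion from its acetowhitening decay curve.
import numpy as np
from scipy.optimize import curve_fit

def whitening(t, a, tau, baseline):
    """Exponential relaxation of acetowhitening intensity."""
    return a * np.exp(-t / tau) + baseline

# Synthetic intensity trace sampled every 10 s for 600 s.
t = np.arange(0, 600, 10.0)
observed = whitening(t, a=1.0, tau=180.0, baseline=0.1)
observed += np.random.normal(0, 0.02, t.size)          # measurement noise

(a, tau, baseline), _ = curve_fit(whitening, t, observed, p0=(1, 100, 0))

# Intensity is back near baseline after roughly three time constants.
time_to_baseline = 3 * tau
# Crude illustrative rule: prolonged whitening suggests a higher grade.
grade = "high-grade (CIN3-like)" if time_to_baseline > 350 else "CIN2-like"
print(f"tau={tau:.0f} s, ~time to baseline={time_to_baseline:.0f} s -> {grade}")
```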
And the results of that quantification are phenomenal. The review noted that using MSI colposcopy drops the false-diagnostic rate down to an astonishing 1.7%.
It's incredible.
When you contrast that 1.7% with a false-diagnostic rate of over 20% for conventional colposcopy and over 24% for Pap smears, that represents just an incredible reduction in patient anxiety, misdiagnosis, and unnecessary invasive biopsies.
The clinical relief there cannot be overstated. However, the narrative becomes far more complicated when the authors shift their focus to ovarian cancer.
Right, let's talk about the ovaries.
The studies examining ovarian tissue also demonstrated incredibly high sensitivity, often ranging from 81 to 100%. One study utilized a multispectral spatial frequency domain imaging system, or SFDI.
SFDI. Okay.
This involves projecting specific structured light patterns onto the tissue to separate the effects of scattering from absorption. Using this, they found malignant ovaries had significantly higher total hemoglobin and reduced scattering amplitude,
which aligns perfectly with the dense angiogenesis we expect to see in really aggressive tumors. But there is a snag with ovarian cancer detection that our trailblazers need to be aware of. While the sensitivity was phenomenal, the specificity numbers were highly variable and sometimes quite low.
This raises an important question about spatial heterogeneity within the abdominal cavity. Ovarian tumors are notoriously complex. You have hypervascular regions. You have pooling surface blood, areas of necrosis.
It's a messy environment.
Very. And blood acts as a massive optical sink. It absorbs huge amounts of light, heavily confusing the optical signals bouncing back to the sensor.
So, it throws the whole reading off.
Yes. In one autofluorescence study the authors reviewed, this spatial heterogeneity caused benign endometriomas to be completely misclassified as malignant. It dragged the specificity down to a concerning 51.1%.
That is a lot of false positives that could lead to overly aggressive resections of completely benign tissue.
It is. But the researchers were able to implement mathematical normalization techniques, specifically a reflectance-based division method to actively correct for that optical heterogeneity.
They fixed it in the software, essentially. Yes. By mathematically compensating for the pooling blood, they pulled the specificity up to 68.5% without hurting the sensitivity.
So it proves the data is actually there.
Yes. But dealing with the messy reality of surgical blood remains a major engineering hurdle.
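For the curious, here is a minimal sketch of the general idea behind division-based reflectance normalization. The specific reference spectrum used in the reviewed study may differ; treat the median-spectrum choice here as an assumption for illustration.

```python
# Minimal sketch: division-based normalization to suppress the kind of
# spatial heterogeneity caused by pooled blood acting as an optical sink.
import numpy as np

cube = np.random.rand(256, 256, 60) + 0.05   # stand-in hypercube, no zeros

# The median spectrum over all spatial pixels approximates a scene-wide
# illumination/absorption baseline, including heterogeneous blood pooling.
reference = np.median(cube.reshape(-1, cube.shape[-1]), axis=0)   # (60,)

# Dividing each pixel's spectrum by the reference broadcasts over (x, y);
# afterwards, spectral *shape* differences dominate over brightness shifts.
normalized = cube / reference
print(normalized.shape)
```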
Well, before we leave the adnexa, there is a fascinating development the authors included regarding the fallopian tubes.
Oh, the falloposcope.
Yes, we know from recent pathology that most high-grade serous ovarian cancers don't actually start in the ovary. They originate in the salpingeal lumen of the fallopian tube.
Right?
So, the review detailed a feasibility study for a device they are calling a falloposcope. It combines multispectral imaging with optical coherence tomography, or OCT, which is essentially like ultrasound but uses light instead of sound waves for ultra-high-resolution depth imaging.
It's brilliant.
During standard-of-care surgery, they were actually able to safely navigate this microendoscope into the incredibly narrow fallopian tubes in vivo to capture histology-like images of the tubal epithelium. The potential for early detection and surveillance there is just staggering.
Really.
And it represents a massive engineering feat to miniaturize those sensors. But as the review makes clear, capturing this incredible data is really only half the battle,
right? Because of what we talked about earlier.
Exactly. Whether you are using a microscopic falloposcope, a snapshot camera, or a bulky spatial scanner, you're generating a hypercube. And that raw multi-dimensional hypercube data is completely unreadable to the naked human eye.
It's just a wall of numbers.
It is an overwhelming flood of numerical data spanning wavelengths we cannot perceive.
Here's where it gets really interesting, because to translate that massive block of invisible spectral data into actionable real-time surgical decision-making, we need artificial intelligence.
Yeah, AI.
The review found that nearly 45% of the included studies relied on some form of AI-based algorithm to interpret the hypercube.
And the majority of the researchers are currently relying on classical machine learning models. They are utilizing k-means clustering to group pixels with similar spectral signatures. They use linear discriminant analysis, support vector machines, and random forests.
Random forests essentially use a multitude of decision trees to basically vote on whether a pixel represents healthy or malignant tissue. Right.
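As a concrete illustration of that pixel-wise voting, here is a hedged scikit-learn sketch on synthetic spectra. In a real study the labels would come from co-registered histopathology, not the planted toy signal used here.

```python
# Hedged sketch: a pixel-wise random forest over spectral signatures,
# the kind of classical model most reviewed studies used. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels, n_bands = 5000, 60
X = rng.normal(size=(n_pixels, n_bands))          # one spectrum per pixel
y = (X[:, 20:30].mean(axis=1) > 0).astype(int)    # toy "tumor band" signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Each of the 200 trees votes healthy vs malignant for every pixel.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```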
Exactly. However, a few of the more recent studies are pushing into deep learning, utilizing convolutional neural networks, or CNNs.
Now, CNNs are the same type of visual recognition architecture that allows a self-driving car to distinguish a pedestrian from a stop sign. Correct?
In essence, yes. Instead of just looking at the spectral data of a single isolated pixel, a CNN looks at patches of pixels. This allows it to understand the spatial context, the textures, and the shapes of the tissue alongside the spectral data.
So, it sees the forest and the trees.
Exactly. In one study comparing the two approaches, the CNN achieved significantly higher tissue-type segmentation accuracy than a random forest model, simply because it could utilize a broader, more contextual range of the complex spatial and spectral information.
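To make the contrast concrete, here is an illustrative PyTorch sketch, not the reviewed architecture, of a small CNN that takes spectral patches rather than single pixels, so it can use the texture and shape context a per-pixel model never sees.

```python
# Illustrative sketch (not the reviewed architecture): a patch-based CNN
# where the spectral bands enter as input channels, like RGB but deeper.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, n_bands: int = 60, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # pool spatial context to one vector
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# A batch of eight 16x16-pixel patches, each with 60 spectral bands.
patches = torch.randn(8, 60, 16, 16)
logits = PatchCNN()(patches)           # (8, 2): healthy vs malignant scores
print(logits.shape)
```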
But the authors raise a massive red flag here regarding the black box problem, which is obviously a major concern for us in healthcare.
It's a huge issue
in these early developmental stages. A lot of these studies are working with relatively small, unbalanced data sets. And when you feed a small data set into a really powerful AI model like a CNN, the algorithm is prone to learning completely spurious correlations.
And a spurious correlation in this clinical context is highly dangerous
because it's learning the wrong lesson.
Exactly. The AI might perfectly associate a diagnostic label like malignant with an optical artifact. It might learn that a specific glare from the operating room lights or a unique reflection off a metallic surgical instrument always appears in the cancer images. It then uses that glare to diagnose the tissue rather than tracking the actual biomolecular tumor markers it is supposed to be analyzing.
So the machine is getting the right answer in the lab for the completely wrong reason. Yes,
if you deploy that same algorithm into a different hospital with different overhead lighting or different retractors, it fails completely.
If we connect this to the bigger picture, it underscores the absolute necessity of explainable AI or XAI in the regulation of medical devices.
We can't just blindly trust the machine.
As clinicians, we cannot base an irreversible surgical resection on a black-box algorithm that simply spits out a red or green heat map.
Surgeons need to know why the AI is flagging that specific tissue,
right?
We need to verify that the mathematical decision the model is making is biologically and clinically plausible. We need the system to say, "I am flagging this region because the spectral signature indicates a 40% increase in deoxyhemoglobin coupled with altered collagen scattering."
So the surgeon can actually validate the physics against their own anatomical knowledge.
Exactly. It has to be a partnership.
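One simple flavor of that explainability is to map a trained model's learned importances back to physical wavelengths, so a clinician can check whether the bands driving the decision are biologically plausible. This is an assumption-heavy illustration reusing the toy random forest approach from earlier, with a signal deliberately planted near hemoglobin-relevant bands.

```python
# Hedged XAI sketch: inspect which wavelengths drive a model's decisions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
wavelengths = np.arange(450, 750, 5)              # 60 assumed bands, in nm
X = rng.normal(size=(2000, wavelengths.size))
y = (X[:, 20:24].mean(axis=1) > 0).astype(int)    # signal planted ~550-565 nm

clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)

# Rank bands by importance and translate indices back to wavelengths.
top = np.argsort(clf.feature_importances_)[::-1][:5]
for band in top:
    print(f"{wavelengths[band]} nm  importance={clf.feature_importances_[band]:.3f}")
# If the top bands sat at an OR-light glare wavelength instead of a
# hemoglobin feature, that would flag a spurious correlation.
```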
Which brings us to the reality check, because we always want to ground these incredible advancements for the trailblazers listening. Where is this technology actually sitting right now in the clinical pipeline?
That's the million-dollar question.
The authors mapped these studies onto the IDEAL framework for surgical innovation, which tracks Idea, Development, Exploration, Assessment, and Long-term study.
And they evaluated their technology readiness levels, or TRLs,
Right. And the reality is this tech is largely sitting between TRL 4 and 7. It is firmly in the developmental and exploratory stages, specifically IDEAL stages 2a and 2b.
We have fantastic lab validation and early clinical feasibility,
but we do not have the massive multicenter randomized controlled trials yet that prove widespread clinical effectiveness against current standards of care.
And the physical roadblocks in the operating room are significant. We discussed blood acting as an optical sink and confusing the ovarian signals, but surgeons also have to contend with cervical mucus, bile, and uncalibrated, shifting operating room lighting interfering with the spectral capture.
It's not a controlled lab environment. Not at all. Furthermore, for this to become truly ubiquitous in modern gynecologic oncology, we have to solve the miniaturization problem.
Getting it into the tools.
We need to figure out how to pack these highly sensitive, high-resolution hyperspectral sensors into minimally invasive 10 mm laparoscopic and robotic tools without degrading the optical signal.
It is a tall order for the engineers, for sure, but the clinical momentum highlighted in this review is really undeniable.
It's moving very fast.
So, to summarize the landscape we have covered today: hyperspectral and multispectral imaging are effectively functioning as real-time optical biopsies. By capturing the unique, invisible spectral fingerprints of tissue, measuring everything from angiogenesis to lipid content, they have demonstrated incredible sensitivity for detecting cervical and ovarian malignancies.
And when eventually paired with robust explainable AI, this technology could fundamentally reshape the concept of the smart hybrid operating room.
It offers the promise of giving you, the surgeon, molecular level vision in real time, drastically improving complete resection rates and ultimately long-term patient outcomes. So, what does this all mean?
It signifies that we are standing on the precipice of a new era in surgical precision. We are actively moving from a paradigm that relies solely on tactile feedback and the limited visible spectrum of human vision toward seamlessly integrating objective molecular signatures into our immediate surgical decision-making.
And that leaves us with a lingering thought for all the trailblazers out there. As these optical biopsies and explainable AI algorithms become rigorously validated and integrated into your standard surgical workflow, how will the definition of surgical intuition evolve?
That is a profound question.
If a machine can definitively see the microscopic molecular margin of a tumor that your human eye cannot, will future surgeons learn to trust the hypercube more than their own lifetime of anatomical experience? Where exactly does human judgment end and the algorithm begin?
It's something everyone in the field needs to be thinking about.
It really is a philosophical and clinical boundary we are all going to have to navigate very soon. Thank you for joining us for this Journal Club deep dive on the digital pathology podcast. Keep pushing the boundaries and keep questioning the future of medicine. We will see you next time.