Digital Pathology Podcast

161: 7 Secrets to Smarter AI in Cancer Care | Lessons from NCCN Summit

Aleksandra Zuraw, DVM, PhD Episode 161


7 Counterintuitive Secrets from NCCN’s 2025 AI in Cancer Care Summit

When the National Comprehensive Cancer Network (NCCN) gathers healthcare leaders, people listen. I attended the 2025 Policy Summit on the evolving AI landscape in cancer care—and walked away with insights that were raw, practical, and surprisingly hopeful.

Instead of hype or overpromising, cancer care leaders shared honest strategies for implementing AI responsibly and effectively. In this episode, I break down the 7 counterintuitive secrets they’re using to fast-track adoption—while others remain stuck.

Whether you’re in digital pathology, oncology, or healthcare AI, these lessons matter for your projects.

KEY HIGHLIGHTS

  • 0:04 – Reporting from Washington DC: what the NCCN AI Policy Summit revealed about the real state of AI in cancer care.
  • 1:10 – Why NCCN guidelines shape cancer care worldwide.
  • 1:36 – Even top cancer centers struggle with AI implementation—why delays and budget overruns are common.
  • 3:16 – Secret #1: Stop chasing perfect AI tools—build strategic guardrail frameworks instead.
  • 6:20 – Secret #2: Plan for biological drift from day one.
  • 9:29 – Secret #3: Target underutilized care areas, not your strongest programs.
  • 12:07 – Secret #4: Design AI for patients receiving care, not just providers giving it.
  • 16:29 – Secret #5: Follow the pioneers—don’t reinvent from scratch.
  • 19:09 – Secret #6: Build flexible systems for evolving regulatory pathways.
  • 22:09 – Secret #7: Stop using human-level performance as the gold standard.
  • 31:23 – Why integration is now as important as innovation in AI for pathology.
  • 34:31 – What’s next: NCCN will publish a report based on these discussions.

THIS EPISODE'S RESOURCES

If this episode resonated with you, please share it with colleagues. Speaking the same language around digital pathology and AI implementation will help us all move forward.

🎧 Thank you for trailblazing with me. Until next time, keep trailblazing however you can.

Support the show

Get the "Digital Pathology 101" FREE E-book and join us!

0:00: Welcome my digital pathology trailblazers.
0:04: Last week, I went to Washington DC where I covered the NCCN Policy summit on the evolving artificial intelligence landscape in cancer care.
 0:14: And let me tell you, what I discovered there, what I heard there from the cancer care leaders in the US
 0:22: was very hopeful and pretty honest.
 0:27: I came out of there with a really positive attitude towards how AI can improve cancer care.
 0:36: Let me tell you, there was no hype, no overpromising, but very honest, raw real life examples of what is and what could be done.
 0:48: [Learn about the newest digital pathology trends in science and industry.
0:52: Meet the most interesting people in the niche and gain insights relevant to your own projects.
0:59: Here is where pathology meets Computer Science.
1:03: You are listening to the digital pathology podcast with your host, Doctor Aleksandra Zuraw.
]
 1:10: So taking one step back for those who might not be familiar, NCCN is the National Comprehensive Cancer Network. 
 1:17: This is basically the organization that defines cancer care guidelines worldwide. 
 1:23: When they put something in the guidelines, the entire oncology world listens. 
 1:29: Picture this, and I'm sure many of you have experienced similar things or heard of similar things. 
 1:36: You're sitting in the meeting room, explaining to your leadership team why your AI initiative is maybe 6 months behind schedule and a bit over budget, while others are successfully implementing it and seeing the results. 
 1:52: So you're not alone in your frustration. 
 1:54: Because even the most well funded cancer care institutions are struggling with some implementation challenges that keep healthcare leaders awake at night. 
 2:04: But here's what I learned that may change a few things for you. 
 2:08: I discovered 7 counterintuitive secrets that successful healthcare leaders are using to fast track their AI implementation. 
 2:18: While others remain stuck or just don't touch it at all. 
 2:22: So let's dive into it. 
 2:24: But before we get to those secrets, let me give you some context about what I witnessed at this summit. 
 2:28: This wasn't a typical academic conference. 
 2:31: These were cancer care leaders from across the healthcare spectrum. 
 2:36: And so we're talking about representatives from major cancer centers, policymakers, pharmaceutical companies, pathologists, radiologists, oncologists, AI experts as well. 
 2:47: And the most interesting thing, most striking thing also was the diversity of perspectives. 
 2:55: Not everybody agreed with each other, not everybody was equally enthusiastic about AI, but everybody recognized the potential and everybody was grappling with the same fundamental question, how do we implement AI responsibly and effectively? 
 3:12: So this brings me to the first counterintuitive secret. 
 3:16: The secret number one is to stop chasing perfect AI tools and build strategic frameworks instead. 
 3:23: Here's what caught everyone off guard at the summit, and this was super surprising to me as well. 
 3:29: The most successful AI implementations aren't using the best tools. 
 3:34: So it's not about, you know, the newest model, CNNs versus Transformer-based models for image analysis, or things like that. 
 3:41: And the most successful implementers are using strategic guardrail frameworks that prevent bottlenecks while maintaining safety. 
 3:53: So let me give you a real world example that was discussed. 
 3:57: Maryland has this new AI regulation, it's going to be effective on October 1st, 2025. 
 4:04: Now, instead of banning AI or requiring perfect validation, Maryland created specific framework requirements for health insurance carriers and pharmacy benefit managers. 
 4:16: So this is for health insurance, but here's what's interesting about this approach. 
 4:22: They prohibit using AI to deny, delay, or modify care. 
 4:27: So in these instances, delay, deny, or modify, they require human provider confirmation. 
 4:35: But only at these critical decision points, not everywhere, just at these points where decisions could significantly impact patient care. 
 4:44: So you're not going to make a human click approve for every single AI assisted thing, but at those critical points you will. 
 4:52: So the clinical world can apply a similar logic. 
 4:56: I heard about AI case prioritization systems where AI automatically prioritizes clinical findings for immediate review, or radiology findings, or whatever findings, right? 
 5:07: But a radiologist must confirm before any urgent clinical communication goes out, and the result is faster clinical case identification without compromising safety. 
 5:19: So you prioritize. 
 5:20: This is basically a kind of low-risk, safe assistance from AI. 
 5:27: And both approaches follow the same principle. 
 5:30: Let AI operate efficiently in low risk areas, but require human confirmation where decisions could significantly impact patient care. 
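To make this guardrail logic concrete, here is a minimal sketch in Python. It is a hypothetical illustration of the principle described above, not the mechanics of the Maryland regulation or any real system; the action names, risk score, and threshold are all invented for illustration:

```python
# Hypothetical guardrail gate: the AI acts autonomously on low-risk items,
# but decisions that could deny, delay, or modify care -- or anything above
# a risk threshold -- are routed to a human for confirmation.
# All names and numbers are illustrative.

CRITICAL_ACTIONS = {"deny", "delay", "modify"}  # decisions needing human sign-off


def route_ai_decision(action: str, risk_score: float, risk_threshold: float = 0.8) -> str:
    """Return how an AI-proposed action should be handled."""
    if action in CRITICAL_ACTIONS or risk_score >= risk_threshold:
        return "human_confirmation_required"
    return "auto_proceed"


# Low-risk prioritization proceeds automatically...
print(route_ai_decision("prioritize", risk_score=0.2))  # auto_proceed
# ...but a care-modifying decision always waits for a human.
print(route_ai_decision("modify", risk_score=0.2))      # human_confirmation_required
```

The point of the sketch is that the human checkpoint sits only at the critical decision points, so the low-risk path stays fast.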
 5:40: And this is important in digital pathology as well, because we need to ask ourselves how not to get caught up in the perfectionist trap. 
 5:50: We want the AI to be 100% accurate before we even consider implementation, but what if we could identify areas or ways of deploying it where strategic guardrails could be more effective than these super-high-accuracy tools? 
 6:08: I'm not saying to use bad tools, but to figure out a way to leverage the tools we have in a responsible way that already benefits patients. 
 6:19: Secret number 2: 
 6:20: Embrace biological drift instead of fighting it. 
 6:24: Many healthcare leaders at the summit expressed concern about the same issue. 
 6:27: What happens when our AI stops working? 
 6:30: What happens when your model drifts? 
 6:33: And the counterintuitive insight that emerged was that successful implementations plan for biological drift from day one. 
 6:43: So let me explain what this means because this concept was pretty confusing to me initially. 
 6:48: The AI model itself doesn't change or drift if we don't keep training it, right? Because the question that I hear often is, OK, what if the tool changes, like what if the tool has some kind of reinforcement learning capabilities or something and it changes? 
 7:07: Well, if you deploy it, you cannot have it change autonomously. 
 7:12: But even if the model remains exactly 
 7:15: the same, like the same code, the same algorithms are deployed, 
 7:18: What changes is the biological reality around us. 
 7:22: So this we cannot control for. 
 7:23: Patient populations evolve, disease presentation shifts, demographics change, so basically people change. 
 7:31: So while your AI model stays identical, its performance can degrade because the world it's analyzing has moved away from the training data it learned from. 
 7:43: And I see this in the preclinical world, where you have this concept of historical control data. 
 7:52: Historical control data come from animals that you previously analyzed, and you can base your judgment for the studies you're reading on what was there before. But this historical control data only goes 5 years back, because the biology of the animals changes, right? 
 8:12: So we have to account for that. 
 8:13: So what was the summit solution? Instead of trying to build perfect models that somehow anticipate future biological change, 
 8:21: the summit experts discussed the concept of building self-aware AI systems that can detect when their performance drops below an acceptable threshold, and it doesn't sound too crazy to me. 
 8:33: Basically, when, you know, you start getting results that are below some acceptance criteria, the model is going to tell you that it's performing badly and not produce results anymore. 
 8:48: One panelist noted, and I'm quoting here: "We need AI that can say, I'm not performing well enough anymore, and stop giving predictions." 
 9:00: And I think it's interesting because rather than spending months trying to predict possible biological change, you build monitoring systems that detect when performance degrades and alert human oversight. 
 9:13: So in digital pathology and in any other medical discipline, this means we need to think beyond the initial deployment. 
 9:21: And we need to plan for how we'll monitor our AI tools over time and what happens when they need recalibration. 
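The idea of a deployment wrapper that refuses to predict once it detects its own degradation can be sketched in a few lines. This is a minimal illustration of the monitoring concept, not any vendor's actual implementation; the class name, window size, and agreement threshold are all assumptions:

```python
# Hypothetical "self-aware" deployment wrapper: the model code never changes,
# but agreement with periodically sampled human review is tracked over a
# rolling window, and the wrapper stops emitting predictions once performance
# falls below an acceptance threshold. All names and numbers are illustrative.
from collections import deque


class MonitoredModel:
    def __init__(self, model, window: int = 100, min_agreement: float = 0.9):
        self.model = model
        self.recent = deque(maxlen=window)   # 1 = agreed with reviewer, 0 = disagreed
        self.min_agreement = min_agreement

    def record_review(self, agreed: bool) -> None:
        """Log whether a sampled prediction matched the human reviewer."""
        self.recent.append(1 if agreed else 0)

    def healthy(self) -> bool:
        if len(self.recent) < self.recent.maxlen:
            return True  # not enough evidence yet to declare drift
        return sum(self.recent) / len(self.recent) >= self.min_agreement

    def predict(self, case):
        if not self.healthy():
            # "I'm not performing well enough anymore" -> alert, don't predict
            raise RuntimeError("Performance below threshold; human oversight required")
        return self.model(case)
```

A monitor like this doesn't fix biological drift; it just makes the drift visible and hands the case back to a human instead of silently degrading.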
 9:29: Secret number 3 target underutilized care areas, not your strongest programs. 
 9:35: The intuitive approach for many healthcare leaders, and I kind of subscribed to this approach as well, is to implement AI in the best-performing 
 9:44: departments first, areas where you have strong workflows, experienced staff, proven processes. But the summit revealed a different strategy that may be more beneficial: target areas where standard care is already underutilized, underperforming, or difficult to implement consistently. 
 10:05: And one of the panelists, 
 10:08: Travis Osterman from Vanderbilt highlighted this challenge pretty well. 
 10:14: He said we don't even do molecular testing consistently, so let's not think of AI until we fix the basics. 
 10:23: But the counterintuitive insight here, at least for pathology, is where it gets exciting, because what if AI 
 10:31: could help you skip the traditional implementation barriers entirely? 
 10:36: And what am I talking about here? 
 10:38: You might have heard me talking about this particular application, so molecular testing prediction from pathology images. 
 10:45: Instead of requiring expensive next-generation sequencing on every sample, AI can predict molecular mutations from standard histology and triage cases for targeted testing. 
 10:58: And that reduces cost very significantly and improves access. 
 11:04: There are places where there's no option for molecular testing, so if I were in that place and had my tissue analyzed, I would rather have a strong prediction than nothing. 
 11:16: So this is basically technology leapfrogging where you are using AI to bypass traditional bottlenecks rather than trying to perfect the traditional process first. 
 11:28: And for digital pathology specifically, there are other areas where we could leverage technology leapfrogging once the technology is ready for that, for example glassless pathology. Or, less cutting-edge, this could mean looking at areas where we struggle with consistency or access: maybe primary biopsy interpretation in underserved areas, or complex stain interpretation where expertise is limited. 
 11:59: And anything that will give us a benefit over what is there but not compromise what is already there. 
 12:07: Secret number 4: design for patients receiving care, not just providers giving it. 
 12:12: This insight was a very interesting blind spot at the summit, and honestly it was an eye opener for me as well. 
 12:20: One of the panelists who had been a cancer patient herself shared the story that really stuck with me. 
 12:26: After chemotherapy, she received a multi-page list of potential side effects. 
 12:32: I don't know, she said like 3 pages or 4 pages of like what could possibly happen to her after the treatment. 
 12:39: And she asked the doctor, OK, which one will I get after this treatment? 
 12:42: And the response was, well, we don't know, you will tell us. 
 12:46: This is when it became clear in the room, and it was a pretty powerful moment, 
 12:51: that we've been focusing heavily on how AI can make providers better at giving care, 
 12:56: that is, at providing better care for patients, but we've given less attention to how patients experience receiving care. And Dr. 
 13:04: Goodman, who was the summit moderator, emphasized this beautifully. 
 13:09: He said patients are going to want to benefit from AI that's also personalized and patient focused and brings across the priority information in a clear way. 
 13:20: So basically the most effective AI implementations consider both sides of the care equation. 
 13:26: The provider-giving perspective: AI that helps doctors diagnose faster, optimize treatment protocols, streamline workflows, improve schedules for nurses, and all these things. But equally essential is the patient-receiving perspective: 
 13:45: AI that helps patients understand the diagnosis in accessible terms and personalizes patient education. And I firmly 
 13:53: believe in personalized education, especially in adult education and in-person settings, which is what we have in healthcare. 
 14:02: I'm kind of passionate about learning as an adult and teaching adults. 
 14:08: I know I'm digressing a little bit, but after having studied for several exams, figuring out the best way for me, and then trying to teach things online and offline, I see that the best way for me is not always the best way for somebody else. This guiding principle of "never underestimate your audience's intelligence, but never overestimate their prior knowledge, attitudes, and beliefs" kind of brings you to the concept of personalized education. 
 14:40: Like we always talk about personalized medicine, checking all these biomarkers. 
 14:45: The same counts for education. 
 14:48: And going back to cancer education, basically cancer information for patients: they only need the information that they need. 
 14:56: They don't need all the other information, like the panelist who had, I don't know how many pages I said, but a lot of pages of potential side effects. 
 15:05: What is she gonna get? 
 15:07: What does she need to mentally prepare for physically? 
 15:11: What does she need to tell the people that are helping her in her cancer journey? 
 15:16: So, anyway, big proponent of personalized education. 
 15:20: And with AI we can do that. 
 15:23: So that would be AI that helps patients understand their diagnosis in accessible terms, personalizes patient education, and prioritizes relevant information instead of overwhelming people. 
 15:34: And for digital pathology, this means thinking about how AI tools affect not just the pathologist's workflow, but how they could impact the patient experience. 
 15:47: So obviously there is this, OK, are we making the diagnosis faster? 
 15:50: Are we making it more accurate? But are we making it more understandable for the clinical teams who need to communicate with patients? Patients have access to their pathology reports, so should there maybe be a section that is very directed at patients? It's up to us to figure out what the receiving-care part of the pathologist's workflow would be. 
 16:15: And I had a fantastic guest on the podcast, Dr. 
 16:18: Lija Joseph. 
 16:19: She is a pioneer in interacting with patients as a pathologist and helping them understand their disease. 
 16:27: I'm going to link to this episode in the show notes as well. 
 16:29: Secret 5, Follow the pioneers. 
 16:32: Don't reinvent from scratch. 
 16:34: The summit revealed something that could save you 12 to 18 months of development time, and it's just an estimate. 
 16:40: I don't know how much time it could actually save, but the thing is that successful healthcare leaders aren't starting from scratch. 
 16:47: They are following the pioneers. 
 16:49: Something that was repeatedly referenced were the CMS AI initiatives, as an example to learn from and not compete with, and I want to highlight this: not compete with. 
 17:01: So one panelist noted. 
 17:03: And this was pretty practical advice. 
 17:05: Follow the pioneers and build on top of what they did. 
 17:09: Don't start from scratch if you don't have to. 
 17:11: And I think, in the whole world, there is always this fear of missing out on technology, and when something emerges and somebody has done it, instead of modeling what they did, we try to compete, to be at the same level and not be worse, rather than taking part in the revolution by just letting somebody else go first. 
 17:35: And following can be a good approach, and it works because you learn from their mistakes without making them yourself. 
 17:45: You reduce deployment risks and you accelerate time to implementation, as well as avoiding regulatory pitfalls they've already navigated. And the summit presentations showed multiple successful implementations already in progress: 
 18:00: ambient scribes, which are voice recordings during visits with patients that are then transcribed into patient notes, basically creating medical documentation on the fly and letting the doctor actually interact with the patient 
 18:19: and not type all the time. 
 18:21: There were examples given for surgical planning optimization, infusion scheduling algorithms, and radiology critical alerts. 
 18:31: And instead of viewing these as competition, smart leaders are studying them as roadmaps. It's never going to be a one-to-one copy of what somebody else did, but this word, roadmap, basically gives you a framework. 
 18:45: OK, what did they do? 
 18:46: How could I do it at my institution? 
 18:50: And for pathology, we could look into similar fields like radiology AI deployments, or just other successful digital pathology implementations in other institutions, study their approaches, learn from their challenges, and, if possible, adapt their frameworks to our environment. 
 19:09: Secret 6, build flexible systems for evolving regulatory pathways. 
 19:14: Here's what few want to admit, and this was a recurring theme throughout the summit, regulatory pathways for AI remain complex and evolving, even for successful companies. 
 19:26: But here is another counterintuitive insight: 
 19:30: this complexity is actually your competitive advantage if you know how to navigate it. 
 19:36: So an example is Artera AI, which received an FDA De Novo authorization in August 2025 for their AI digital pathology software for prostate cancer. 
 19:47: It's a prognosticating software. 
 19:49: They became the first AI powered software authorized to prognosticate long-term outcomes for patients with non-metastatic prostate cancer. 
 19:58: Now, here is the gap that was highlighted at the summit. 
 20:03: While the FDA has approved 950 AI medical devices between 1995 and 2024, Artera is currently the only AI cancer test included in NCCN guidelines, even though there are plenty of other 
 20:18: AI tools in existence. And this gap reveals a key insight: FDA approval and clinical guideline inclusion are entirely different processes with different requirements. 
 20:30: And I was talking to Artera's representatives, and I asked them, how did you manage to get it into the guidelines so fast? 
 20:38: And they said they basically did their tool development with that in mind. 
 20:44: So not only the FDA clearance, but also inclusion in the guidelines that later actually guide treatment. 
 20:53: And there was another strategic advantage of Artera's: 
 20:57: their De Novo authorization included a predetermined change control plan. 
 21:02: So this is something that the FDA 
 21:04: offers, and it allows software updates without further 510(k) submissions. 
 21:10: This regulatory flexibility enables rapid iteration if you need to change the tool and as we just said in the previous secrets that the biological drift is inevitable, so you will need to change your tool at some point, right? 
 21:26: So for the digital pathology community, this means building 
 21:30: implementation flexibility into your AI strategy rather than betting on one regulatory pathway and keeping it rigid. But we already know that, because AI exposed us to a lot more built-in flexibility compared to the previous ways of doing software as a medical device. 
 21:53: So now there is this predetermined change control plan that one can leverage 
 21:58: to address that potential flexibility of AI tools. Secret number 7: stop setting human-level performance as your AI goal. 
 22:09: An interesting debate emerged about AI performance expectations, and it made me think differently about how we evaluate AI tools. 
 22:19: I've had these think-differently moments before, but then often the mind goes back to the default, so this took my mind out of the default approach again. The traditional approach sets human-level performance as the gold standard for AI validation. 
 22:35: But summit experts questioned whether this benchmark actually makes sense. 
 22:40: So an example was discussed: the Roche-AstraZeneca TROP2 test, which received FDA breakthrough device designation in 2025. This is an AI-powered diagnostic that measures the ratio of TROP2 protein expression between tumor cell membranes and cytoplasm. 
 22:58: So you see it, but you cannot visually measure the ratio, right? 
 23:03: This provides a level of diagnostic precision that is basically not possible for visual scoring methods. 
 23:10: So there is no human equivalent to compare it to. 
 23:14: And in this case, setting a human level performance benchmark would be impossible. 
 23:19: because humans cannot manually perform these quantitative ratio calculations across thousands of cells with the same precision, or, I would argue, with any precision at all. 
 23:30: Like I am not able to do a reasonable ratio even on one cell. 
 23:36: It's a little bit like calculators. 
 23:37: Calculators didn't make us worse at 
 23:40: math, but they freed us to solve more complex problems, and AI tools can exceed human capabilities in specific domains while humans focus on higher-level decision making. 
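To illustrate the kind of quantitation involved, here is a minimal sketch of a per-cell membrane-to-cytoplasm ratio averaged across segmented cells, the sort of measurement no human could score by eye. The field names and intensity values are invented for illustration and are not the actual assay:

```python
# Hypothetical per-cell quantitation: for each segmented cell, take the
# ratio of membrane signal to cytoplasm signal, then average over all cells.
# Field names and intensities are illustrative, not the real TROP2 assay.

def membrane_cytoplasm_ratio(cells):
    """Mean membrane/cytoplasm expression ratio over all measurable cells."""
    ratios = [c["membrane"] / c["cytoplasm"] for c in cells if c["cytoplasm"] > 0]
    return sum(ratios) / len(ratios)


cells = [
    {"membrane": 120.0, "cytoplasm": 60.0},  # ratio 2.0
    {"membrane": 90.0,  "cytoplasm": 60.0},  # ratio 1.5
    {"membrane": 50.0,  "cytoplasm": 50.0},  # ratio 1.0
]
print(round(membrane_cytoplasm_ratio(cells), 2))  # 1.5
```

In a real pipeline this loop would run over thousands of segmented cells per slide, which is exactly why there is no human benchmark to compare against.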
 23:52: As always, there's another side of the coin, and we need to find balance, because there was a study published in The Lancet in 2025 that showed endoscopists' polyp detection skills declining after regular AI assistance. 
 24:09: So we must guard against overdependency while not limiting AI to human level performance, especially in cases when you can demonstrate that you can do better. 
 24:20: And in the digital pathology space, there is a fantastic paper by a friend of mine, 
 24:26: Famke Aeffner: "The Gold Standard Paradox." In our world, this means setting evidence standards based on the intended use, the potential impact, and the actual capabilities of the technology, 
 24:41: and not on arbitrary human-level benchmarks. And the classic example is annotations: OK, you have pathologists annotate different structures, and then an AI model recognizes those structures. 
 24:53: OK, every pathologist is gonna annotate differently. 
 24:57: My favorite, super-specific example of this is the CAMELYON challenge. 
 25:02: The CAMELYON challenge was a computer vision 
 25:05: challenge for detecting cancer metastases in lymph nodes. 
 25:10: So what you're looking for are epithelial cells in a section of a lymph node, and you would have pathologists annotate it, and that was the ground truth. But they also had a better ground truth, which was immunohistochemistry staining for cytokeratin, a marker of epithelial cells. 
 25:28: So you would also see how pathologists differed from that. I call it the better ground truth, better because it's objective, because it's based on chemistry and not on the visual perception of a human. 
 25:46: And you would see instances where there were epithelial cells stained within. 
 25:52: The lymph node that were missed by the pathologist. 
 25:54: So then, instead of comparing the performance of the AI algorithms detecting those epithelial cells against the manual annotations of a pathologist, you could actually compare it against the ground truth generated by immunohistochemistry. 
 26:08: So the human level benchmarks are pretty arbitrary. 
 26:12: Sometimes they're fantastic, sometimes they're not that fantastic. 
 26:16: OK, so let's take one step back and look what these seven secrets tell us about the current state of AI implementation in healthcare. 
 26:24: I don't think that what the summit revealed about 2025 was what most healthcare leaders expected. 
 26:30: Healthcare providers are already using AI tools, but not consistently and not systematically, and the gap isn't in AI capability, it's in implementation strategy. 
 26:42: Successful healthcare leaders are taking a strategic multi-track approach. 
 26:47: Track one would be strategic framework building, focusing 
 26:51: on guardrail placement rather than perfect tools. 
 26:54: Then track 2 patient centric design, building AI for both caregiving and care receiving experiences and track 3 capability appropriate standards setting evidence requirements based on what the technology can actually achieve. 
 27:10: Those succeeding in AI implementation in 2025 aren't trying to implement everything at once, so all these principles are for single use cases. 
 27:20: It's not that like everybody is now using AI all over and across the board. 
 27:25: These are single places where AI is or has been implemented successfully. 
 27:32: And they're executing all three tracks simultaneously while building systems that can adapt as regulatory clarity emerges. 
 27:42: How would it specifically apply to digital pathology? 
 27:47: First, when we're evaluating AI tools for pathology, I would say let's not get caught in the perfection trap, in the classical way of evaluating these tools. 
 27:59: Instead, let's ask: where can we place strategic 
 28:07: guardrails that maintain safety without creating bottlenecks? And this may not be a diagnostic application; maybe it is a prioritization application or some other workflow-streamlining application. Or maybe there is a way to guardrail a diagnostic application, but it doesn't have to be one. 
 28:23: Second, for diagnostic applications, we should plan for biological drift from day one. 
 28:30: The AI models will encounter new tumor types, new staining protocols, new demographics, so building monitoring into the implementation strategy is crucial. 
 28:41: Third, let's look for areas where standard pathology practice is already underutilized, maybe telepathology in rural or remote areas, or specialized stain interpretation where expertise is limited, or maybe 
 28:59: virtual staining, maybe glassless pathology, maybe molecular predictions from H&E images. 
 29:07: These might be better AI implementation targets than the strongest programs or the strongest diagnostic areas. 
 29:16: Fourth, let's consider the patient impact of the AI implementations. 
 29:21: How does faster, more accurate pathology diagnosis ultimately improve the patient experience? 
 29:28: Let's make sure we can articulate this connection. And here, "faster and more precise" again doesn't need to be during the diagnostic step. 
 29:38: It can be during the workflow step; it can be during the diagnostic material creation step. 
 29:45: Like let's look at the full tissue journey from being taken from the patient to being delivered to the patient back as a pathology report. 
 29:53: OK, what can we change there? 
 29:55: Like, let's look at it a little bit more holistically. 
 29:58: 5th, let's study successful implementation in similar fields. 
 30:02: So all the trailblazers who already did this, who already went digital, who already have a device cleared, a scanner cleared. 
 30:10: Let's learn from them, and also from the pioneers in radiology AI. 
 30:15: I am super happy to see that at the Society for Imaging Informatics 
 30:21: in Medicine, we have more and more pathologists being represented, because we are an imaging specialty as well. 
 30:28: It's not only radiologists, but let's see what has been done in the radiology space. 
 30:33: They are very much ahead of us in terms of the logistics of digitization. 
 30:39: Obviously they have a different process, but AI is being developed in parallel in both 
 30:45: branches of medicine, right, in both specialties. 
 30:48: So, OK, let's learn from each other what worked better in pathology and what worked better in radiology. 
 30:54: How can we leverage our learnings? 
 30:57: 6th, let's think about building flexibility into the regulatory strategy. 
 31:03: The landscape is evolving super fast, and the old rigid approaches, well, will probably not serve us anymore. 
 31:12: And then finally, let's set performance standards based on what the technology can actually achieve and not on arbitrary comparison to human performance. 
 31:23: So where does that leave us? 
 31:25: What are the implications for the future of digital pathology and AI implementation in medicine? 
 31:30: I think what we're seeing is a maturation of AI implementation strategies, which I love. 
 31:36: We're moving beyond the "well, look what AI can do" phase 
 31:39: into the "how do we actually deploy this responsibly and effectively" phase, because we already know it can do a lot. And now the question is, OK, how can we leverage this for the benefit of patients? For digital pathology, I think we're going to see more focus on integration rather than innovation. Or 
 32:00: maybe not rather, but in parallel: integration is going to be as important as innovation, because we already know that AI can help pathology. 
 32:09: The question is how to implement it in a way that is sustainable, scalable, and beneficial for all stakeholders, including patients 
 32:18: As recipients of care. 
 32:20: So what I've seen at the NCCN summit was the organizations that are successful are those who think about it systematically and not as single use applications. 
 32:32: And also, they're not just buying the AI tools; they're analyzing and redesigning workflows, updating policies, training staff, and preparing for long-term technology evolution. And something I want to highlight here is that analyzing and redesigning workflows matters, because first we have to understand our workflows very well, and the bigger the organization, the more specialized everybody in the organization is. 
 33:03: So understanding each other's workflows to be able to apply AI systematically is a challenge; it's a labyrinth to navigate a full organization. 
 33:13: But analyze and then redesign, because the workflows that we have right now, even the good ones, were not designed with AI in mind, right? 
 33:26: So everything that is paper-based... 
 33:29: OK, now we have large language models and you can scan this stuff, but here again you're digitizing analog, 
 33:36: where the new way would be not to create an analog record at all. And there is a mix of paper and digital: digital in the form of typing, digital in the form of 
 33:49: dictating. So there's a mixture of inputs, and some inputs are better suited for building these AI workflows. 
 33:57: So that's super important. 
 33:58: So what are the next steps after the summit? The summit had an interesting format, because we had panel discussions 
 34:06: with experts, and then the participants would go with facilitators from NCCN, and also with the experts, to discuss. So the participants in those groups, after the panel discussions, had to answer questions: OK, what are the challenges? How can regulators help? 
 34:23: And basically, as we were discussing the topics, notes were being taken, and based on these notes a report is gonna be issued. 
 34:31: So whenever that's out, I'm gonna let you know. The resources from today's episode you can find in the show notes, and if this resonated with you, I would love it if you could share it 
 34:43: with your colleagues. I believe that something that's gonna help us advance digital pathology and AI is if we are informed about what's going on 
 35:03: at organizations and institutions that are leading the way, and also if we speak the same language. So thank you so much for sharing. 
 35:10: Thank you so much for joining me today, staying till the end. 
 35:14: It means you are a real digital pathology trailblazer, so keep trailblazing however you can and I talk to you in the next episode.