Digital Pathology Podcast

157: How Academic Pathology Programs Can Prepare for AI | UPMC Podcast

Aleksandra Zuraw, DVM, PhD Episode 157

Send us a text

“AI in Pathology Isn’t Coming — It’s Already Here. Are You Ready?”

From confusion to clarity — that’s what this episode is all about. I sat down with Drs. Liron Pantanowitz, Hooman Rashidi, and Matthew Hanna to dissect one of the most important and comprehensive AI-in-pathology resources ever created: the 7-part Modern Pathology series from UPMC’s Computational Pathology & AI Center of Excellence (CPAiCE). This isn’t just another opinion piece — it's your complete guide to understanding, implementing, and navigating AI in pathology with real-world insights and a global lens.

Together, we discuss:

  • Why pathologists and computer scientists are often lost in translation

  • How AI bias, regulation, and ethics are being addressed — globally

  • What it really takes to operationalize AI in patient care today

If you’ve ever asked, “Where do I even start with AI in pathology?” — this is your answer.


🔍 Highlights & Timestamps
00:00 – The importance of earned trust in AI
01:00 – Education gaps in AI for both pathologists & developers
03:00 – Why CPAiCE was built & the three missions it serves
07:00 – The seven-part series: a blueprint for AI literacy
10:00 – Making AI education accessible without losing technical integrity
13:00 – How this series is being used for global teaching (including by me!)
17:00 – Generative AI in creating figures vs. human-authored content
21:00 – Eye-opening global AI regulations that pathologists MUST know
24:00 – Ethics, bias & strategies to mitigate real clinical risks
30:00 – What’s next: CPAiCE’s mission to reshape pathology education & practice
34:00 – A teaser: the first CPAiCE textbook is on the way!


📚 Resources from This Episode

📰 Read the full series (open access!):
 Modern Pathology 7-Part AI Series: https://www.modernpathology.org/article/S0893-3952(25)00001-8/fulltext

👨‍⚕️ UPMC’s Computational Pathology & AI Center of Excellence (CPAiCE)
 🌍 Creative Commons licensing means YOU can reuse, remix & teach from these resources — just cite the source.



Support the show

Become a Digital Pathology Trailblazer, get the "Digital Pathology 101" FREE E-book, and join us!


Ethics and Trust in AI

Matthew: [00:00:00] Ethics isn't an afterthought as sort of part of the responsible AI development, because I think the trust in AI is earned. We really shouldn't assume it.


The Role of Education in Computational Pathology

Liron: Education was key for us and that was because most pathologists we felt had gaps when it came to AI. And most technical people like computer scientists working in AI, had gaps in pathology 'cause they were not trained that way.

Aleks: Keywords: educate, and the trust is earned. De-demonize the bias: through education you basically make people aware of how it works, especially in the aspects people are most afraid of, 'cause the thing people are most afraid of is that it's gonna hurt somebody, and it can hurt through, you know, one-sided data.

The moment you know how it works, you know how to mitigate the risks that come with AI tools.


AI’s Role in Improving Patient Care

Hooman: So this would be our way to leave a mark, enable more pathology lab medicine departments to incorporate computational pathology and AI [00:01:00] into their workflow landscape because we honestly feel that without it, you could be compromising patient care in the future.

Aleks: Welcome, my digital pathology trailblazers. Today I think I have the highest number of guests I have ever had on the podcast. I have three guests: Dr. Liron Pantanowitz, Dr. Hooman Rashidi, and Dr. Matthew Hanna from the University of Pittsburgh Medical Center, who are now co-creating the Computational Pathology and AI Center of Excellence.

Welcome everyone. How are you today, guys?

Hooman: Good, thank you so much. 

Liron: Great start out.

Aleks: Because I have so many of you, I have a couple of questions, and the main topic of our podcast is the 7-part AI series that you co-authored, which is basically a super comprehensive and very up-to-date [00:02:00] guide for pathologists and medical professionals about AI in pathology.

So Liron, I'm gonna start with you. Let's give a super brief introduction to those who may not know you yet, even though you're a pretty famous person. You all are pretty famous people in the digital pathology world, but for those who are just starting this journey, a couple of words about you and about the CPAiCE, the Computational Pathology and AI Center of Excellence at UPMC.

Liron: Yeah, thanks Aleks, and thanks for inviting us all onto this podcast to discuss our favorite topic, which is computational pathology. For those of you who don't know me, I'm the chair of pathology at the University of Pittsburgh in Pittsburgh. I came back two years ago to chair the department, and one of the first things I did was create a division of computational pathology and informatics.


The Future of Computational Pathology at UPMC

I think it's important to have a formal computational pathology division because that's necessary [00:03:00] for current and future practice. And because we don't yet know enough about computational pathology, I felt it was necessary to create a center within that division, which is our Computational Pathology and AI Center of Excellence.

And to help actually run that center, I recruited Dr. Hooman Rashidi, and together we recruited Dr. Matthew Hanna to participate in the center. Our center has three missions: one, an academic mission; two, to solve problems; and three, while we are doing one and two, if we can generate intellectual property, why not?


The Role of Education in Computational Pathology

So education was key for us. And that was because most pathologists we felt had gaps when it came to AI. And most technical people, like computer scientists working in AI, had gaps in pathology 'cause they were not trained that way. The three of us contribute to the literature, and we read and we peer review a lot of literature.

And we noticed that there are two types of publications out there related to computational pathology. [00:04:00] And there are a lot; there are tens of thousands of publications on computational pathology if you do a PubMed search. Many of the published papers are opinion pieces, where people, you know, discuss the hype and topics like ethics related to pathology, but they don't necessarily use or develop AI.

On the other hand, there's literature from folks who actually develop the AI, coming from a computer vision background, and their publications are very technical, often making it difficult for folks in the laboratory to understand the technology and figure out how to actually apply it to routine practice.

Because of our academic mission to share and disseminate what we've learned through computational pathology, the three of us got together and decided to provide at least some contribution to the literature that would address computational pathology from A to Z: one that covers all topics relevant to pathologists [00:05:00] and those training in pathology, but also relevant to computer scientists, data engineers, and industry folks. And that's how this series was born.

Aleks: Thank you so much. Hooman, you're gonna be my next person to introduce yourself, and I'm just gonna give a super quick background. The first time we met was in 2023. We were both speakers at the annual meeting of the ACVP, the American College of Veterinary Pathologists.

And then I think we were the ones that would geek out most about AI, so at the dinner they sat us together and we spent like two hours talking about things. So I knew the center was coming; I knew the AI paper series was coming. But why the series, why now? And what did you think was missing that kind of triggered creating this?

Hooman: I'm Hooman Rashidi, [00:06:00] and I'm the executive director of our AI center, CPAiCE, which is our Computational Pathology and AI Center of Excellence. I'm also the Associate Dean of AI for the University of Pittsburgh School of Medicine. So I'm wearing two different hats, but they're very complementary.

So I would say a couple of things. One is, as Liron said, regarding the AI articles that he had written, Matthew had written, I had written, and others had written: there are some great articles out there, and that's great, and some of us have been using some of them.

But there has been no one cohesive series in the same journal that follows a path that's relevant if you would want to use [00:07:00] it for a course, for example. And given that, as part of my job here at Pitt, the dean wanted us to also be among the global leaders in AI literacy for medical students and graduate students within the different health sciences.

We obviously needed the right reference material that would be able to also support that. So there was a double whammy here, where as Liron said, we are trying to educate the vast majority of our pathologists and laboratory professionals in terms of AI, you know, in their day-to-day practice.


AI Literacy for Medical Students and Professionals

So how could they assess tools properly? How could they assess literature appropriately? So, just a primer, if you will, that could let them kind of dive in and be able to, you know, do it appropriately, efficiently, [00:08:00] and in a scientifically sound way. But secondly, how do we democratize AI literacy for our upcoming, you know, residents, fellows, medical students, graduate students, and…

So this was an opportunity where CPAiCE was leading the effort in terms of bringing together the right group of experts that would be able to tackle those exact needs. By bringing in the right group of authors, strong both technically and also, you know, in practice, we could then create a series.


Inside the Seven-Part AI Series

A series which would serve the needs not only of the community as a whole, if they read these articles independently, but also become the primer for the AI course within our medical school, [00:09:00] you know, for our graduate students and medical students. So that was the main reason for bringing it into the space: serving both needs.

And then the other thing that was important to us, and again Liron touched on that, was we also wanted to make sure the write-ups were very accessible and ingestible by everyone who would read them, because we know who the vast majority of the readers are gonna be: your average reader, the average, you know, person who's gonna be accessing these articles.

And quite frankly, 99% of the people who are interested in AI are not necessarily well versed in the coding aspects. And when you bring in, you know, coding elements and mathematical elements, you will lose the audience. So we also wanted to make it [00:10:00] accessible from a content standpoint, so that it was more around concepts.

Rather than very detailed technical aspects, though still incorporating technical material for those who actually had that extra, you know, requirement within their particular framework. So that's where the series was born. We walk people through the series starting with the first article, which is around terminology: it covers close to 200 of the most commonly used terms in machine learning and AI, with definitions and examples. From what we know, it's one of the most extensive glossaries of its kind, along with a quick general introduction to what generative AI [00:11:00] and non-generative AI are.

Then Article 2 is basically a deeper dive into the generative AI aspects: what is ChatGPT, what are custom chatbots, and so on, their advantages, limitations, and all of that.

And then the third one goes into non-generative AI, the things we've been using for decades that people should be familiar with: cancer diagnosis versus normal, sepsis versus no sepsis, and so on. And then the fourth one is around the statistics of AI, both generative and non-generative, in terms of their similarities and differences, especially since a lot of the statistical information within generative AI is unknown to most people right now.


Understanding Bias in AI for Pathology

So it's about familiarizing them with those terminologies. And then the fifth one is around regulatory aspects of AI, so people know what kinds of regulations exist both in the US [00:12:00] and outside, followed by the sixth one, which is around bias considerations and ethical aspects. And then we finish up with machine learning operations and the future of AI with multi-agentic frameworks, and how they're going to be completely transforming our landscape.

So that was the idea: the baseline framework for your average reader is set, and then they can go from there to other papers, other literature, and other resources to kind of upgrade their learning. I hope that explains it.

Aleks: Definitely. And I know, well, you know what I did with the series; I don't know if Matt and Liron know this.

So the moment it went out, I get a message from Hooman: "Hey Aleks, they're out there, and here are all the links." So I get the links to all seven papers, and I'm like, fantastic. [00:13:00] I have this journal club series at 6:00 AM on Fridays, and I'm like, this is what I'm gonna be talking about.

And I actually, like, went through all of them, read them all, and explained them to the people joining those live streams, some of them just starting in the digital pathology space. And you know, I chat with them, I talk with them during the livestream. So there were people just joining the computational pathology and digital pathology space, people who had, like, no idea about AI but know that this is coming.

And that was such a fantastic resource for me to walk people through this topic. And I'm gonna be doing more with this: I'm gonna be incorporating it into a book that I've written, everything I can basically do with it. And before I officially introduce [00:14:00] Matthew Hanna and ask you the next question, what I wanted to thank you for was

the license you published this under. This series is published in Modern Pathology, but it is published under the Creative Commons Attribution license, which is the most permissive model you can publish under. You can share it, remix it, build on it, do commercial work, and basically do everything with it as long as you say where it comes from.

So thank you so much, because you have basically given the world a resource that everybody can take and make their own version of, with this at its core.

Hooman: Yeah. Our goal was truly democratizing it and making it accessible to everyone. And kudos to George Netto, the past editor-in-chief of Modern Pathology, [00:15:00] who was supportive of that and basically commissioned this series; I think he should be credited for that. And we should also credit Catherine, the managing editor of Modern Pathology, who was instrumental in helping us put it in place. But yeah, you're right.

We intended to make sure that, basically, more and more people could actually have access to this.

Aleks: This is amazing. Matt, it's your turn now. It's so difficult to manage three guests in the box, but give a quick intro for those who don't know you yet. And your question is gonna be this: obviously, as you were writing the series, you were also using the AI tools that you talk about, the generative AI. I don't know if you used ChatGPT [00:16:00] or whichever tools, and I know you used them for figures. So the question for you is: how did you balance this AI-assisted workflow that, like, accelerated you, with the teamwork with all the co-authors who contributed to the series?

Matthew: Thank you. It's a pleasure to be here. I'm Matthew Hanna. I'm a pathologist at UPMC and, with CPAiCE in particular, the director for AI operations. My other hat at UPMC is Vice Chair for Pathology Informatics. It was really a pleasure to come into UPMC and have this be sort of one of the first big bangs.

But of course, it wasn't exclusive to only us at UPMC. This series was honestly a result of a lot of longstanding relationships and new connections [00:17:00] that were formed. A lot of us had this mutual interest in advancing AI in medicine and in pathology. Some of us had collaborated across institutions for many years, and I think bringing in our collective network was truly beneficial, because it aligned with the very specific angles that we wanted to include.


AI Literacy for Medical Students and Professionals

So we sort of organized the project: we set up regular meetings and working documents, and we assigned clear ownership for the various topics. And we drew on a diversity of perspectives from a lot of other academic researchers and academic centers, as well as, of course, practicing clinicians, as Dr. Pantanowitz had said: you know, we wanted to walk the walk and talk the talk here, so that practicing pathologists actually put out practical information about this to the masses. And of course accessibility, as Dr. Rashidi [00:18:00] said, was one of the project's biggest focuses.

We sort of built in this time to all work together, and, you know, hopefully it was very fruitful in being able to provide the sort of content that we did. In terms of drafting some of that content: some of the images were generated using generative AI, and we wanted to use that as a scaffolding tool, for outlines, brainstorming, and generating ideas. But ultimately, all of the content that was fleshed out and drafted was human-written, with some support for the images from these generative AI tools. It was interesting to work with them, since the content being drafted is itself about AI, and [00:19:00] being able to use the tools in generating the content was, I think, very rewarding.

Aleks: It doesn't sound like AI at all. And by now, you know, everybody who's using these tools knows; like, the frequency of certain words increased when ChatGPT started being used. There are certain words, they'll come to me, but this doesn't sound like it. And you say you didn't use the text AIs for that. How did you do it so fast, then? Because if I was writing it…

Matthew: Yeah, we definitely didn't use the AI outputs verbatim, but it sort of gave us a head start, an outline, and we had a very aggressive timeline for publishing the content.

And I'll also call out Dr. Netto and his group at Modern Pathology for keeping us honest with our timelines. [00:20:00] But it was a great team of people across, you know, lots of various institutions. We were able to put this together and load-balance the work to actually publish all that amazing content.

So it was definitely an all hands on deck approach.

Aleks: You are super fast. And, like, the only way for this to still be relevant was to go so fast. So you guys achieved something unachievable in the publication world, which was basically putting out seven full review papers.


AI Regulations and Compliance in Medicine

And, like, the one I was most impressed by, or maybe not most impressed, but the one that filled my largest knowledge gap, was the regulatory one. I had no idea how many guidelines and regulatory frameworks are already out there, because [00:21:00] the slogan, the thing that everybody says about these technologies, is:

"Oh, regulators are lagging behind, they, like, don't know." Oh yes, they know. And they put out regulations, and all those regulations are listed in a table in one of your papers. So now, when somebody says regulators are lagging behind, I'm like, no, go read it. Which one is it? Five or four? I dunno.

Which one? Yeah, five or six.

Hooman: Yeah. So five, yeah, number five. But I have to say, on that one, just to be objective, on behalf of both myself and Dr. Hanna, I have to credit Dr. Pantanowitz, because…

Aleks: Yes. 

Hooman: He did. He spent more time, you know, reviewing; he even bought books on what various countries are doing.

So initially, I'll be honest, I was like, really? Is this necessary, Liron? You know, let's try [00:22:00] to expedite it.

Aleks: I'm impressed.

Hooman: But no, it worked out really great. Credit to him: he brought in, I mean, we brought in a fantastic group, in addition to him driving it.

So that basically sets it apart. More kudos to him for driving that one as the lead.

Aleks: Liron, how do you feel as the new regulatory expert in the space, after doing all this research? Is this something you liked, or was it that nobody else wanted to do it? Because I would not volunteer to do the regulatory one, for sure.

Liron:  Let's be honest. The two of them assigned it to me 'cause they didn't wanna do it.

It's probably one of the most important topics, because, you know, you can have a lot of fun building things and testing them out, but if you actually want to use them every day in routine practice, so that patients benefit and the people in your [00:23:00] practice benefit and get the added value, you really need to understand how to navigate that regulatory environment.

And so I felt it was important for me to get up to speed with everything that had been created, which is why Dr. Rashidi is right: I spent a lot of time, you know, investigating the regulatory environment, not just in the US, because our target for this series was a global population of people interested in computational pathology, not just a US-focused one. That's why he's right that I read about regulations in China, the EU, and elsewhere in Asia, so that I could, you know, understand the regulatory environment there. But the key to putting it all together was to look for common threads.


Ethics and Trust in AI

What are all the regulators and policymakers actually trying to achieve when they're doing this? Is it more than technical? Is there an ethical and social aspect to this that they would like to address? Are they trying to build trust with the end users and the patients [00:24:00] who will benefit from this?

And so I'm glad the article turned out to be, you know, readable and usable, because we really looked at regulations around the world and conveyed that theme. And Dr. Rashidi is right: we assembled a really amazing team of people who have years of experience in regulation in the lab environment, you know, with LDTs, compliance, and the FDA.

It actually came together quicker and easier than we had anticipated.

Aleks: Yeah, I was really impressed. So for the other ones: who was the main driver for the ethical considerations?

Hooman: Matthew. Matthew did that. Dr. Hanna. 

Liron: Matthew is the most ethical among us, so that's why we assigned it to him.


Understanding Bias in AI for Pathology

Aleks: He got the task. I loved both. It was bias and ethics together, no?

Hooman: Yes. Right. 

Aleks: Yeah, because [00:25:00] I think this is an interesting topic, not only because, you know, we wanna be ethical, but because it has this common meaning that everybody understands, and it also has a very specific meaning in the medical field; it's actually a field of science: bioethics, biomedical ethics.

And the same with biases: you have the common meaning of the word, oh, something is biased, somebody is biased, but then you have this super specific meaning: okay, why is our AI biased? What sources of bias are contributing to that, and what does it mean?

And what I also loved in that one is that you always have a mitigation strategy: okay, this is where it comes from, this is what the risk is, and this is the mitigation strategy, [00:26:00] which I think is also one of the common threads in how regulators are looking at this topic. But Matt, tell me, because I felt the transition was clear:

how did you handle these two, the common meaning and the scientific meaning? Did you, like, consciously transition from one to the other? And how was the experience of writing that one, or leading that one?

Matthew: No, it's fantastic, and I'm glad it came across clearly. You know, I think ethics and bias in AI aren't just theoretical issues.

They're very real, especially in medicine. When we deploy these AI tools in healthcare, we're not just automating tasks; they're making decisions that directly affect people's lives. So what we wanted to ask in writing the paper is: how do we build a model?

What data is it trained on? Where do ethics and bias come in? And more importantly, where might it fail? And so, just [00:27:00] from the perspective of education, we tried to put two and two together and say, okay, let's educate and show where there is bias, and then how you can mitigate it. Bias can often enter AI systems through historical data, as we mentioned in the article: data that may reflect different diagnosis rates, or even how pathology slides are captured and labeled.

And I mean, it could be pervasive through a lot of what we do. So if we don't address those biases upfront, we risk encoding all of them within our data, which will affect all of the outputs. And so at CPAiCE, and in our broader work, we're really focused on transparency, accountability, and inclusion.

And so we wanna make sure we understand how those models work, who they may benefit, and who may be at risk. And, you know, ethics isn't an afterthought; it's part of responsible AI development. Hopefully we included a lot of that in the paper, from what people read. [00:28:00]


Ethics and Trust in AI

So we just wanna keep clinicians, patients, and these communities engaged in the conversation and educated about it, because I think the trust in AI is earned; we really shouldn't assume it.

Aleks: Yeah. Keywords: educate, and the trust is earned. And I think this particular one, like, de-demonized the bias. Through education, through this particular one, you basically make people aware of how it works, and when you know how it works, especially in the aspects people are most afraid of; because the thing people are most afraid of is that it's gonna hurt somebody, and it can hurt through, you know, one-sided data, different… different options.

The moment you know how it works, you know how to mitigate the risks that come with AI tools, like with any other tools, right? [00:29:00] So I think you very much achieved this goal of education and earning trust in AI, because now it's so much clearer how it works and where you need to pay attention. Fantastic.

I'm just impressed with this series. I love it. I'm, like, building a course on it as well, and spreading the word. So what's next, guys? For each one of you: do you have, like, next publications you're gonna write, next tasks you're responsible for at CPAiCE, or, in general, what do you wanna do next in your digital pathology

and computational pathology life at UPMC and CPAiCE?

Hooman: So I'll take that. I think that's a really good question. And this is kind of in tune with what both Matthew and Liron had, you know, mentioned [00:30:00] earlier, which was that we've built this as, if you will, a building block of various things that we have within CPAiCE and, you know, the University of Pittsburgh.

In terms of our mission: how do we incorporate computational pathology and AI within our framework so that it actually expedites research, innovation, and education? So it checks off the academic missions, like what Liron was saying. But just as importantly, it's our duty as practitioners and scientists to also propagate the knowledge to the next generation, who are gonna be affected by these technologies. So this series is just one building block among various other tools that we are building, and coalitions that we're building with other centers, that enable us to pass [00:31:00] on the knowledge that we're gaining, you know, from

The content, but also the center building that's required for pathology departments to have. That's one thing that I think all three of us are a hundred percent aligned on: we feel very strongly that the need for computational pathology and AI is not just, you know,

a nice want; it's an absolute need. And the more these AI tools are being incorporated, the more departments need to carve out resources to set up centers similar to what we've set up with CPAiCE. So this would be our way to leave a mark, hopefully to, you know, enable more pathology and lab medicine departments to


AI’s Role in Improving Patient Care

Incorporate [00:32:00] computational pathology and AI into their workflow landscape, because we honestly feel that without it you could be compromising patient care in the future, and even now, starting now, if you don't have the right toolsets and the right people at the table. So that's really, in my head, what

I envision, you know, these types of activities, resources, and center outputs contributing: to us locally in our Pitt environment, but also, just as importantly, to the entire global community.

Liron: Yeah, I'd like to add to that. Number one, I think it's very unique that the three of us together were able to create this.

So it's, you know, one plus one plus one equals much more than three. We have an amazing synergy.

Aleks: Seven, right? [00:33:00] Seven part series.

Hooman: One, yeah.

Liron: Exactly. You know, I think there's a lot more to come from us. We are growing our team; we continue to hire faculty, software engineers, data scientists, et cetera.

So our team's growing, which is great to see with our success. We are building a lot of tools, and we have already started to test and use some of those tools in our own environment. So watch that space; CPAiCE will definitely add to your space. We are putting on lots of courses, as Dr. Rashidi said. And what I think is very exciting is that some of the courses he's helping create are for medical students. And I think for the first time we may change the pipeline of medical students coming into pathology, because now they are looking at a career not in the basement in some smelly morgue, but a career where


Upcoming CPAiCE Textbook on Computational Pathology

They can use AI [00:34:00] to practice medicine. And that's pretty cool. And a heads-up: we are currently working on our first CPAiCE textbook, so soon you'll be able to expect a textbook from us.

Aleks: Amazing. I'll have to review it on the channel. Matt, anything? Any last words on what's next for you?

Matthew: I'll just say, you know, we were so excited to put out the seven-part series, and I think for us next steps also include taking it from research and review papers into practice.

And the goal is to actually be developing these tools, as Dr. Pantanowitz had said, and using them in clinical practice. And part of my role here is also doing that clinical translation. And, you know, I'm excited to say we've already started on that journey, and there are a lot of exciting things to come.

Aleks: Amazing. My goal for this podcast is basically to send people to the series, whoever is entering this space. [00:35:00]

This is definitely the resource for them to go through. Something that I think is a superpower of this series is that you go through it with understanding, and you can then contribute to the digital pathology AI space with your own background, because this is a very multidisciplinary field.

But if you don't have this baseline, this common understanding of the concepts and topics that you guys covered in this series, then you spend, like, a year listening to meetings and figuring out what it is about and where you can plug your expertise in. Whereas with this, like, I think you said this is a scaffolding.

This is, like, this gives you a frame. You know where your expertise comes in as a pathologist, as a computer scientist, as a clinician, or whoever else [00:36:00] is part of the team, and you have a common language to speak to all the other members of the computational pathology team. So that's something.

That's why I wanna push this series to everybody who is even interested.

Hooman: That's awesome. Thank you so much.

Liron: Perfect. Well, thank you for including the three of us. Hopefully you were able to manage this; it seemed to work out well. And maybe we set a precedent for you to include more than one guest.

Aleks: Even more, maybe. I need to visit you in person like we were discussing, but we will definitely meet each other at a conference. Which is your next one? Are you all ever going to the same one?

Hooman: Yes. We… 

Aleks: Or do you split? 

Hooman: We each have our own missions for some of them. I think the one that pretty much all of us go to is USCAP.

All three of us typically attend USCAP. [00:37:00] We just came out of there. And Liron and I are going to the European Digital Pathology Society meeting. So yeah, I'm sure we'll see each other.

Aleks: Okay. Then when we see each other, all four of us in one place, we have to do a live version of that.

Thank you so much for joining me. Thank you for taking the time out of your busy schedules; it took us like half a year to schedule this. And thanks so much for writing the series.

Hooman: Thank you. Thanks for being a big fan and yeah, big advocate. 

Liron: Thank you so much.

Matthew: Thank you for your support. 

Aleks: Have a wonderful rest of your day.