Digital Pathology Podcast

134: AI Trust Issues, Challenges, and Multimodal Insights in Pathology with Hamid R. Tizhoosh, PhD

Aleksandra Zuraw, DVM, PhD



In this episode, I’m joined by Dr. Hamid Tizhoosh, professor of biomedical informatics at the Mayo Clinic, to unravel what’s truly holding back AI in healthcare, especially pathology. 

From the myths of general-purpose foundation models to the missing link of data availability, this conversation explores the technical and ethical realities of deploying AI that’s accurate, consistent, lean, fast, and robust.

📌 Topics We Cover

  • [00:01:00] The five essential qualities AI must meet to be usable
  • [00:04:00] Why foundation models often fail in histopathology
  • [00:08:00] What “graceful failure” looks like in AI for diagnostics
  • [00:13:00] The problem with data silos and missing clinical records
  • [00:22:00] Why specialization in AI models is non-negotiable
  • [00:34:00] The role of Retrieval Augmented Generation (RAG)
  • [00:43:00] How transformer models broke away from brain mimicry
  • [00:50:00] Academic dishonesty, publication pressure & bias
  • [01:04:00] Decentralized AI and why it won’t solve big problems
  • [01:12:00] Data diversity, disparity, and the realities of healthcare bias

🔍 If you’ve ever wondered why AI tools stall in real-world pathology labs, this episode breaks it down with honesty, clarity, and vision.

#DigitalPathology #AIinMedicine #ClinicalAI #PathologyInnovation #BiasInAI

Support the show

Become a Digital Pathology Trailblazer, get the "Digital Pathology 101" FREE E-book, and join us!

AI Trust Issues, Challenges, and Multimodal Insights in Pathology with Hamid R. Tizhoosh


Introduction to Healthcare Problems

Hamid: [00:00:00] We have to start from problems. What are the problems that we wanna solve? Fundamentally, when we talk about healthcare, we wanna solve three major problems in biomedical informatics, and AI is a subset of that. We wanna look at triaging, diagnosis, treatment. If we optimize triaging, diagnosis, treatment,

you have a lot of improvement in the healthcare, and to do that, there are requirements. Whatever solution you come up with, it doesn't matter what the name of that technology is. You have to be accurate, you have to be consistent, you have to be fast, you have to be lean, you have to be robust. Most publications, most works, focus on one or two of these.

This is one of the things that is rarely tested in academic research, that things do not fail. So when I call your software, it should respond. It should always respond. And if it cannot respond accurately, consistently, fast, and in a lean manner, and it fails, it has to fail gracefully. [00:01:00]

Aleks: The views and opinions expressed in this video are solely those of the interviewee and do not represent or reflect the views, policies, or positions of any organization, institution, or employer with which they're affiliated.  The interviewee is speaking in their personal and professional capacity.

Welcome and Guest Introduction

Aleks: Welcome my digital pathology trailblazers. Today I have an amazing guest, Hamid Tizhoosh. You probably have seen him on podcasts, on videos recently. Hamid, I came across your lecture from the API Conference, the Association for Pathology Informatics, this year, or I don't know, it was probably last year.

Back then the foundation models were already booming, and so I needed to learn more about this, and you had a fantastic presentation. Hamid is a professor of biomedical informatics at Mayo Clinic. Hamid, welcome to the show. I'm so happy to have you.

Hamid: Thank you very much for having me. [00:02:00]

Aleks: It's not our first conversation, but I think it's the first time you're on my podcast, right?

Hamid: Yes. 

Aleks: So I am super honored, and we always start with the guests. So let the digital pathology trailblazers know about you. Who are you, and why are you so awesome to be on my podcast? Because you are.

Hamid: That's a good question. I am a faculty member doing research. I did my education back in Germany and then started with medical imaging, radiation therapy, cancer-related stuff.

I moved from Germany to Canada and then started working on analysis of radiology images for the first probably 15 years at the University of Waterloo. Before that, I was…

Aleks: When did you switch to pathology? 

Hamid: I got sick and tired of black and white images. I wanna see color. Color has meaning, color is physical, it has a connection to the creation of the universe.

And color is beautiful.

Aleks: See, whenever somebody is gonna ask me [00:03:00] “Why is pathology better?” - I'm like, it's colorful. Why is colorful photography better? Why is the colorful TV better than black and white?

Hamid: Also, pathology is more definitive. It's more decisive, less subjective, supposed to be, because in many cases it's the gold standard. And I wanted to be attached to that certainty.

Aleks: Yeah, definitely. It's the basis for treatment. Whereas radiology exams, a lot of them lead to a pathology exam. They're more on the screening end of the spectrum.

AI in Healthcare: Challenges and Requirements

Aleks: And so today I wanna talk about the good, the bad, and the ugly of AI, including foundation models and all the new combination models, transformer models, and whatever is happening in the AI space.

Or what happened this year, in 2024, and your opinions on that. I wanna go beyond what we're seeing in press releases, what we're seeing in publications. [00:04:00] And my first question is: obviously the adoption has not increased significantly since the beginning of the year, so I wanted to start with the key barriers to widespread adoption in the real world.

Hamid: It is a very difficult question, because you have to build a little bit to get to that point. We have to start from problems. What are the problems that we wanna solve? And fundamentally, when we talk about healthcare, we wanna solve three major problems in biomedical informatics. And AI is a subset of that.

We wanna look at triaging, diagnosis, treatment, with respect to these three major ones, and you can find other things and add them. But if we optimize triaging, diagnosis, treatment, you have a lot of improvement in the healthcare. And to do that, there are requirements. Whatever solution you come up with, it doesn't matter what the name of that technology is.

You have to be accurate, you have to be consistent, you have to be fast, you have to be lean, you have to be robust. So five computer-science, [00:05:00] information-theoretical requirements that you have to fulfill. And just to allude a little bit to what you said with respect to publications and delivering on promises.

Most publications, most works, focus on one or two of these.

There are not many works that try to address all of it: accuracy, consistency, speed, being lean, not being memory hungry, and robustness. So you…

Aleks: …guys have the five parameters. In the business world, you're gonna have high quality and fast, or fast and cheap.

But anyway, there's that combination of three: you never get all three. You have to pick two. Yeah.

Hamid: I would say that's also industrial, because if you are accurate only occasionally, that doesn't help. So you have to be consistently accurate. You have to be fast, because we cannot wait days and hours for the response of the computer to come.

You have to be lean. This is [00:06:00] the single most neglected factor in academic research, especially from universities and research hospitals: solutions are provided and they are extremely memory hungry.

Aleks: So spoiler alert, the foundation models are not lean.

Hamid: No, they're not lean. And that's one of the major problems, because they violate multiple principles, among others Occam's razor and the no free lunch theorem.

So we may get to that. And they have to be robust. They have to have what we call graceful degradation. They have to be reliable. They cannot fail. And this is one of the things that is rarely tested in academic research, that things do not fail. So when I call your software, it should respond.

It should always respond. And if it cannot respond accurately, consistently, fast, and in a lean manner, and it fails, it has to fail gracefully.

Aleks: What does that mean? Fail gracefully. [00:07:00]

Hamid: Imagine in the middle of the surgery, something happens. So it has to be something that we can manage. 

Aleks: Okay. 

Hamid: And save the situation. It cannot just an airplane that crashes and then you cannot do anything about it. And when you go in individual examples, it becomes more and more clear. It's very important to keep this overall picture, the big picture, because when we go in inside individual technologies for digital pathology. You may get lost in details.

Aleks: Question, one question about this failing gracefully. So for example, an AI algorithm, let's say a specific one for showing malignancy in breast cancer: when there is an error or something, instead of giving you a result, it would give an error message. Would that be a graceful failure?

Or, give me an example of a graceful failure of an AI algorithm used for diagnostics.

Hamid: Let's not talk about really drastic cases like surgery, [00:08:00] which should be obvious. Let's say less drastic. I'm doing information retrieval, I'm searching for similar patients, which is my field.

And if I send the tissue image of my patient and say, do we have a similar case?

Robust means you always come back and tell me something: I found these three patients that are similar to your patient, and you summarize it.

If you cannot find something, you have to come back and let me know: I was not able to find something, and also tell me the reason. So we recently did a study, which has been published in reviews in biomedical engineering, and we looked at all search engines.

Some of them failed in 50% of the cases, which means we give an image, go find similar cases, and they don't come back.

Aleks: Oh. So we're just like waiting for nothing

Hamid: No, they crash.

Aleks: Okay. 

Hamid: It could be crashing for really simple reasons, but you don't come back. So that's being robust.
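The robustness Hamid describes, a system that always comes back with either results or a stated reason, can be sketched in code. This is a minimal illustration, not any real search engine's API: the `SearchResult` type, the `find_similar_cases` wrapper, and the index object's `query` method are all hypothetical names.

```python
from dataclasses import dataclass, field

@dataclass
class SearchResult:
    """Structured response that is always returned, even on failure."""
    ok: bool
    matches: list = field(default_factory=list)
    reason: str = ""

def find_similar_cases(query_image, search_index, top_k=3):
    """Graceful-failure wrapper around a (hypothetical) image search index:
    the caller always gets an answer, never a crash or a silent hang."""
    try:
        matches = search_index.query(query_image, top_k=top_k)
        if not matches:
            return SearchResult(ok=False, reason="no similar cases found in the archive")
        return SearchResult(ok=True, matches=matches)
    except Exception as exc:  # never let the physician wait for nothing
        return SearchResult(ok=False, reason=f"search failed: {exc}")
```

The design point is that a crash and an empty archive both produce a structured response with a reason, which is the difference between failing gracefully and the 50%-no-response behavior Hamid describes.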

Aleks: Okay. So this happens twice and you stop using the tool.  [00:09:00]

Hamid: Yeah. You cannot let the physician, who has 20 patients or 20 cases to work on during the day, just hang there and say, what happened? So most of the time it's being ignored. Now: the problems, triaging, diagnosis, treatment; the requirements, accuracy, consistency, speed, being lean, and robustness.

How do we do that? The maxim is we have to use evidence.

What is the evidence in medicine? Images, lab data, and historic data. Historic data is the manifestation, the quantification, of experience. Historic data is how the experience of really highly qualified, highly knowledgeable physicians and pathologists has been recorded, and we are not using it at the moment. But if we ignore that, which we don't want to do, we are left with images and lab data: blood, urine, molecular data, genetics and all that. So that's evidence.

The Importance of Evidence in AI

Hamid: If you [00:10:00] wanna develop a solution for diagnosis, and it has to be accurate, consistent, fast, lean, robust, then you have to use evidence, and you have to be able to provide evidence to back up whatever you say as a computer program. That's at the moment a major issue in AI, that we cannot back things up. There are solutions, or schemes of solutions, but we are not doing it per se.

Aleks: What can't we back up? Give me an example, easily understandable, can be from real life as well. What would be an example of not being able to back something up with evidence when you're developing AI to solve a problem?

Hamid: If I go to our pathologists sitting in the building right beside me and ask them, and we have experts for all types of sites, and I ask our expert breast pathologist, what is this? And she tells me, this is papillary carcinoma. [00:11:00] And then I ask why, and she explains it. And as an engineer, I don't understand most of it, but she can explain it.

So she can back it up. What is the backup here? Her experience, her medical knowledge, her anatomical knowledge. When we talk about computers, they have to back it up with something, because computers are not physicians.

Aleks: So does this tie into the concept of explainability of those models?

Hamid: It's more than that. Yes, explainability, but it's more than that. It is the precondition and the prerequisite for using all these things that, among others, are going on with large language models. If we cannot back it up, large language models will not make it into the practice of medicine.

And they have not made it into the practice of medicine as we speak. Right now, we use it for really very…

Aleks: consumer type of… 

Hamid: Yeah. So it's not really at the bedside of the patient, so we are not using it. So with these three things, [00:12:00] now we get into the major problems of AI.

Current State of AI in Medicine

Hamid: And AI has put many things forward, foundation models, large models, large language models and so on.

And we see a lot of problems. First of all, to start with the clinical side, I'm going to say something that sounds strange: the data is not available yet. I'm not aware of any major healthcare institution, at least in the United States, where the data is available. What does that mean? That means we have the LIS, we have PACS, we have histopathology data.

We have molecular data. They are not in one place, in an organized way, in a tabular way, such that we can go and look at the longitudinal data for a patient. They are not there.

So the data that would be necessary to do all those things that I mentioned is not available. So what are we doing right now in the AI community?

Aleks: Yeah. What are we doing? Everybody's… 

Hamid: I don't wanna insult anybody, but we [00:13:00] are playing in the sand.

Aleks: Let's talk in concepts, because I do wanna touch on concepts, and whoever is studying the literature is doing it. And that ties into the next concept, because all these things are known, but none of it is published.

That ties into the next point I wanted to talk about: these things are not rocket science. They may be new, but you mentioned principles from computer science, from AI, that are known to people who are doing this. Is this being published? I assume not, because I would see those publications in my PubMed feed, which I've been following for the last half a year.

Like weekly abstract reviews. There's nothing about any negative results. And that was my experience from over 10 years ago, when I was doing my PhD: the project yielded negative results. I was struggling so much to publish it. And that's at the very basic level. [00:14:00] It wasn't my fault that it was negative, but nobody wanted to talk about it, because who's gonna read it?

And yet we do need to communicate it somehow. So what are we doing?

Challenges with Foundation Models

Hamid: AI research in healthcare, and maybe beyond that, has four major issues right now, four major challenges. And you pointed to that: some of us look at these challenges, look at cases, go into them, and wanna publish the weak points.

There's not much willingness in the publication world to publish negative results and validation reports. And it's very difficult when platforms publish results that on the surface appear to be breakthroughs. Then all of us get excited, and when foundation models get published in histopathology, we work through weekends to see what we can get out of them. It is very serious business. [00:15:00]

We are at an institution where the maxim is, the needs of the patient come first. So if there is something that can help us do a faster, better diagnosis, better treatment, we're all very serious about it, we wanna do it. So we take it seriously, because it's published in very high-impact, very high-reputation places.

And then we test it. Then it falls apart. And then we think others may be interested in how we test it. Because, we have to confess, we test in a really tough way. We don't really give the algorithms any way to escape. We say, okay, we take the toughest cases: if I really bring it into a clinical setting and I have to use it this way, what happens?

In the last one we did, on foundation models for histopathology, the average accuracy on TCGA was 44%. We did zero-shot, which means we did not touch them. We did not fine-tune them. And I'm sure everybody who publishes a foundation model says, no, you have to fine-tune it. [00:16:00]

Aleks: But then, is that still a foundation model, right?

Hamid: Yeah. And if I wanna fine-tune, then I have many other possibilities. Who says the fine-tuned foundation model would be the best? I would have to go and also fine-tune many other smaller, leaner models that are more manageable. And we tested with detailed subtyping, not tumor-type subtyping, much more difficult.

We tested with highly imbalanced data, which is the practical, the clinical reality. So, 44% on average across foundation models, and that's an interesting observation: it didn't matter which foundation model we tested. On average, we came to the same number, 44%, which is consistent with what we call in computer science the no free lunch theorem.

Tell me about that. There is no free lunch, so if you wanna be better than others as an algorithm, you have to specialize. If you don't specialize, on average you [00:17:00] are at the same level as everybody else. You will not be the best model, you will not be the best algorithm. Which means the no free lunch theorem, in combination with Occam's razor.

Keep it small, keep it simple. That would negate the concept of the foundation model.

Aleks: Foundation model.

Hamid: Really large. They are really large, and they cannot deliver. Maybe if we at some point have the multimodal, heterogeneous, longitudinal data for millions of patients, and then we do that, maybe we break through.

At the moment, that's not the case. So if you look at this and say, okay, we cannot get there at the moment, because on average we are all the same, because we are working on general-purpose histopathology foundation models. What does that mean? That means you are…

Aleks: That is my question. I could not find it, because in the publications [00:18:00] I read, I was looking for: what does this model actually do?

We are used to the specialized ones. Okay, it shows malignancy, it segments cells, it does that. I was looking for a list, a table: what are all the things this model does? I could not find it in any of the publications.

Hamid: The fact that a model does not take any action or make any decision, that's fine.

That's a different issue, because it means it is trying to understand the data. And then you can capitalize on that understanding for diagnostic purposes, for treatment purposes, for anything you want. That's not the problem, because they are usually trained in a self-supervised manner. We don't have annotated data.

We come up with some made-up, fake task for the network to understand the data, detached from any task, including diagnosis, treatment planning, survival prediction, anything like that.

Aleks: But then I don't understand what they [00:19:00] do. They go and understand data. What is the output of the model?

Hamid:  Okay very simple thing.

I take a tissue patch, a tile, not a whole slide image, of course. I take a tissue patch and give it to the network, then rotate it 90 degrees, give it to the network again and say, look, this is the same. Of course, if you rotate the tissue image, for the pathologist it's the same, we realize nothing changes. For the computer,

It'll generate different numbers. So now we force the network to generate the same numbers if I have a tissue and I rotate it. So in order to do that, when we do it with millions and millions of tissue patches, the network will understand the texture, the pattern of the tissue, without knowing this is papillary carcinoma, or this is adenosis, or this is adenocarcinoma.

It doesn't know what that is. It just says, I see a pattern, and then you [00:20:00] rotate it. And I figure out, oh, that's the same pattern.

That's the same pattern. That's all it figures out. But in order to figure that out, it has to understand the complexity of that pattern. That is the magnificent idea of self-supervision, one of the major things the AI community has put forward.

Amazing. That we don't need now annotated data. Without self-supervision, there would be no large language models, no foundation models, no nothing, regardless of their actual applicability and usability.
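The rotation pretext task Hamid describes can be sketched numerically. This is a toy illustration, assuming a stand-in linear "encoder" instead of a deep network; in real self-supervised training, this consistency loss would be minimized over millions of patches, which is what forces the encoder to understand the tissue pattern itself.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))  # toy "network": a fixed linear embedding

def embed(patch):
    """Map an 8x8 patch to a 16-d representation (stand-in for a deep encoder)."""
    return W @ patch.reshape(-1)

def rotation_consistency_loss(patch):
    """Self-supervised objective: the embeddings of a patch and its
    90-degree rotation should match, even though the pixel arrays differ."""
    z1 = embed(patch)
    z2 = embed(np.rot90(patch))
    return float(np.mean((z1 - z2) ** 2))

# Before training, the loss is nonzero for a typical patch: the raw numbers
# change under rotation. Training drives this loss toward zero without any
# labels like "papillary carcinoma" or "adenosis" ever being used.
patch = rng.random((8, 8))
loss = rotation_consistency_loss(patch)
```

A rotation-symmetric patch (e.g., a constant one) already gives zero loss, which is exactly the invariance the pretext task asks the network to learn for arbitrary tissue patterns.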

Aleks: That is my next question. Applicability and usability.

Hamid: Yeah. Why usability? We get to the point that, okay, now you say it's general purpose, that means for the entire histopathology, so it understands brain and prostate and kidney and breast and everything.

It understands. But that, to begin with, goes against what we are doing in pathology, because in pathology everybody is specialized…

Aleks: In the US. [00:21:00] Because the US, and Canada, are among the only countries where you have such a super specific, narrow specialization of pathology, and in all the other countries

It's gen… you're a pathologist.

Hamid: Look, here's the last number I have: we get 50, 55,000 consult cases across the US and internationally, coming to the department.

Aleks: And I'm not saying that this is not how it should be, because the nuances of the disciplines basically cause the high specialization in the US, and the whole second-consult system, why you consult a specialist. I just wanted to note, and you know, that it's not the case in other countries.

Hamid: Let's take breast cancer. So 80% of them are lobular or ductal carcinoma. Most pathologists with a little bit of training can look at it and say, that's ductal, that's lobular.

If you look at the remaining 20%, [00:22:00] those are highly rare, complex cases. For that 20%, you need a highly specialized breast pathologist, and that's why we have consults. That's why the 80% can relatively easily be diagnosed in most hospitals across the planet. But then for the 20% you need specialists.

Aleks: So where do you use AI as leverage? Where? Where does it make sense?

Hamid: General purpose means you wanna do those easy cases, so you wanna do only ductal and lobular, and that's why it falls apart. When we test that with detailed subtyping of TCGA data, where you go beyond the tumor type into detailed subtyping, which is much more valuable and useful for treatment planning, then it falls apart, because of course it cannot do it; it is general purpose.

Then you have to do something. You have to fine-tune it. You have to train another classifier. Oh, okay, I thought [00:23:00] the foundation model would solve all my problems. No. So it's a philosophy now, when we talk about the major issues with AI. I'm departing a little bit, deviating a little bit, from the big picture.

But fundamentally, one thing is that you cannot build on general-purpose AI for medicine. At the moment, based on everything I have seen, that's my way of thinking: if you wanna bring AI to the bedside of patients, we have to specialize. Okay? So that means you have to have a model for breast, one for brain, one for prostate. Researchers won't like it, because if I have a general-purpose model, 5,000 people will download my model.

But if I work on prostate? Probably only 200 will download it.

Aleks: So Hamid, when you say, oh, the general one will do the easy cases for me, I'm like, okay, then it's not a help for a pathologist, [00:24:00] because pathologists can do it without clicking a mouse.

Hamid: It could be, Aleks, because imagine, in developing countries, if you talk about countries that have no pathologists…

Aleks: Yes, that's what I'm saying. It's not gonna be a help for a pathologist. It's gonna be a help for a healthcare professional who then needs to figure out how to get that to a pathologist, and then the next pathologist is gonna have to figure out, okay, do I need a specialist or can I deal with it? So that would be the triage application of this for non-pathologist healthcare systems.

Hamid: But this looks confusing. I've been trying to figure it out for myself as an individual researcher.

Aleks: Me too. 

Hamid: So we throw all that data on a gigantic model and then train it with a lot of carbon footprint, and then we wanna solve the easy problems.

So I thought AI wanted to help us solve [00:25:00] difficult problems. So that's one issue. Going back to the big picture: the major issue with AI at the moment, taking any model, especially foundation models, large language models, is that they are not addressing those five: accurate, consistent, fast, lean, robust.

They are not doing that. You can list many problems where they deviate. They barely try to address one of them, or maybe one and a half, or maybe two, but never five. And as long as we don't have those five, we cannot get to the bedside of the patient. That's one problem.

Foundation models, large AI models are failing.

Nobody likes to hear that. I don't like to hear that. I have made my… 

Aleks: No, I was hoping they're gonna succeed and they're gonna solve our lives. 

Hamid: I wanna stay enthusiastic, and we will stay enthusiastic, but that doesn't mean we have to be blind.

Aleks: Yeah. You know what, it's a tough balance, [00:26:00] I have to say, because…

Hamid: I know. I know. 

Aleks: I am always super enthusiastic about all those new technologies, and then, like you said, oh, where are the validation reports? I'm like, that's what's needed to take it anywhere close to the patient, because that's gonna be the first thing you need to do in any lab, under any type of regulatory constraints.

And then it's just talking about technology, which is nice. But the question is, when is it gonna be applicable?

Hamid: So the first problem is that we are not addressing those five conditions. The second problem is that the data is not available yet. And it sounds very strange, because most hospitals that have large amounts of data are migrating, combining them in the same place, most of the time the cloud.

That creates a lot of delay. So we have to solve that problem, and that problem is outside of the AI community: the data [00:27:00] is not available. AI researchers can complain about it; I have been complaining about it. The data is not available. If you give me better data, more data, we'll do a better job, we'll do a fantastic job.

We will see how it plays out. Most likely in the next two years we will close those gaps, and then the data will be unified somewhere so that you can operate on it. And then we will see what we can do with it. The third major issue with AI is this out-of-control enthusiasm for large language models.

Why am I saying that? I use large language models all the time.

Aleks: I use them all the time too. 

Hamid: Editing, summarization, I do that all the time. But I'm talking about medicine. Language is not evidence.

We said evidence is imaging is lab data and maybe historic data. Language is not evidence.

Language [00:28:00] is subject to variability and subjectivity. We cannot build the future of medicine based on language. That's a fundamental problem.

Aleks: So, question here. Let's take an example of language used in medicine: a pathology report. Do you say this is not objective, because the evaluation of the data that is subjective is objective?

Sorry, the other way around. The objective data, the image, is the image. It's always gonna be like that. But then you have an interpretation by a pathologist, and we all know that if we get a 0.7 concordance, we think we're fantastic, which means at least 30% of the time we're not that fantastic. So then it gets translated into subjective language, which is the report.

Is that what you're referring to? Or am I misunderstanding?

Hamid: No just, go from the other side. Look at it. 

Aleks: Okay. 

Hamid: There will be no pathology report or radiology report without the tissue, without the [00:29:00] patient. So the pathologist is looking at something. Using his, her brain with entire medical knowledge in there summarizing it in some words, which is subject to variability.

So if you wanna really capitalize on the information and knowledge in that report, you have to bridge it back to the evidence, which is the image, which is RNA sequencing, which is X-ray image, which is blood data, whatever the data is that was involved in that in, I'll give you an example from dermatology.

We were looking at skin cancer. We wanted to look at highly differentiated, poorly differentiated type of ous squamous cell carcinoma. And after a while we realize, oh, the tissue image is not the only source of information that dermatologists get. So what else he or she getting? They look among others at the medical photo that is taken from the lesion on the skin.

prior to the biopsy. This [00:30:00] is a lot of information. You are not giving me that information. How do you expect me to…

Aleks: It's the same for like glass images from…

Hamid: Whatever comes in that report is the result of several other modalities and the general medical knowledge in the head of the pathologist. So you cannot detach that report in itself and then work with that data alone. You can get some statistics, understand some experiments.

But based on that, I cannot come up with next-generation diagnosis, treatment, survival types of things.

Unless again, the bridge to the evidence is restored such that we know this is the variability, this is the evidence, and so on.

Aleks: Question. Because that's something you talked a lot about in your talk,

which I'm gonna link to, from the API: retrieval augmented generation. How does that play into the explainability, into incorporating text [00:31:00] into the multimodality? Because when I learned about it, I was like, why can't we use it? It points you to where the information is from, and I thought that was gonna be the next super hype.

It never happened. It didn't happen in 2024. Yet…

Hamid: you just read my mind because…

Aleks: I just listened to your talk...

Hamid: No, the two problems that we talked about… let's skip "the data is not available." All that is outside the AI researcher: the life of hospitals, lack of money, heterogeneity of archives, the landscape of corporations, many reasons.

So the first problem was we are not accurate. We are not fast, we are not lean, we are not robust. The second problem was language. 

Aleks: We're so bad. 

Hamid: Yeah, we're so bad. And language is not evidence. What is evidence? Imaging, lab data, which means you have to go multimodal. So at the moment, there is not a single truly multimodal system that [00:32:00] has been trained with high-quality clinical data and has been validated at multiple sites.

There is no such thing. Most likely we'll do it, and when I say "we" I mean the research community at large, most likely we will do it in the next two, three years. Okay. When that data is available, yes, I can go and get 50,000 cases and then get a little bit of the X-ray, a little bit of reports, a little bit of demographics, and do something.

This is all sandbox playing, experimentation, which is necessary. We are warming up.

But I'm talking about serious research that leads to products that can really be used in practice. There is no such thing yet.

Aleks: I'm gonna quote this one: serious research that leads to products. Because life is like a funnel.

You're gonna have so many publications, and then only a few make it. It's like drug development. I work in drug development. You have so many candidates; maybe five make it into clinical trials, [00:33:00] and one actually makes it to the shelves of the pharmacy. So: serious research that leads to products. Let's continue…


Hamid: So multimodality, which is evidence, brings in the necessary information and knowledge to compensate for lack of accuracy, consistency, reliability. Speed and being lean have to be managed through engineering; that's a design question, as is robustness. So the first two, accuracy and consistency, are a matter of evidence.

So we have to bring in multimodality. If you look at, let's say, breast cancer: you need the X-ray images, you need the MRI images, you probably have ultrasound, you have the tissue sample, you have the patient demographics, you have some genetic information. All of that goes into a system. Then you connect it to a large language model.

And now we get to the magical word that you mentioned.

RAG.

So now, even if you do multimodal large language models or language-vision [00:34:00] models, still, you cannot deploy to the bedside.

Because one key word is explainability, which is another word for source attribution: backing it up.

Aleks: Yes. 

Hamid: So how do you back it up? Because again, you can give 10 million patients to a large language model that does multimodal, understands X-ray, understands tissue, all that. But then I ask: how do you know that? You have to back it up. You have to tell me why you are saying…

Aleks: Because I saw this kind of image in radiology, and then this hypereosinophilic part of the tumor, or whatever.

Okay. So basically as if a pathologist would tell you: why do you diagnose this as a squamous cell carcinoma? Because I see this, and these are the attributes of a squamous cell carcinoma.

Hamid: But we trust a human being because, like in a Turing test, we still have nothing but the human expert [00:35:00] to judge the performance of the AI.

We don't have anything better. It is the human expert, the pathologist in our case, who tells us right, wrong, correct, incorrect. So retrieval-augmented generation, what we are seeing now emerging very slowly, is a renaissance of information retrieval.

Information retrieval in medicine was very limited and is still limited to just searching text.

You type text or you select some boxes and you search in an archive. We cannot search for X-ray images, we cannot search for tissue images, we cannot search for RNA sequencing, we cannot search for social determinants in combination with all of this, and so on. So we do not have a multimodal information retrieval system.

Now we are in a pickle, in a very tough spot, because we know large language models, with all their capabilities, cannot be used [00:36:00] unless we go multimodal, and then we have to back it up with evidence. And the evidence has to be outside of the AI, which is very interesting: AI can be backed up and explained with information outside of the AI, not the information that the AI has digested.

Because the information that is digested goes into the black box; we don't have access to it, we don't understand it, we cannot justify it. So we need an outside archive of knowledge. Reliable knowledge, which is high-quality clinical data, hopefully heterogeneous, hopefully free from bias, though bias will be there.

We have to deal with it. So now, if we wanna do that, do we have a multimodal information retrieval system? No, we don't.
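To make the idea concrete, here is a minimal sketch, not from the conversation, of retrieval with source attribution, the part RAG adds on top of a language model. The archive, case IDs, and word-overlap scoring are toy stand-ins; a real system would use a curated clinical evidence store and learned embeddings.

```python
# Minimal RAG-style retrieval with source attribution (illustrative sketch).
# The "archive" stands in for an external, curated evidence store; the
# generator is stubbed out. The point is that every answer carries
# citations back to records OUTSIDE the model.

from collections import Counter

ARCHIVE = [
    {"id": "case-001", "modality": "radiology", "text": "spiculated mass in left breast on mammogram"},
    {"id": "case-002", "modality": "pathology", "text": "invasive ductal carcinoma grade 2 on breast biopsy"},
    {"id": "case-003", "modality": "genomics",  "text": "BRCA1 variant detected in breast cancer panel"},
]

def score(query: str, text: str) -> int:
    """Crude relevance: count of shared words (a real system would use embeddings)."""
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum((q & t).values())

def retrieve(query: str, k: int = 2):
    ranked = sorted(ARCHIVE, key=lambda d: score(query, d["text"]), reverse=True)
    return ranked[:k]

def answer_with_sources(query: str):
    evidence = retrieve(query)
    # A real system would pass `evidence` to a language model; here we
    # just return the attribution, which is what makes the answer checkable.
    return {"query": query, "sources": [(e["id"], e["modality"]) for e in evidence]}

result = answer_with_sources("breast carcinoma biopsy")
print(result["sources"])  # most relevant evidence first, with its source ID
```

The design point matches the discussion: the evidence lives outside the model, so a wrong answer can be traced back to specific records and judged by a human expert.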

Aleks: No. But now it makes sense to me why you might need a general foundation model. Because if it understands stuff in the image, then that's gonna be the deliverable from the model: [00:37:00] showing me where whatever information is, without telling me what to do with that information.

Hamid: Absolutely.

Aleks: So now I start understanding

Hamid: So one and a half years ago, we had the possibility to start training a foundation model. And I realized that without retrieval, we cannot really check it. So we stopped. We stopped and we worked on developing a platform for information retrieval.

We are still working on it, and now we are truly multimodal for breast. You have six, seven different modalities, for example. And you realize: oh, we cannot be general purpose. You have to specialize. If you really wanna solve the tough problem, you have to specialize. It's a very painful decision to specialize.

Because again, if I have a general purpose model, 10,000 people will download it.

If I have one for prostate, probably only 200 will download it. It'll affect [00:38:00] my life as a researcher. So that's a problem, that's a limitation we have to deal with. But again, the patient's needs come first. It's not about me. It is about how we can bring AI to the bedside of the patient in a reliable manner.

Now, this emerging renaissance of information retrieval is everywhere. Not just in large language models; it's also in agentic AI, AI based on agents. So…

Aleks: Yes. The agent. Yes.

Hamid: Which is a field I'm not working on. But the core of that is also not possible without retrieval.

Aleks: Without retrieval, because basically you have the orchestration of data retrieval from different places, which I'm of course super excited about. And I'm like waiting for this to be available. And I'm like, where is it? How long will I have to wait?

Hamid: So not only do we find out that we don't have multimodal information [00:39:00] retrieval, which we need to be able to deploy large language models and foundation models; we also find out that AI cannot be used in a reliable way unless it is connected to good old-fashioned information retrieval. Why is that? That should make us think: what does it say about the state of artificial intelligence in general that AI needs outside help?

So now this brings us to the biggest headache I have right now, just as a researcher thinking about things.

Future Directions and Limitations of AI

Hamid: That is: we departed from the historic origins of AI when we introduced transformers.

Aleks: Yes. I didn't know that, but that was the breakthrough. 

Hamid: It was…It is the breakthrough, but it deviates and goes in a different direction [00:40:00] from the initial ambitions of AI.

The initial ambition of AI was to mimic the human brain, the way the human brain works. And the breakthrough came when we looked at cat vision and the convolutional neural network came. Even Boltzmann machines, restricted Boltzmann machines, autoencoders: in all of that you still see the traces of that initial ambition to imitate the human brain.

And we said: look, do not extract attributes and features from the data like we did prior to 2000. Give the raw data to the network. The network will figure it out. The network will extract features. Do not do the manual design we would do before deep learning.

Aleks: Yeah. The handcrafted features, defining thresholds, defining what you wanna analyze, basically.

Hamid: And the AI community was telling everybody, among [00:41:00] others myself as a young researcher, that you cannot do that, because when you manually design things, your subjectivity is in it, your knowledge is limited. Whatever you do will be limited. You cannot understand the data in its entirety. Tissue is a very complex type of data.

Okay. Then we said: don't do it, manual design is not good. Give enough data and AI will figure it out. Then, by 2017, we deviated from this in a historic departure from the initial ambitions of artificial intelligence: we introduced transformers. Transformers are highly manually designed.

And there is nothing in them; you can say, okay, this and this, but there's nothing in transformers that makes you say: oh, this is like the human brain. Transformers are fundamentally exhaustive correlation analysis. I look at the relationships between words [00:42:00] and between image parts, and then figure out: this is important, this word is important.

This part of the image is important. We call it attention. I think, and I'm sure many colleagues disagree with me, that attention is a misnomer. It's correlation, not attention; the attention we have in our visual perception is not that attention. But if you read the seminal 2017 paper on transformers, there is no reference,

there is no ambition in there that transformers want to imitate the human brain.

They wanna understand language. It's a machine to understand human language, a correlation analysis machine. Fantastic, we are using it. But it deviated from the initial historic road toward imitating the human brain, and it's now getting even worse.
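The "attention is correlation" reading can be seen directly in the 2017 formulation: the attention weights are softmax-normalized dot products between query and key vectors, i.e. pairwise similarity scores. A minimal NumPy sketch, illustrative and not code from the episode:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V.
    The 'attention' weights are nothing more than normalized
    dot-product similarities (correlations) between tokens."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # pairwise similarity of every query with every key
    # Numerically stable softmax so each row of weights sums to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # 3 tokens, dimension 4
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
print(w.sum(axis=-1))  # each row sums to 1: a weighted mixture, not perception
```

Nothing in these few lines models biological vision or perception; it is an exhaustive all-pairs similarity computation, which is the point being made above.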

Mamba is another technology that is supposed to be much better: it is faster and leaner compared [00:43:00] to transformers. We have to see how it performs; for some big cases we will see. But still, the same thing: very smart people, very knowledgeable computer scientists, sit down and manually design a solution.

This is contradictory to what the AI community was telling everybody 20 years ago. So that's a historic problem for the AI community, which means at some point transformers, Mamba, or anything else will collapse. They will find a niche to operate in, most likely for not very sensitive applications.

And then new topologies, new architectures, new types of networks have to come, if you wanna maintain the ambition of mimicking the human brain. Because this is not the human brain: somebody says, this is a state here, then I selectively pick this state, and then this is the correlation. [00:44:00]

This is manual design. We scorned that. We rejected that. We negated that 20 years ago.

Aleks: Okay. So that brings me to the classification of AI; everybody who does not know about this is afraid of AI taking over. You just said why it's not gonna take over, and you just said why we're still in narrow AI.

So: narrow, general, and whatever it is, theory of mind, or the super AI. Why we are still operating in, and conceptually stuck in, the narrow AI bracket, so to say…

Hamid: If I sum all of this up, you can see it, and we saw it in our latest validation: AI, and when I say AI I mean whatever is there, large language models, foundation models, has hit a ceiling in performance. They cannot go beyond it.

Now, if we go multimodal, we will push this ceiling a little bit higher. [00:45:00] Definitely. And it'll be better; we'll use it for many cases, there is no question about that. But that's a problem: why have we hit this ceiling? We cannot go beyond it. And we see it in all possible ways.

We saw it for search and retrieval: foundation models trained with hundreds of millions of tissue patches could not figure things out; they needed additional fine-tuning. Okay, so again, we are violating multiple computer science and information theory principles. So what does that mean?

Nothing. It is very good that we have large language models, that we have foundation models. We just have to be clear what they are good for.

Aleks: Yes.

Hamid: And where are we right now? Are we in research? Are we in clinical deployment?

Challenges in AI Specialization

Hamid: This is good for triaging, for less sensitive tasks. Maybe we can triage a lot by…

Aleks: And I think people are afraid of stating that, because [00:46:00] you're so much more honest if you can say: this is for this purpose, not for something else.

Hamid: Yeah, and people don't. That's the issue again with specialization; it also applies to AI researchers. Senior researchers have, most of the time, already specialized. Younger researchers try to stay as general as possible. They need the publications, they need to advance in the academic ranks.

But if you specialize, you will go into detail. I'll give you a very simple example. There is a misconception about image retrieval. If I give you a tissue and you go and search, most people who are not in the field think that's a very easy task.

They fail to understand the complexity of search and retrieval.

They think we can push the whole-slide image into one vector. There is an established field called vector search: any document, any modality, any data that [00:47:00] you can represent with a set of numbers that we call a vector can be searched, yes. But searching for tissue is a very complex problem.

That again, in computer science, we call it NP hard, which is a fancy word for impossible. 

Aleks: Okay? 

Hamid: It's a fancy word. So it's NP hard, non-polynomially hard. It's like if you have lost your cell phone: how difficult is it to find it? Have you lost it in this room? That's easy. Have you lost it

in this city? That's a bit more difficult. If you have lost it in the Milky Way galaxy, that's NP hard. How do I search the Milky Way galaxy? If I'm going general purpose, I wanna understand all the cell nuclei shapes and connective tissue for any body part. If I just look at the skin, I get scared by the anatomical [00:48:00] diversity of the skin. And you wanna address everything; you cannot address everything.
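The vector-search idea being pushed back on here is easy to sketch, which is exactly why people underestimate it: a brute-force search compares the query against every archived vector, so cost grows with the size of the "galaxy" you search, and a single whole-slide image decomposes into thousands of patch vectors. A toy illustration with hypothetical patch names, not the speaker's system:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "archive": each entry is a feature vector standing in for a tissue patch.
archive = {
    "patch-a": [1.0, 0.0, 0.2],
    "patch-b": [0.9, 0.1, 0.3],
    "patch-c": [0.0, 1.0, 0.0],
}

def search(query, k=1):
    # Brute force: one comparison per archived vector, O(n) per query.
    # Real systems need approximate nearest-neighbor indexes, and still
    # struggle once n reaches whole-slide-archive scale.
    ranked = sorted(archive.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

print(search([1.0, 0.05, 0.25]))
```

The few lines above work on three vectors; the hard part the conversation points at is that representing tissue faithfully and searching billions of such vectors is where the problem becomes intractable.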

Maybe we can in 50 years, I don't know. At the moment we can't; at the moment, we have to specialize. So put things in place: we know what we have, what problems they have, what they are good for. But we have not been able to deliver clinical utility, and that bothers me. Part of it is egoistic, the ego of the academic mission.

I wanna contribute, and for me as a computer scientist it is more pressing to contribute to patient wellbeing, because I don't even have a direct connection to the patient. I don't see the patient. I'm trying to help the pathologist; the pathologist and oncologist will help the patient. So for me it is more pressing to deliver something that the pathologist can really use.

And when I see our pathologists, how knowledgeable they are, how good they are, how fast they work... I have been shadowing our pathologists, watching how they work. Oh my god, they are good. [00:49:00] They're fast. If AI wants to beat pathologists, that will not happen in the foreseeable future, not even after I retire. I don't think so.

So they are good. They're really good. They understand anatomic pathology, they understand the diseases. They are good, they're responsible; it is very tough to beat them with some computerized method. So we have to be serious about this. Let me give you another, rather marginal but important, example: the metric we use to measure the first quality, accuracy.

If you choose area under the curve, that's useless. 

Aleks: That's in every paper…

Hamid: That's useless. If we wanna test the accuracy like a Turing test, we have to use the micro average of the F1 score. And you are lucky if you get to 60% at the moment.

You can get to 60%. The data is imbalanced, and then we trick it [00:50:00] and say: oh, okay, let me use the weighted one, and then I push it up.

I get to 82%, and I can publish.

Aleks: It's a science, like, how to pick the right test so that your results land in the 90% range. And you can do it.
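The point about metric choice can be checked directly: for single-label multiclass predictions, micro-averaged F1 pools every decision and equals plain accuracy, while other averaging choices can report a very different number on the same predictions. A toy illustration in pure Python with hypothetical labels, not data from the episode:

```python
# Hypothetical predictions on an imbalanced two-class problem
# (9 benign cases, 1 malignant), from a model that always says "benign".
y_true = ["benign"] * 9 + ["malignant"]
y_pred = ["benign"] * 10

def f1_per_class(y_true, y_pred, cls):
    """F1 for one class from its true/false positives and false negatives."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

classes = sorted(set(y_true))
per_class = {c: f1_per_class(y_true, y_pred, c) for c in classes}

# Micro-averaged F1 pools all decisions; for single-label multiclass
# predictions it equals plain accuracy.
micro_f1 = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
# Macro-averaged F1 gives every class equal weight, exposing the
# completely missed rare class.
macro_f1 = sum(per_class.values()) / len(per_class)

print(f"micro F1 = {micro_f1:.3f}, macro F1 = {macro_f1:.3f}")
```

Same predictions, same data: one averaging choice reports 0.9, the other under 0.5. That spread is exactly the room a paper has to "pick the right test".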

Academic Honesty and Integrity

Hamid: Now, the problem is... Academic honesty and integrity is an ongoing issue for all of us. And there are two ways something can be academically dishonest.

One is that you manufacture data, maybe by selection, not actively. The other is that you omit something.

So for example, if I have a method that is not consistent and I only report the accuracy, but I don't say that it's inconsistent, I'm omitting a fact.

If I don't say that my method takes two hours instead of 20 seconds, that's omitting a fact.

That's academic dishonesty. You cannot deliver products for healthcare if you do research [00:51:00] like this. So that's something for all of us, and especially with AI, such a sensitive, emerging technology, it is even more pressing that we be serious about these metrics and the way that we do things.

Aleks: So yeah, dishonesty. Let me tell you, I realized how much easier it is to be dishonest than honest when there is no guarantee that you're gonna have results from your research. It's a funny system, because, like I told you, I tried to publish the results from my PhD, and the moment I realized, oh, I could have fantastic results, was when I was pipetting my PCR stuff.

And I'm like: how about I pipette my positive control into where my bands should be? Of course I didn't do it, and then it was difficult for me to publish. But I didn't do it because I thought: okay, if I actually did it, how much more difficult would it be for me to later confirm these results? It's not something that people don't do, though. How many of those [00:52:00] high-impact-factor Nature publications from, I don't know, 10 or 20 years ago have been disproven? It's a concern. Do you have thoughts on how to prevent it, other than appealing to people's honesty?

Hamid: That's a problem for all of us. Basically, in academia we are living under this Damocles sword of publish or perish, and publication is everything. Publication is the way to promotion, to a higher salary, to fame, which is most likely the most appealing aspect of it for the ego of academicians and researchers.

Aleks: It sure is not the money, that I know.

Hamid: No, it's not the money. But again, it's a self-realization that drives many of us into research: to take your ideas and create. But the system is such... The peer review system is working; science is a self-regulating system. Long term, it works and things get filtered out. But short term, [00:53:00] we suffer from a lot of imperfections in peer review, which is us: we are writers, authors, and reviewers at the same time.

And that drive, that push, corrupts us in one way or another at some point, to different degrees. We may just be a little bit more lenient, do this and that, just to get things published, because we have to publish. And that's where omitting something, or not reporting this and that, becomes a way to push your papers forward. Of course, nowadays you have to do more. You have to have a really large amount of data. You have to have beautiful, appealing visuals and diagrams and images, which get associated with quality. It's very difficult right now to get [00:54:00] real expert reviews.

I don't know... When I see journals publishing medical, pathology-related stuff, and it's a very niche field where everybody knows everybody, I ask around: did you receive this paper for review? Nobody has reviewed that paper. So who is reviewing those papers? It seems when I send stuff and I don't want Alex to read my paper, I just exclude Alex. The worst thing for journals and publishers is when editorships are paid positions.

When professors become editors-in-chief, their entire academic reputation is tied to the quality and integrity of the publication. But when you are in a paid position, that's a different story. I cannot judge what that means exactly, but it becomes more of a business [00:55:00] than an academic undertaking, and I see many journals like that. And that's another point of the corruption.

I'm using the word corruption loosely, so people don't freak out: not being fully within academic honesty. This applies to all of us: administrators, researchers, professors, publishers. All of us. I'm not taking myself out; it applies to all of us. And one of the recent trends is that you send papers, and within 24 hours the editor declines to send your paper out for review.

Publication Bias and Institutional Favoritism

Hamid: And of course there is a lot of institutional favoritism, a lot of bias there. That's where all of us have been sleeping; we don't talk about diversity when it comes to that. A very small number of institutions are the ones publishing [00:56:00] most in high-impact places. One might say, wow, because they have a lot of talent, and there is no doubt that some institutions may have more talent than others.

There is also no doubt that there is institutional bias among publishers. And if a paper comes from University X in a developing country, it is much more likely to not even get reviewed, compared to one from a highly reputed research institution or university in Europe or the US. And we can't do much about it.

All of us suffer from this, and all of us do it. And that's also driving us to violate academic honesty, because we wanna be part of that big league. That's when I see: oh, I'm coming from an institution that is basically an underdog; we cannot publish as easily as this one or that one.

So we are going through a huge transition with AI and digital pathology, [00:57:00] and I wanna be part of it. So that makes us, here and there, just close our eyes and say: okay, that's okay, let's do this, let's go this way. Which also contributes to the fact that many papers are being published in AI, in medicine, and in pathology that don't amount to anything.

On the surface it looks like a fantastic paper. Then you go into it and you see it has problems left and right; it is not really solving any issue.

Aleks: Okay?

Hamid: And that's why we don't have enough innovation at the moment. We have become simply users of the technologies that are mainly being put forward by big corporations.

So from transformers to TensorFlow to everything else, all pieces of technology: we are not innovating anymore in academia. There are exceptions here and there. The innovation started in academia, with Geoff Hinton and Bengio and LeCun and everybody, [00:58:00] but then it migrated to the bigger corporations, because they have more resources, easier access to talent, and all that.

It is a point of concern, because when we become users only, then we will be depending on those corporations. I'm not saying it's something bad, but I'm worried that innovative power is getting concentrated in just one place. That's the issue.

Aleks: So most people are gonna be users, and part of this exercise is: okay, we want those people who are not contributing to the innovation, to the development, to actually benefit from the technology.

And on one hand I'm thinking: okay, if a big corporation gets this technology, they have a better chance to actually bring it to the bench, bring it to the patient, or bring it, [00:59:00] in normal life, to consumers. Yeah. But that's like a catch-22, I guess.

Hamid: That's correct. That's right.

And I have done a lot of work with companies small ones, medium ones, relatively big ones.

Aleks: Do you have any companies, do you have any spin-offs or things?

Hamid: No, I don't have any. For four years now, I haven't had any industrial contacts. My hospital is much more sensitive to that; I'm not doing any consulting, nothing at the moment, for four years. In one aspect it is emancipating, but in another aspect it is restrictive, so you have to figure it out.

Aleks: It's a choice you make. You either…

Hamid: It is, it's a choice you make. And companies are actually less prone to corruption, because they are driven by profit and they have to deliver.

I had a case where we got back to the authors of a paper and asked: what about this and this? And they said: oh, [01:00:00] we have moved on, we don't work on that subject anymore.

Aleks: Okay. 

Hamid: You published it barely a year ago; why are you not working on it? Again, a company cannot say that so easily.

They have to stick with it, they have to provide service, they have to make sure that their products are safe. We are less subject to responsibility for what we publish. We publish and go. If you ask me about a paper that I published five years ago... I don't know, we just published it…

Aleks: Yeah, it's like a project based work.

You work on a project, you have a certain amount of money, a grant, to put towards it, and then you move on. And…

Hamid: And this has nothing to do with AI and pathology specifically, but it is now being magnified. We are going through a historic transition, and it's multiple things: post-pandemic effects, digitization of pathology, AI.

So at least three major events are intersecting, and we are in the middle of it, and everybody wants to contribute, [01:01:00] which is fantastic, which is great. But then I always get scared when people who barely started working in the field two or three years ago are called experts.

Okay. I don't even know whether we have such a thing as an expert in some fields of AI. Foundation models were invented barely two, three years ago. Who is an expert in foundation models?

I don't think so. So it's worrisome that we use these words loosely, and then…

Aleks: I know where it comes from, because it's so cutting edge that the moment you learn about it enough to be able to explain, more or less, what it is about to somebody else, they consider you an expert. Because it's new.

Nobody knows, especially people who are not in the field, who are not working on it. And to me, this type of expertise is attributed too much to people who are just talking about it, because it was so recently developed. On the other hand, I am a proponent, [01:02:00] especially in digital pathology, an interdisciplinary field, of everybody on the team having a working knowledge of the other contributors' areas.

And I believe this is important, because if I have a working knowledge of, okay, the basic concepts of computer vision, what segmentation is, what classification is, all these things, I know how to apply it to my area of expertise and who to go to for this application. That goes for IT, for computer scientists, for quality assurance, for everybody who's involved in this group.

And I think within the group there is an understanding: okay, this is working knowledge. But if you speak to people about pathology and they didn't know your background, would somebody say you are a pathologist? They would, right? Because…

Hamid: I had to correct Google. [01:03:00] They listed me on their site with "pathologist" written above my picture, and I had to contact them, with a lot of trouble, and say: look, I'm a computer scientist, I'm not a pathologist.

Aleks: Exactly. But yet you have so much more knowledge of pathology than anybody else outside of the digital pathology circle.

And I know this happened to Anant Madabhushi as well: I didn't know whether he was a pathologist or not before I talked to him on the podcast. I think that's part of this, well, confusion. But what you're referring to is the cutting-edge thing that people just barely started developing.

So not even the developers are experts in this technology.

Hamid: There is one thing that is very encouraging, very good, and at the same time very dangerous.

Decentralized AI in Medicine

Hamid: And that's the decentralized approach to AI, the democratization of AI, so to speak. Everybody is doing AI. You can just go and download some code and some data and use some [01:04:00] publicly available GPU power, and do it as a student, as a trainee.

Many hospitals are following this scheme: physicians hire some postdoc, and then they start doing AI. So this is a decentralized approach to AI. Not that the hospital says: if you wanna do AI, you have to go to this section, this department, and you have to talk to them. No, let everybody do it. Great.

Fantastic. That's because a big part of the technology has been made accessible: the inventors did not file patents, and the corporations have graciously made it available. The problem is this: the decentralized approach, within society, within an academic institution, and within hospitals, which is important to us, works only for very small projects.

If you wanna change the face of medicine, if you wanna do big things, if you wanna bring impactful projects [01:05:00] that change the way the clinical workflow is established, you cannot do it decentralized. You have to find the right people. For the decentralized approach, you can get an intern who got acquainted with AI five months ago.

People are young, they are smart, they figure it out. Everything is public: there are good YouTube videos, there are tutorials, there's Coursera, there's this and that. They figure it out, they do it. But a big project does not only contain AI; you can't just grab a very smart young soul and say he or she will do it.

It contains a lot more. That's where you need expertise, you need a track record, and so on. That's something where, again, the corporations do a better job than hospitals and universities: when it comes to really sensitive things, they emphasize and magnify track record and competency, and push [01:06:00] the decentralized approach aside.

They say: no, we have to focus, we have to bring in competency, we have to do this and that. So that's a point of concern, and people have to distinguish. If every pathologist does AI, that's fantastic, but that's for small projects. If you wanna do big projects, really bring products and services that will affect all of us,

you cannot do that decentralized. You have to select some people, put them in place, give them resources, and then they can do it, because they have all the required competency and knowledge for that project.

Aleks: So, a question here: what do you think about the deployment of innovation? Because what I'm frustrated with is how difficult it is to propagate this change.

The example I give is: okay, if there is a company or healthcare system or whatever institution that grew by acquisition, you can pick a site, a location [01:07:00] that's gonna test and implement, and then you can maybe deploy and iterate fast. What about something that's centralized? I'm frustrated.

So we started talking about retrieval-augmented generation, and you said that there is this renaissance of information retrieval, and how difficult it is to implement. I don't see it being implemented.

Hamid: No, because again, two things. The data is not available; it's in the silos of hospitals and healthcare providers.

They don't have their ducks in a row. Okay, nobody does, to my knowledge. Nobody has the entirety of recent data, not even, let's say, the past five years, all of it available somewhere centrally accessible such that you can use it.

Because you cannot bring the computational power to the data; the computational power, the GPU and the cloud, is already there, elsewhere, so you have to bring the data to the computation. [01:08:00] That's why at the moment it is going very slowly, because we are doing the opposite.

The wisdom was always: you cannot shuffle the data around.

You have to bring the computer to the data. But now we realize: my data is in the LIS and the PACS and this and that, and I have to bring it to the cloud, to the GPU power. That's what has slowed everybody down in terms of real, big, impactful projects. I'm not talking about sandbox projects, where I take 10,000, 50,000, a hundred thousand, even a million cases and do this and that.

So the data is not available, and even if it were available within the silos of hospitals, still nationally or even globally we do not have it available. That information retrieval will change everything, not just for us in the United States, but across the globe, if it is diverse. So if I can [01:09:00] tap into the African, the European, the Chinese databases, or they contribute to a center that is replicated across Europe and Africa and so on.

So as long as that doesn't happen, which is a matter for the next decade, we will not do big things.

Aleks: So what is your thought on the decentralized approaches, federated learning and all its cousins, where the concept is to actually bring the computer to the data? Would it be possible at the moment?

Would it be possible just like even for one institution and would this be helpful for RAG or do we need repositories?

Hamid: I put a lot of hope into that, and back then we worked on it. We have a major one with proxy federated learning that we published recently, in collaboration with multiple other research groups.

The [01:10:00] problem is, we hoped that it would remove a lot of problems with patient privacy, the whole "I don't wanna share the data" and so on. But it seems hospitals are not interested in that. That's my sense when I go across the nation and talk to everybody: hospitals are not really interested in doing that.

One pretext is that we have to put a lot of infrastructure in place to be able to do that, because you still need secure, automated connections and so on, no matter what technology you use, all the different types of federated learning. But I think the hesitancy is more that at the moment the hospitals, the CEOs and managers and people in charge of the finances, have not figured out yet what will happen with the value of the data.

That's one thing. And what will happen with the traceability of patient information, even if it is only embedded within a model. I think [01:11:00] the first one is more important for most hospitals, to be honest. Yeah. They know the data is gold and they just don't wanna share their gold yet. And nobody would confess that; they would say no,

this is not the point, the point is patient privacy, we don't know, and then we need infrastructure. But it's also, this is an El Dorado. This is the largest El Dorado at the beginning of the 21st century. You don't wanna lose that. You wanna make sure that you capitalize on the data.
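For context on what federated learning actually moves between sites: only model parameters, never the slides or records themselves. Here is a minimal, purely illustrative sketch of federated averaging over two hypothetical hospitals; all names and numbers are invented for illustration:

```python
# Minimal sketch of federated averaging (FedAvg): each "hospital" fits a
# local model on its own private data; only the fitted weights leave the
# site and are averaged centrally. Toy 1-D linear regression, made-up data.

def local_fit(points, w=0.0, lr=0.01, epochs=50):
    """Gradient descent on y = w * x for one site's private data."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in points) / len(points)
        w -= lr * grad
    return w

def federated_average(site_weights, site_sizes):
    """Weight each site's model by its number of samples."""
    total = sum(site_sizes)
    return sum(w * n for w, n in zip(site_weights, site_sizes)) / total

# Two hospitals with private data drawn from roughly y = 2x; the raw
# points never leave the site, only the fitted weight does.
hospital_a = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]
hospital_b = [(1.0, 2.1), (4.0, 8.0)]

w_a = local_fit(hospital_a)
w_b = local_fit(hospital_b)
w_global = federated_average([w_a, w_b], [len(hospital_a), len(hospital_b)])
print(round(w_global, 1))  # 2.0, close to the true slope
```

In a real deployment, the secure automated connections Hamid mentions are what would carry these weight updates, which is exactly the infrastructure burden hospitals cite.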

Aleks: Definitely. So speaking of that, you mentioned diversity.

So if we had repositories, if we had databases from different places, we would get access to diverse data, and that would then translate into better performance of the algorithms, better performance of the models.

Data Bias and Healthcare Disparities

Aleks: Let's talk about the bias in the data that's going into training those models. Because we just said, or you just said, that most publications come [01:12:00] from highly respected institutions, which happen to be big institutions with a lot of patients, but these are only the patients from those institutions. Thoughts on that? What does it mean? How bad is it that it's biased? Because I have mixed feelings about it.

Okay, if you want this to be generalizable, and that ties into what we said, because if you only have a biased dataset, it's not generalizable. But do you need it to be generalizable? Then just say it's for this particular dataset. Is that a problem?

Hamid: It would be a problem in many cases, because that would create disparity in the healthcare delivered, so that if a…

Aleks: We already have that. It would propagate it and encourage it.

Hamid: We have it, but we don't want to encourage it. If the government does its job, if the administration does its job, it should not be encouraged that we create more disparity now that we have a cutting-edge technology. I put the ideological, political side of it aside, so that [01:13:00] I don't have to deal with that as a scientist.

What we need is diversity: biological diversity, anatomical diversity.

We need that. And the point is, if I get 10 hospitals to work with each other, the general bias will be reduced, of course. But the bias that we may be interested in socially and politically, let's say societal bias, racial bias, and so on, which has been prevalent in society, that will still be in the data.

AI may be racist because the data that comes in is racist, and the data that comes in is racist because we are racist. So there is no way around it. We cannot wash ourselves clean and say, no, I'm not racist, everybody else is racist. So this is a problem, and it is too much to ask from the AI community to take care of a human society problem.

Racism, discrimination is a real problem, and for some groups, for some diseases, if you take for example prostate [01:14:00] cancer in Caucasians versus African Americans, we have disparity, historically speaking, in clinical trials, in drug development, all that. As the AI community, we cannot compensate for centuries of discrimination and racism that are prevalent in human society.

We can point to it. We can try to do something, but nobody has the moral high ground. So we cannot be better than human society. AI cannot be better than human society. AI will be as good, or as bad, as human society.

Aleks: Question here: let's say we have solutions specific to a big institution's societal, demographic profile, but we don't wanna use them and label them as just for that population, because we'd propagate disparity in access to care; [01:15:00] but then we deprive those patients of something that they maybe would benefit from.

So it's the other side of the coin; you cannot win.

Hamid: Yeah, and that's why, most likely, foundation models are not a good solution: they are big, they are clumsy, they need resources. And the mass of healthcare providers, clinics, small clinics, community hospitals, all of them will most likely be excluded.

They do not have the resources of big hospitals. They cannot license big technology, they don't have the team to maintain it, and so on. So we need a more pervasive type of technology. We need commercialization of technology, such that companies, in order to make sales, make things cheaper. We cannot do it.

As a researcher, I cannot do it. I can put my code on GitHub and say, okay, if you wanna play with it, download it. But that does not solve the problems that we have in medicine. At the end of the day, somebody has to make it a commercial-grade solution, a device, software, [01:16:00] and then, driven by the free-market economy, be forced to provide high quality at a low price, the way that has worked for the past millennia anyway.

I'm not worried about that side. It'll happen. It'll happen, but… 

Aleks: It has already happened for so many things, right?

Hamid: It has happened, but not at the level that we expect. So AI has, or we have, as AI researchers, as AI users, we have promised a lot and we have not delivered much.

Future of AI in Medicine

Hamid: So that's a little bit where we are: maybe we are here at the top, and then we have to come down and approach the axis of reality at some point, hopefully within the next two years, without creating a third or fourth AI winter.

I don't know. Hopefully not.

Aleks: So I see one thing that you said: it has to go into devices. We have apps on the phone; they have to be lean, they have to be fast. Putting stuff in medical devices, like [01:17:00] all those algorithms, that needs to be super nimble.

And the other thing that I've seen, like a parallel discussion that's still going on, is: okay, the underserved institutions, underserved areas, underserved countries do not have access to whole slide imaging technology. And for a long time it was, oh, then we're excluding them from contributing. Now what I'm seeing is that the AI work, the image analysis work that was done on whole slide images, can be deployed on static images.

So it went: okay, it was developed by those who had resources, on a technology that may not be deployable or scalable for places and institutions without resources. But because of this work, now you can actually use it on static images, and taking static images can be done in any place where you have a microscope and a camera, a microscope and a phone.

Hamid: Many hospitals get second-opinion requests, and [01:18:00] usually right now the glass slides are being packed and, seriously, sent physically on trains and cars and airplanes for a second opinion. Many hospitals, the bigger, larger ones, are thinking that when those glass slides arrive for consultation, they digitize them and then distribute them across the hospital for the second opinion. It's conceivable that at some point we will have digitization centers, or regional ones, so that you can just send slides there and they digitize and submit them for you. That could be a new branch of technology and business.

Aleks: And obviously a lot of people are already thinking about it, but I know a couple of initiatives. One of them is the Digital Diagnostics Foundation, started by Matt Levitt. I don't know if you've heard of them. They are starting central histology labs, where institutions without access to digitization, [01:19:00] even to histopathology labs, can send things and have them distributed. But there was a question I was asking myself: okay, if I had a procedure done, my tissue taken out, I would like to have my digital slides.

At the moment, the way to request it is: I need to get my glass slides. And I have a microscope, so that's fine, and I'm a pathologist, I can look at them; I have pathologist friends whom I can consult. But for a person who is outside of the medical field, this is useless. In radiology you get a CD or whatever.

The CD is already outdated, but you get your images. This is part of the package of your data, and that ties into the data discussion we had before. You get the report, but the report is an interpretation. If you wanna, as a patient, be in charge of your treatment, or, I don't know, request a second opinion on your own,

the way to get those images is basically impossible [01:20:00] in pathology.

Hamid: If you wanna do it really fast, it has to be some sort of web service, a secured web service, where you upload your whole slide image or even patches, selected patches. There are many research groups, and pathologists share information on social media; they usually just upload a mosaic of multiple patches.

Aleks: Diagnostic patches. Yeah. 

Hamid: You think so? That's what the practice has shown when pathologists want to consult each other. The easiest way, the fastest way, the most accessible way would be to let pathologists anywhere on the planet upload those patches. Most likely we will not do WSI uploading, because it is not necessary.

Aleks: It's not necessary. 

Hamid: You look at it and you say, this region, I don't know what that is. So you don't need the entire whole slide image, most of which could be, I don't know, just fat.
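The patch-selection idea can be sketched very simply: before sharing, keep only the tiles that contain enough tissue, since most of a whole slide image is background or fat. The following is a toy stand-in; real pipelines use proper tissue-segmentation masks, not a single intensity threshold, and all values here are invented:

```python
# Sketch: keep only patches worth sharing for consultation. On H&E slides
# the background is near-white, so a crude proxy for "contains tissue" is
# the fraction of non-white pixels. Patches here are tiny grids of 8-bit
# grayscale values, standing in for real image tiles.

def tissue_fraction(patch, white_threshold=220):
    """Fraction of pixels darker than the background threshold."""
    pixels = [p for row in patch for p in row]
    return sum(1 for p in pixels if p < white_threshold) / len(pixels)

def select_patches(patches, min_tissue=0.5):
    """Return indices of patches with at least min_tissue tissue fraction."""
    return [i for i, p in enumerate(patches) if tissue_fraction(p) >= min_tissue]

mostly_background = [[250, 248], [245, 90]]   # 25% tissue: mostly glass/fat
dense_tissue      = [[120, 80],  [95, 140]]   # 100% tissue

kept = select_patches([mostly_background, dense_tissue])
print(kept)  # [1]: only the dense-tissue patch would be uploaded
```

The same filter, run over thousands of tiles, is why a handful of diagnostic patches can stand in for a multi-gigabyte WSI in a consultation workflow.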

Aleks: But not only for consultation, also for patient records. You get your labs from any checkup, you get [01:21:00] your images from a radiology exam; why would you not get the patches from a pathology exam?

Hamid: Yeah. We can deploy lightweight technologies that you download and have on your desktop and can use, but this type of technology, the conversational AI, the large language models with retrieval augmented generation, has to be centralized. That database, that knowledge base, should be somewhere that is easily accessible, manageable, maintainable, and so on.

I don't see any way around it. So those disruptive, revolutionary changes in medicine in general, and histopathology in particular, have to be cloud-based. They are somewhere, and then you access them. So instead of sending slides, you just log in, maybe just for specific hospitals.

If they are specialized, you just log into the hospital's platform, upload your images, and request a second opinion. [01:22:00] That would be one model. Of course, in a perfect world, we don't want every hospital to do that separately.

Aleks: And we have that right now with all the electronic health record systems, and we still do faxes. That is the cutest thing ever.

Hamid: Yeah. But that would also create additional disparities among hospitals: the ones that have a lot of data at the moment will be the gigantic monsters, and they will dominate everybody. That's not good either. So we want everybody to…

Aleks: So in the future, or the nearest future, what do you see empowering the small institutions?

One end of the spectrum is: okay, the big institutions have resources, but they move slow. The other end of the spectrum, and that's life: in a smaller institution you can iterate faster, so you can actually deploy stuff, and that's more cutting edge. [01:23:00] And you're gonna have everything in between.

What do you see driving the adoption in the nearest future?

Hamid: For the smaller ones, their data is gold. Smaller hospitals and clinics clearly cannot play in the major AI league, because it's difficult for them to attract AI talent. The bigger hospitals will do that; they grab everybody and say, okay, you come to me, I give you the possibility to do your research.

But for the smaller clinics, the main point is their data. If they have highly specialized data, if they have a specific type of population, a specific demographic, that's gold. If they organize their data and make it accessible wherever possible, and safe from a patient-privacy standpoint,

[01:24:00] that would be…

Aleks: and capitalize on that… 

Hamid: …on revenue. Yes, they can. 

Aleks: I don't think there is awareness of that: that just organizing your highly specialized data, even if it's just a specific demographic, in a way that lets you say to the world, hey, you wanna include this population? Here is how you can do it, and here's how you can help our patients as well, and this is how we're gonna capitalize on it. So that's a super important point.

Hamid: Things will change again. The fascination with large language models, we are hitting the cap, we are hitting the ceiling. We know it's not reliable.

We know that we cannot use it for clinical utility. It is loose, it is up in the air, it is too subjective, too flimsy. You cannot do that. So we need to connect it to evidence; there is no way around it, we have to connect it to evidence. But the evidence is not available. It's not organized, it's not retrievable. That's something we are all [01:25:00] just closing our eyes to at the moment.

For researchers it is easy: you find something, you do some experiments, you publish papers, that's great. But for hospitals, for healthcare providers, for the people who really have the possibility, for policy makers, for the administration, we have to look into that. And some initiative has to come from the government to enable some hospitals to gather and join forces, maybe create a national initiative.

Okay. To create a source. If the US does it, everybody else will follow, definitely. So a national archive of histopathology data, let's say. We are still looking at a single modality, from the bias perspective of histopathology, but most likely we have to bring in radiology and other lab data and patient demographics and social determinants and all that.

We have to bring all of it into one place. [01:26:00] This is something that is very unlikely to happen without a framework supported by the government, by the taxpayer, basically.

Aleks: Question: why did TCGA not become that thing? Why didn't it grow? It has 40,000 cases, 40,000 different cases with different modalities, but that's it.


Hamid: One thing: the software was not there. The technology was not there yet, most likely. I don't know, I don't remember; I have read a lot about TCGA, but I don't know, and many people I know have been involved at the beginning of the formation of TCGA. I wasn't even involved in histopathology when it was formed.

Aleks: Oh really? 

Hamid: And yeah,

Aleks: for me it has been there forever as well.

Hamid: It's a big question. It is a big question. Most likely the software was not there. AI was not there. Digitization was not as cheap and accessible as it is today.

All those factors. But it was not created with the vision [01:27:00] of making it an organized, indexed, searchable, easily accessible archive, and it still isn't. You have to painfully search and find something and then just download it, but you cannot really perform anything on the data itself, which is a static archive. It's not a dynamic archive. We need a dynamic archive where the computational power is right there.

Maybe this is the main thing that is missing from TCGA. Maybe TCGA 2.0 should be on a cloud with GPU power right beside it. You may wanna pay to use it, but you can do it.

Aleks: So another thing that I'm seeing: there are like two extremes. When I get my publication alerts about digital pathology and AI every week, there are two extremes.

One extreme is what we were talking about: the foundation models and the heavy computation, transformer-based stuff. The other is smaller journals [01:28:00] and groups that I have not heard about, from different countries, publishing papers about, oh, this pathology group used this particular algorithm.

And it's stuff that's basic, but I see it coming up. Like, for example, oh, they did Ki-67, they did their internal validation, and now they are okay with using AI. The same for PD-L1, ER/PR, things that have been there since the beginning of pathology image analysis. I see a resurgence of this.

And to me it means, okay, there is increased adoption, not only by the major players. It's a good sign to me. It's not the cutting-edge stuff, but I think, okay, we're actually moving forward with the things that can be applicable. And a hurdle there is: [01:29:00] when you are in your workflow as a pathologist and you wanna use them, they have to be easily accessible. Currently there are software issues, like, oh, you have to change the software.

Then it disrupts your workflow, even if it's a digital workflow. But I see it as a good sign. What good signs do you see with leveraging what we already have? Because we have a lot, and the deployment is very piecemeal, but I see it happening, and I'm positive when I see it. Specifically when you read the paper and it's not rocket science, but it's always heartwarming to me when I see, oh, this pathology group tried this software, oh, and they bought it from this company that has this product.

And they decided, okay, it's good; they did enough validation to actually use it for patient decision-making, or decision support. What are your thoughts on that?

Hamid: So definitely the decentralized approach, which is also pervasive within the society and the research community, has contributed to that. And it's very [01:30:00] encouraging.

I love it. It is not like that anymore, that only five researchers at five major hospitals are the ones setting the tone. That would be the end of research. Everybody's contributing now, which is great. This is the way that science works. And then, yes, at the beginning maybe people just read Hamid because Hamid is coming from a major institution.

After two, three years, they realize, oh, a small university in a remote village in Japan has a fantastic idea. That's the way science has worked in the past 300 years, after the Enlightenment. So it is very encouraging. I love it. Things are happening. Again, there are a lot of unchecked, unverified, sketchy results from everybody, not just from the small ones; from the medium ones, from the big ones.

Everybody's doing that. But that's fine. Things will get filtered out, and then we will get the pieces of gold remaining in [01:31:00] our sieve. That's fine. This will go ahead. And a lot of good things are happening at a very small scale. As we speak, we are using large language models. Our physicians are using them,

just to summarize reports. So a new patient comes and has 40 pages of reports. Either you have to read them, or you can dump them into a large language model: can you summarize this? Or you can ask questions. Not much risk. But again, we have to find bits and pieces of technology that are safe at this level, and we can use them.

So clearly it's not clinical utility; it's very remotely assistive, just helping the physician do his or her job. A lot of that will happen. A lot of small pieces of technology are getting integrated at a small scale, but a very [01:32:00] large number of people are doing that. That's great.
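The summarization workflow described here usually needs one practical step that isn't spelled out: a 40-page report rarely fits a model's context window, so it gets split into chunks, each chunk is summarized, and the partial summaries are combined. A hypothetical sketch of that chunking scaffold, with the model call stubbed out (a real system would call an actual approved model endpoint, not this placeholder):

```python
# Map-reduce summarization sketch for long clinical reports: split into
# overlapping chunks that fit a context window, summarize each, then
# summarize the summaries. `call_llm` is a hypothetical stand-in, NOT a
# real API; it only echoes the prompt length for illustration.

def chunk_text(text, max_chars=1000, overlap=100):
    """Split text into overlapping chunks so sentences aren't lost at edges."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks

def call_llm(prompt):
    # Hypothetical stub standing in for a real model call.
    return f"[summary of {len(prompt)} chars]"

def summarize_report(report):
    partials = [call_llm("Summarize: " + c) for c in chunk_text(report)]
    if len(partials) == 1:
        return partials[0]
    return call_llm("Combine these summaries: " + " ".join(partials))

report = "Patient history. " * 300   # ~5,100 characters, stands in for 40 pages
print(summarize_report(report))
```

The overlap between chunks is the design choice worth noting: without it, a finding that straddles a chunk boundary can silently disappear from both partial summaries.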

But the thing is, and maybe that's my problem, I keep thinking of the big picture, and I ask, how can we change medicine? And that's not gonna happen with decentralized AI, with decentralized AI research. We need a concerted, joint, organized effort supported by big money.

So multiple hospitals, ideally multiple countries, initiate it, and then you can start working on something. I see some attempts in Europe. Europeans are sometimes faster than others, not always, but sometimes; Americans are usually behind, but when they get started, they go fast, and they go really fast, because the resources are there.

Hopefully the next two years is the time that something like that should happen: creating large initiatives. And then those small [01:33:00] contributions, all of them, will start flowing toward that funnel and bundle together, much more effective than individually, where it takes a long time to be appreciated and used.

Patient Advocacy and Data Digitization

Aleks: Do you think any of this can be driven by the patient, in any way, shape, or form? I'm thinking about the digitization of pathology images. Could this be driven by patients, by them requesting that their tissue images are digitized? What are your thoughts on that?

Hamid: Patient advocacy groups could play a role, definitely. But that would be at the level of policy making: allocating resources, influencing the agendas of hospital leadership and granting agencies. At those levels, yes, but not directly for us [01:34:00] as pathologists and researchers.

I don't see directly how it can happen for us. But patient advocacy groups, absolutely. They are stakeholders in the entire picture, and they can push things forward.

Current Projects and Future Directions

Aleks: Anything that you can share about what you're working on right now? You mentioned the image search and the retrieval.

Anything that you can reveal?

Hamid: Yeah, so again, I gave up on the idea of a foundation model alone. So at the moment we said, okay, we will specialize.

In that big picture that we set, we have conversational AI, but you also have information retrieval, and you have multimodality. So there are three big components, and we are doing it just for breast cancer.

Aleks: Okay. 

Hamid: It was a very painful decision to reduce your ambition and say, okay, we will not do it for all of histopathology; we will do it for the histopathology of one case, let's say breast cancer. [01:35:00] And then we had to go deep into it. We had to figure out the triaging challenges, the diagnostic challenges, the treatment-planning

challenges, and every case is different, and then you are putting it together. So I'm still working on what I call the Mayo Atlas in my own environment, which is that collection of technologies: multimodal, conversational AI connected to information retrieval. The word "atlas" is loaded;

it has been in use for more than a century, to my knowledge, but it doesn't matter what we call it. Is it an intelligent next-generation information system? Whatever it is, people always wanna throw in the word AI to make it sound more important. But information retrieval is a very powerful technology that we have not exploited for medicine yet.

At no hospital can we search for RNA sequencing, search for [01:36:00] radiology images, search for histopathology. We can only search by typing some text, which is very limited; it cannot give us much. And that's why, when you see the RAG concepts being used with large language models, they connect, for example, to Wikipedia, they use it as a source, and then if you say, who said that?

they give you a link. That won't work for us in medicine. We need high-quality clinical data, verified, free from variability, that we can use as a base, as a solid foundation, to fact-check any prediction, not just from AI but from anybody else. Any trajectory, any probabilistic statement that comes from AI or any other technology has to be fact-checked against evidence.

And when we say evidence-based medicine, which is, [01:37:00] again, connected to and sometimes exactly the same thing as individualized medicine: none of that works without relying on historic data. And historic data at the moment is under digital dust. You are not using it because it's not accessible; after you upload your whole slide images, you are not using them anymore.

It's just there and you just pay for the storage.
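The fact-checking-against-evidence idea reduces, at its core, to similarity search over an indexed archive: embed the statement or image, retrieve the nearest verified cases, and cite them as evidence. A toy illustration, with invented three-number feature vectors standing in for real learned embeddings (case names and values are entirely hypothetical):

```python
# Toy sketch of evidence retrieval: rank archived cases by cosine
# similarity to a query embedding and return the top hits as supporting
# evidence. A real system would embed reports, images, or sequencing
# data with a trained model; these vectors are made up.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical indexed archive: case description -> embedding.
archive = {
    "case_001: invasive ductal carcinoma, ER+": [0.9, 0.1, 0.2],
    "case_002: benign fibroadenoma":            [0.1, 0.9, 0.1],
    "case_003: invasive lobular carcinoma":     [0.8, 0.2, 0.3],
}

def retrieve_evidence(query_vec, k=2):
    """Return the k archived cases most similar to the query embedding."""
    ranked = sorted(archive, key=lambda c: cosine(query_vec, archive[c]),
                    reverse=True)
    return ranked[:k]

# A claim whose embedding sits near the invasive-carcinoma cases:
print(retrieve_evidence([0.88, 0.12, 0.22]))
```

Indexing an archive this way, rather than leaving it as static files under "digital dust", is what makes the fact-checking step possible at all: every probabilistic statement can then be traced back to concrete retrieved cases.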

Aleks: I'm keeping my fingers crossed for Mayo to be the pioneer of this way of structuring data in an institution, so that it can be modeled and we can actually capitalize on it. We have to be patient.

Hamid: We have to be patient. I understand that young colleagues may not be able to exercise patience, but if you wanna go long, you have to go slow.

And that's very painful, to go slow. I'm learning that in the past three years. I didn't think there were still things I could learn in terms of being patient, but it is a whole different level of patience if you wanna make sure that [01:38:00] it is reliable, the quality is there, and it's beneficial to the patient.

Conclusion and Final Thoughts

Aleks: Thank you so much and I hope to see you again online, offline at the conference, and we will definitely talk again here on the podcast.

Hamid: Thank you for the opportunity. Thank you.

Aleks: Thank you so much for listening. It means you are a true digital pathology trailblazer. There are definitely things that I'm gonna be quoting from this episode, for example, real research that ends up in the product.

And I also learned a lot about AI development and AI concepts from this episode. But if you're just starting your AI journey and learning how AI is applied in pathology, there is a video I recorded that explains this at a little higher level. It's a good video to watch in addition to this one. So go ahead, check out the AI in Pathology video, and I'll talk to you in the next episode.