
Digital Pathology Podcast
152: AI in Pathology, ML-Ops, and the Future of Diagnostics – 7-Part Livestream 7/7
AI in Pathology: ML-Ops and the Future of Diagnostics
What if the most advanced AI models we’re building today are doomed to die in the machine learning graveyard? 🤯 That’s the haunting question I tackled in the final episode of our 7-part series exploring the Modern Pathology AI publications.
In this session, I explored machine learning operations (ML-Ops), what they mean for digital pathology, and why even the most brilliant algorithm can fail without proper deployment strategies, data infrastructure, and lifecycle management.
But we don’t stop there. I take you on a future-forward tour through multi-agent frameworks, edge computing, AI deployment strategies, and even virtual/augmented reality for medical education. This isn’t sci-fi. This is happening now, and as pathology professionals, we need to be prepared.
🔗 Full episode reference:
Modern Pathology - Article 7: AI in Pathology ML-Ops and the Future of Diagnostics
Read the paper
🔍 Episode Highlights & Timestamps
[00:00] – Tech check, community shout-outs, and livestream reflections
[02:00] – Overview of ML-Ops: What it is and why pathologists should care
[03:45] – What’s a Machine Learning Graveyard? Personal examples of models I’ve built that went nowhere
[05:30] – Machine learning platforms: from QuPath to commercial image analysis tools
[06:45] – The lifecycle of ML models: Development, deployment, and monitoring
[09:00] – Mayo Clinic and Techcyte partnership: Real-world deployment integration
[12:30] – Frameworks & DevOps tools: Docker, Git, version control, metadata mapping
[14:30] – Model cards in pathology: Structuring ML model metadata
[16:30] – Deployment strategies: On-premise, cloud, and edge computing
[20:00] – Pramana and QA via edge computing: Doing quality assurance during scanning
[23:00] – Measuring ROI: From patient outcomes to institutional investment
[25:00] – Multi-agent frameworks: AI agents collaborating in real-time
[28:00] – Narrow AI vs. General AI and orchestrating narrow tools
[30:00] – Real-world applications: Diagnosis generation via AI collaboration
[32:00] – Virtual & Augmented Reality in pathology training: From smearing to surgical simulation
[35:00] – AI in drug discovery and virtual patient interviews
[38:00] – Scholarly research with LLMs: Structuring research ideas from unstructured data
[41:00] – Regulatory considerations: Recap of episode 5 for frameworks and guidelines
[42:00] – Recap and future updates: Book announcements, giveaways, and next steps
Resource from this episode
- 🔗 Modern Pathology Article #7: AI in Pathology ML-Ops and the Future of Diagnostics
- 🛠️ Tools/References mentioned:
- QuPath (Free Image Analysis Tool)
- Techcyte & Aiforia for model development and deployment
- Pramana for edge computing and real-time QA
- Model Cards (Pathology-specific metadata structure)
- Apple Vision Pro, Meta Oculus, HoloLens for VR/AR learning
- Dr. Hamid Ouiti Podcast on software failure in medicine
- Dr. Candice C
Become a Digital Pathology Trailblazer: get the "Digital Pathology 101" FREE E-book and join us!
AI in Pathology ML-Ops and the Future of Diagnostics
[00:00:00]
Introduction and Greetings
Aleks: Good morning, welcome, my Digital Pathology Trailblazers, joining me at 6:00 AM in Pennsylvania for the last livestream of our seven-part series from Modern Pathology. And I see you, I have missed you, because I've missed a lot of livestreams. So let me say hi to you in the chat, and you just respond hi back and let me know where you're tuning in from and what time it is there.
It's 6:00 AM in Pennsylvania, so our normal time. But I will be going to Poland soon, so for me the time is gonna change. Yeah, let's dive into it. If you're here, just let me know that you're here. And this is our last session, not our last session ever, just the last session of this Modern Pathology series. Is everything working here?
Let me do a tech check here. I [00:01:00] always forget how to click everything in the software when I don't do it for some time. Let me know where you're dialing in from. Is it drawing? Yes.
Series Overview and Today's Focus
Aleks: So for those of you who may be here for the first time, although I doubt there's anybody here for the first time, 'cause our topic is very specific.
We are going through the seven-part series in Modern Pathology. Now let me show you, they have a super cool graphic, and we are here, at the end. This is our last paper that we're gonna be discussing, and it's called The Future of AI/ML in Medicine. We're actually gonna be discussing two aspects today.
One is machine learning and AI operations, and the other one is the future. So all the stuff that we can imagine can happen with AI, virtual reality included, and different things they discussed in this paper. And that's [00:02:00] what we're going to go through. I don't know if the comments are coming through.
Sometimes I will see them on LinkedIn later, but keep commenting, keep telling me where you're tuning in from, and let's dive into our topic.
Machine Learning Operations
Aleks: So, machine learning operations, and I see more people joining. Fantastic. You can comment, you can chat with me. Let me know if you see my chat, and I can bring your comments to the screen.
So if there are any questions, we can answer them live. I'll show them and we can discuss. Also, sometimes the best episodes, the best livestreams, were when people started discussing in the chat. That was cool. So, machine learning operations, and I had here this little asterisk with the words
machine learning graveyard. So obviously machine learning is something cool, and I'm talking about image analysis right now. [00:03:00] Everybody can now get a commercial tool and start developing image analysis models, and then what are you gonna do with those models? How are you approaching it?
There is a place to play with it, but if you actually wanna use it, there is a method to this madness that is gonna help us prevent this machine learning graveyard. There are several different algorithms that I was involved in developing that never
went anywhere. I put a lot of time into annotating for them and nothing happened to them, which is part of life, right? So not everything gets deployed. And I just wanna say hi. Hi Brandon from London. Fantastic. So it's like 11, 12, what time is it in London?
Machine Learning Platforms
Aleks: Machine learning platforms.
A machine learning platform is something like a framework, [00:04:00] a software, where you can utilize a computational pipeline and develop pretty sophisticated algorithms. And in those platforms, you can standardize and automate the stages of the machine learning model lifecycle.
I like this lifecycle term. So you can develop your model, everything from development through validation, deployment, and performance monitoring, which are the components that have to happen (it's 11 o'clock in London, okay), that have to happen if you wanna have this operationalized.
From this operational perspective, machine learning platforms are these centralized ecosystems in which data scientists, engineers, and analysts, among other stakeholders (developers, physicians, [00:05:00] pathologists, radiologists, whoever is taking part in machine learning operations), can interact, collaborate, and harness data-driven insights effectively.
And these platforms typically integrate a suite of tools and services designed to streamline the entire machine learning lifecycle. If you are serious about machine learning operations, then you'll think of the machine learning lifecycle and how that works. And it works on a macro scale,
if you have a real operation, but it also works on a micro scale if you are developing something in commercially available software. And there is a question: can you send us the URL? Machine learning platforms, are they free? There are several ones that are free. For example, QuPath.
I'm gonna put it in the chat. It's an image analysis software [00:06:00] and an example of a platform. I don't know how many there are. There was one that was free, but I don't know if it's still available, and I will think what the name was.
Developing and Deploying Models
Aleks: The DevOps, the operations: you design the model, then you develop the model, then you put the model into operation. And it doesn't just go in one direction from here
to here. You go back from operations to developing, and you go back from operations to designing pretty often. It's a very iterative process. So when you design it, both on a micro and macro scale, you need (let me use a different highlighter) certain criteria and requirements to prioritize ML applications.
Think of a commercial offering. Harvest your data. Data is tough, and [00:07:00] good data is even tougher than data, but we have it. It's just the harnessing and harvesting that is difficult, to have something structured. Then you develop a machine learning model. I love their images, they're so cute. He's developing an AI robot.
You prepare and process data. So you harvest it here, then you have to prepare it. If it's not prepared, it's not gonna be useful. Then you develop your different features, train your model, experiment, which is an important part; there is a place for experimentation. And then you analyze and reflect on your performance.
And to check if it's good enough, there was a separate paper on performance metrics, so you can go back to that one. By the way, is anybody here who was here for all the livestreams? [00:08:00] Not only live, but is anybody here who has seen all the other livestreams?
You are amazing. If you're here, let me know in the chat, and I have to think of some special prize for the person who has seen all seven livestreams. And if you haven't, you can go back and do it. So, putting a machine learning model into operation: once it's good enough, you do the initial deployment, and then you have continuous integration and continuous deployment workflows.
So you have something that you have to integrate into everything that you developed previously in your lab. And I have seen some cool developments at USCAP in Boston, the conference of the United States and Canadian Academy of Pathology.
I have seen companies collaborating. Specifically, I was doing a podcast with Techcyte. Techcyte [00:09:00] has a comprehensive machine learning platform for diagnostic purposes, for pathologists, for clinical pathologists, for anatomic pathologists. And I don't see anybody who's been here for the seventh time, but I have Victor, who is here for the first time.
Thank you so much. Welcome. Feel free to ask questions. So Techcyte has this platform where a pathologist goes in and does their job, but there is another platform, Aiforia, for image analysis development, where you can develop image analysis models, and now they have an integration. You can develop something in Aiforia, do this
DevOps cycle, and then deploy it in Techcyte in a non-research environment for pathologists to use in that institution. And the institution was Mayo Clinic. I thought that was super cool, that you can now use this tool for developing and then deployment as well. And they have all the integration.
So once it's integrated, then you have to do [00:10:00] surveillance and monitoring. So assess data drift and triggers. How does it get deployed automatically? How does it get used? Because if it doesn't have triggers to get deployed, the users are not gonna get access to whatever this model is doing, right?
I'm not gonna stop my pathology workflow and go click and search for something. It has to be shown to me. It has to be already deployed.
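The surveillance step, assessing data drift, can be sketched in a few lines. This is a toy illustration only: the "stain intensity" numbers, the 3-standard-deviation threshold, and the revalidation policy are all invented for the example, not taken from the paper or from any real deployment.

```python
import statistics

def drift_score(reference, live):
    """Shift of the live mean, measured in reference standard deviations."""
    mu = statistics.mean(reference)
    sigma = statistics.stdev(reference)
    return abs(statistics.mean(live) - mu) / sigma

# Hypothetical per-slide feature logged at validation time vs. this week
reference = [0.30, 0.32, 0.31, 0.29, 0.33, 0.30]  # e.g. mean stain intensity
live = [0.55, 0.58, 0.52, 0.57, 0.54, 0.56]

DRIFT_THRESHOLD = 3.0  # assumed policy: alert beyond 3 reference stdevs
if drift_score(reference, live) > DRIFT_THRESHOLD:
    print("Drift detected: flag the model for revalidation before further use")
```

In a real platform, checks like this would run continuously on incoming slides and trigger an alert or a retraining pipeline instead of a print statement.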
And there are important components of these platforms: different frameworks, software frameworks. We are very much in the software development world here, in the computational science slash software development world. And there are principles of software development, and they apply here as well.
By the way, there is a cool podcast that I just published [00:11:00] recently with Dr. Hamid Ouiti from Mayo Clinic, and he has a super cool perspective on AI. He says it's not quite ready, even though I'm here speaking about AI's potential in pathology for the seventh week (it wasn't in a row, but for the seventh week).
He says everything that's out there requires some more work. So you can check that on YouTube or on the Digital Pathology Podcast. Why do I even mention him? Because he talks about how software fails, how software should fail, how software should fail in the medical space.
And he goes a little bit deeper into that: the principles that should guide AI development for pathology and for medicine. Really cool, a no-nonsense approach. Say it as it is, say where we are in terms of advancement. I like these types [00:12:00] of conversations that are not sugarcoated.
So go ahead and check it out. And in the meantime, let's talk about these platforms. We will use various tools, libraries, and capabilities to support the entire lifecycle. And these environments will support experimentation with different models, model architectures, and settings to optimize model performance, and validation of models
using a variety of performance metrics. Let's check which number that was: statistics in medicine, number four. Livestream number four was on performance metrics. And then there are also guidances that you can apply. But this is what you will need to know and use when you're developing these platforms. When you're using [00:13:00] commercially available platforms, you'll have less development, but you will still need to validate and assess the models and things like that.
And it would be fantastic if things were standards-based. At some point in this development cycle, it has to be standards-based.
Standardization and Best Practices
Aleks: And what they say in this paper is that, broadly speaking, machine learning operations standardization in any enterprise involves adopting best practices, model development frameworks,
and tools that are shared and well understood within the enterprise, facilitating collaboration, reproducibility, and scalability of workflows, and reducing errors. So it has to be state-of-the-art stuff if you actually wanna deploy it in an institution. What do I mean by state of the art? You cannot have gaps
in the methods, in the frameworks; this [00:14:00] is already a commercial product. In the research space, different research areas are often siloed, and you develop a very detailed, very deep expertise in something. But the things surrounding your expertise, you don't have so much expertise in, and you might be using outdated methods, right?
That cannot be the case when you wanna use a machine learning platform for medicine. And that's why this is a multidisciplinary, multi-expert endeavor. So, the things that you're gonna hear being mentioned (if you're already in this space, then you know about this, but I'm a pathologist):
for example, version control systems like Git, containerization like Docker, orchestration tools, and data and metadata mapping. We're gonna talk a little bit more about this. A version control system is something that controls versions, right? You had version one that did [00:15:00] one thing, and then you improved something.
It has to be version two, next version, next version. People who develop software know that; people who work in a regulated environment with software know that version control, and also audit trails, tracking all the accesses and times of access to the software, are features that need to be included in these platforms.
So the data and metadata mapping takes us to one of our favorite subjects: DICOM, Digital Imaging and Communications in Medicine. It's a standard that is like a data map for medical images, and whole slide images can be mapped like that as well. So what does that mean? We're gonna
need to map them with the laboratory information system. For example, if it's a prostate: tissue, prostate; specimen, biopsy; stain, H&E; file type, [00:16:00] DICOM. So you have to map this metadata. This has to do with the triggering of those systems, because if a certain piece of information is available in the metadata, then a certain model can be triggered.
And if it's missing, then it's not gonna be triggered on the sample. So for example, if we want to automatically analyze all the prostate biopsies with a certain algorithm, there is gonna be a trigger. This mapping doesn't have to be the trigger, but it is a very easy one. You could, for example, deploy image analysis on all the samples and do tissue recognition, and when you recognize prostate, that's your trigger. That could be a trigger, but it's a very computationally heavy one, because you have to perform one computation before you even get the trigger for another computation, which is the image analysis of the sample.
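As a toy sketch of this metadata-based triggering (the tag names, values, and model identifiers below are invented for illustration, not from any real LIS or DICOM profile):

```python
# Map (tissue, specimen, stain) metadata to the analysis model it should trigger.
TRIGGERS = {
    ("PROSTATE", "BIOPSY", "H&E"): "prostate_biopsy_model",
    ("COLON", "RESECTION", "H&E"): "colon_resection_model",
}

def model_to_trigger(metadata):
    """Return the model for this slide's metadata, or None if no rule matches."""
    key = (metadata.get("tissue"), metadata.get("specimen"), metadata.get("stain"))
    return TRIGGERS.get(key)

slide = {"tissue": "PROSTATE", "specimen": "BIOPSY",
         "stain": "H&E", "file_type": "DICOM"}
print(model_to_trigger(slide))   # a prostate biopsy triggers the prostate model
```

Note that no trained model runs here: the routing decision costs one dictionary lookup, which is exactly why triggering on metadata is cheaper than triggering on tissue recognition.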
Here you have metadata, so it's easier, [00:17:00] less computationally expensive. And there is something for machine learning models that is also mapping: it's called model cards. These are standard structured documents that can be used to communicate key information about a machine learning model.
So let me show you a table of this. This is a table of a model card in pathology. Does it say which table number it is? No, not in the version I have, and I have some pre-print versions; it doesn't matter. Anyway, on the left of this table we have the component of the card, and then we have a description. A component is basically gonna be a piece of information,
like model name, model description, pathology laboratory domain, disease condition, biological specimen, testing methods, and so on. And we have here 1, 2, 3, 4, 5, 6, 7, 8, 9, 10... probably 25 different [00:18:00] components. So for example, the model name is gonna be the unique identifier assigned to the model. The description is gonna be a brief summary of what this model is doing and its functionality. The pathology laboratory domain is the specific area of laboratory medicine the model addresses, for example histopathology, clinical chemistry, all these things.
This is a machine learning model card. So it's like a map with different metadata about the model that can then map with other stuff and trigger deployment of a model.
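A model card like that can be represented as structured, machine-readable data. The sketch below picks a handful of the roughly 25 components and fills them with invented values, just to show the idea; the field names are abbreviations of the table's components, not an official schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class ModelCard:
    model_name: str          # unique identifier assigned to the model
    description: str         # brief summary of what the model does
    laboratory_domain: str   # e.g. histopathology, clinical chemistry
    disease_condition: str
    biological_specimen: str
    testing_method: str

card = ModelCard(
    model_name="prostate-biopsy-grading-v2",
    description="Detects and grades tumor regions on prostate core biopsies",
    laboratory_domain="histopathology",
    disease_condition="prostate adenocarcinoma",
    biological_specimen="prostate needle biopsy",
    testing_method="H&E whole slide image analysis",
)

# As a plain dict, the card can be matched against slide metadata to decide
# which samples the model should be triggered on.
print(asdict(card)["laboratory_domain"])
```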
Deployment Strategies
Aleks: And what about our deployment strategy? Yes, you'll need a strategy for deployment, and that's probably even more difficult than developing the whole model.
So the deployment strategy is gonna be operationalizing a machine learning model, and this presents a pivotal [00:19:00] decision around certain deployment strategies. What are we gonna be talking about? On-premise, cloud, and edge computing. On-premise deployment, we know what that is.
It's on-prem, on, like, your server computer, and the advantage of this is direct control over data security and compliance. When I hear discussions about it, this is the main advantage. You have control of it. Regulators come, you can show where it is. You can put up your firewalls, all that stuff, right?
Whereas with cloud deployment, the advantages are scalability and elasticity. You are scalable. It's in the cloud; you can use more or less computational power on demand. You can make it a lot more flexible than your stuff on premise. What is edge computing?
Edge computing is like [00:20:00] computation on the machines. And this has started to emerge as an efficient process to compute in tandem with clinical data generation. So let's take our whole slide image. The whole slide image is being generated on the scanner, and you compute stuff on the whole slide image. So for example, you can already detect nuclei if you need to count something, or you can do quality control.
And one company that does that is Pramana, and they don't call it quality control, they call it quality assurance, because if you do it while you're scanning, and do something before anybody sees the scan, you assure quality.
When you do quality control, they say, then you've already generated something that was not good quality, so you add work. They call it quality assurance, and they say, okay, you can do it while generating your data. And the advantage is that it can minimize latency, enable localized processing of [00:21:00] data, and it can
allow you maybe a little bit more flexibility for checking things and deploying different algorithms, because you are doing it in the same place where data is generated. Then you have a data package that can securely go further in your pipeline, instead of having generated some data and then using an algorithm that may not be secure.
You do it before, and then you take care of the security while you're transferring your data with everything. I hope it makes sense to you. The problem is that it's new. And why is it a problem that it's new? Not everybody has it.
Challenges and Integration of Edge Computing
Aleks: Only a few providers have it. Only a few people know how to deal with it.
Once we figure out how to deal with it, once we figure out how to integrate and leverage it, that's gonna be one of the deployment strategies. It [00:22:00] already is, but it's more on the edge side, edge computing. What did I wanna say?
Historical Anecdote: New Technology and Adjustments
Aleks: So, before the nineties, Poland was a communist country and not that
high-resource a country. Now it's like a normal-resource country. And I remember this show where somebody bought a new washing machine and it broke, and the technician came and said, of course it broke, it's new. What are you expecting? There were always adjustments to new stuff.
It's new, so it broke. I don't know if it's as funny for you as it is for me, but maybe some people can relate. Anyway: it's new, so it breaks. You have to figure out how to troubleshoot it, how to integrate it, and then you can leverage the whole benefits that come with this tool.
In this case, edge computing.
Measuring ROI in Digital Pathology
Aleks: And then we have to figure out, ahead of time, [00:23:00] how we will measure the return on investment, because it's a significant investment. Mostly organizational resources, time, effort, and obviously the tools that you have to pay for. But it's basically an operation to have that stuff deployed in an organization, and then you can reap the benefits.
But you have to measure your ROI, return on investment, and you can measure it by assessing improvements in patient-care-related outcomes, operational efficiency, cost savings, and business outcomes derived from deploying machine learning models. A lot can be gained when we enable automation and scalability of machine learning applications.
So we have to think about why we are doing it from the business perspective as well.
The Role of Experts in Digital Pathology
Aleks: As we said at the beginning, digital pathology, machine learning, and AI in [00:24:00] pathology are multidisciplinary, multi-expert endeavors, and the institutional economics experts are part of the endeavor, part of the decision-making team, part of the evaluation team.
Was it successful or not? And I love what Bren says: bleeding edge. Yeah, bleeding edge, exactly. It bleeds, but then it stops bleeding and it works and is stronger. So that was about the operations. And the main message is: put some thought into it and get the right experts on board, otherwise it's gonna fail.
And now let's talk about the future. If you have any questions, let me know in the chat. I don't know if I see all the comments; I definitely see some from YouTube and some from LinkedIn. But let me know where you're tuning in from if you just joined, and what time it is.
Introduction to Multi-Agent Frameworks
Aleks: And now we are gonna talk about some science [00:25:00] fiction. Or no, about reality, future reality.
The future is now. So let's start. Multi-agent frameworks. Are they multimodal? Yeah, let's talk about multi-agent frameworks. That's a relatively new concept. I learned about it last year, actually. I listened to a presentation by Model ai, and they have models and they have agents, and an agent is actually a model.
And then you have multiple of them passing information to each other. So in our publication here, they say that multi-agent frameworks represent a sophisticated approach in which multiple AI models interact directly with one another to achieve complex tasks and optimize the decision-making process. And there are frameworks like AutoGen and CrewAI.
They're designed [00:26:00] to mimic collaborative behaviors seen in human teams, and each agent contributes specialized knowledge and skills to shared objectives. These frameworks differ fundamentally from traditional, or single-model, approaches by offering diverse perspectives on identical input. And why is that? Why do we need to have so many, if we decide to have them?
Narrow AI vs. General AI
Aleks: Because, and there is this concept explained a little bit further on, of general intelligence: basically, we don't have anything with general intelligence. We have stuff that is narrow, weak AI; it's still very powerful, but just for a specific task.
So artificial general intelligence is designed to, theoretically, and I highlight theoretically here, perform any intellectual task that most human [00:27:00] beings are able to do, and we're not there yet. And I think I started talking about it maybe four, five years ago. And there's this division of different artificial intelligence.
One is narrow, which is all the algorithms that we have. Then general was one of them, and there was something else, like different stages. And the general is still not there.
Applications of Multi-Agent Frameworks
Aleks: But what we have are agents, different agents. So we basically now can orchestrate that deployment. Sorry, let me make myself a little bigger.
We can orchestrate the deployment of various narrow AIs and not have to manually pass data, pass outputs from one model to another, because then you'd have to also hire a full-time person to do that. We don't wanna do that anymore. We want these AIs to communicate with each other, have dependencies, have graceful [00:28:00] failing mechanisms
in place. And these are the multi-agent frameworks. And it's important to distinguish them from multimodal models. So multi-agent frameworks are these different AIs talking to each other, whereas multimodal models integrate diverse data types like imaging, text, and genomic data.
Historically, you had unimodal models, because we didn't have the capabilities. You would do one model for text, one model for images, one model for something else. Now you can have a model that deals with all the different types of data, and once you have that model, you can put it into an agent framework.
You can make an agent out of it, and then it's gonna communicate with other agents. So for example, there's gonna be an agent that retrieves images for this model, another one retrieves text, another one genomic data. Then they bring it to this one, the multimodal model does something, and then another one writes a report.
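That hand-off between agents can be sketched as a toy pipeline. The agents below are trivial stand-ins; in a real framework such as AutoGen or CrewAI, each one would wrap an LLM or a specialized model, and the case structure and agent names here are invented for illustration.

```python
def image_agent(case):
    case["image_findings"] = "whole slide image retrieved and summarized"
    return case

def text_agent(case):
    case["history"] = "clinical notes retrieved"
    return case

def report_agent(case):
    case["report"] = (f"Case {case['case_id']}: "
                      f"{case['image_findings']}; {case['history']}")
    return case

def run_pipeline(case, agents):
    """Pass the shared case through each agent; fail gracefully, don't crash."""
    for agent in agents:
        try:
            case = agent(case)
        except Exception as exc:
            # graceful failing mechanism: record the problem, keep the pipeline alive
            case.setdefault("errors", []).append(f"{agent.__name__}: {exc}")
    return case

result = run_pipeline({"case_id": "P-001"}, [image_agent, text_agent, report_agent])
print(result["report"])
```

The key design point is the shared case object passed along the chain: no human manually ferries one model's output into the next model's input.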
So let [00:29:00] me show you an image of how this is envisioned here and how this could happen. So we have the input layer: we have medical history, so here we have different data. We have radiology imaging, patient samples, information about the patient like lifestyle habits, right? This is all the data. And then we have the
agents layer. These agents can work with the data, retrieve data, but they can also help the medical professionals actually doing the work and generating the data, by helping schedule appointments, helping them get the report on time, sending automated emails that something is ready so that they don't have to go and check,
having some kind of notifications, maybe even deploying some machines. Basically, they can help coordinate the work of the team. And the output [00:30:00] from these agents is gonna be what we usually put out, which is gonna be the diagnosis, the treatment plan, the final report. But we basically augmented our whole work with
AI, or AI agents; let me just call them agents, that's what they're called. So that's one thing. And we already talked about general intelligence, that it's still a theory. And despite the promising potential of a future with artificial general intelligence integrated, it is currently not part of our AI arsenal or tools to employ most of our current AI applications.
They are narrow AI, and that's okay, guys. It's fine. We can orchestrate our narrow AIs. We can still [00:31:00] control them, have the human in the loop, and basically make sure that we're providing the best care we can. And agents are actually not that futuristic; they're already happening. There are companies doing this on the side of consumer AI.
I have been looking for some agents to streamline educational content creation. It's not where I would like it to be; there is no out-of-the-box tool that I have seen where you can click a button and it's gonna make you a podcast out of a YouTube video and things like that.
There is still a lot of manual work involved, or very targeted, specific development work. But that has been the case with all the technologies that now have software that's stuff you can click, including image analysis software. So I'm waiting for this to be available, and let's see when it happens.
And when it happens, I will probably test it out and let you know what I thought.
Virtual and Augmented Reality in Medical Training
Aleks: Virtual reality and augmented [00:32:00] reality. This is also not new, but it seems a little bit more futuristic to me. I'm not doing the virtual reality world exploration. I think Facebook and like different platforms, people who do gaming.
If you do gaming, let me know in the chat, then I can talk to you how it works. But basically it looks a little bit like from Mission Impossible, which already is a movie from, I don't know, probably. 10 or 20 years ago. But you have these sets, the visual sets, and then you have some sensors on your on like your head, on your hands that orient you in space virtually.
And this is perfect for training, for learner engagement, and you can basically provide an immersive learning experience without [00:33:00] having to do the real procedure to learn. So, for example, surgeries: you can do virtual surgeries. You can have people train virtually on blood collection, or anything that requires a patient, and only let them do it on the patient when they have passed a certain skill level on this virtual training system. You can set it up so that it monitors all the movements and steps you would need to perform if you were doing it on a person, but you don't have to practice on the person. That's definitely a lot better for the patient, who is then assisted by an already trained professional.
Inexperienced training on people is minimized, which is fantastic. For example, in pathology, something that could be done virtually would be a virtual fine needle aspiration: trainees can work with simulated patients, experience needle localization procedures, and even practice smearing on glass slides [00:34:00] using virtual handheld device tracking.
So this all could be done virtually, which is super cool. At conferences you'll see this especially from image analysis software companies; they might have headsets. I envision that in the future the smart glasses (I don't have mine on, but these Meta glasses with the camera) will have a little display in the next version.
So it's going to be possible to move your screen in front of yourself without actually having a screen. Super cool.
AI in Drug Discovery and Development
Aleks: AI is also something that can accelerate drug discovery, identifying potential drug targets and repurposing existing drugs for new therapeutic uses. I think I saw this already.
That's not virtual reality, that's just AI for the future. I saw this for predicting how proteins will fold, for designing antibodies, [00:35:00] and for different biotech procedures involved in drug development.
AI Assistants and Gamification in Education
Aleks: We can also have something called a virtual assistant, in this case an AI assistant. I know this term from human virtual assistants; you can have virtual assistants from abroad who help you remotely, but now you can also have an AI virtual assistant. This could simulate patient interviews in medical training, offering real-time feedback and responses based on user inputs. And now with LLMs, you can totally have it.
You can conduct an interview with a program that is supposed to have a certain condition, already knows what this condition entails, and is pre-programmed in how it should behave. Maybe you need to ask specific questions; the assistant is trained not to reveal information before you actually ask, so that you can hone your [00:36:00] patient interview skills. And obviously, something that is omnipresent in all the apps is gamification: analyzing learner performance data and dynamically tailoring content based on feedback. This has been out there in learning apps for a long time, and the app I'm using definitely uses it.
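As a minimal sketch of how such a patient-interview simulator could be set up with a chat-style LLM: the "patient" gets its condition and findings in a system prompt, with the instruction to reveal each finding only when asked. This is hypothetical and not from the episode; the function name, condition, and findings are all invented for illustration.

```python
# Hypothetical sketch of a patient-interview simulator prompt. Nothing here is
# from the episode: the function, condition, and findings are invented, and the
# message format follows the common chat-style LLM convention (role/content).

def build_patient_prompt(condition: str, findings: dict) -> list:
    """Build the message list that sets up the role-played 'patient'."""
    facts = "\n".join(f"- {symptom}: {detail}" for symptom, detail in findings.items())
    system = (
        f"You are role-playing a patient whose underlying condition is {condition}.\n"
        "Known findings (reveal each ONLY if the interviewer asks about it):\n"
        f"{facts}\n"
        "Never name the condition yourself; answer in plain, everyday language."
    )
    return [{"role": "system", "content": system}]

# The trainee's questions would then be appended as "user" messages and sent
# to whichever chat model the training platform uses.
messages = build_patient_prompt(
    "iron-deficiency anemia",
    {"fatigue": "tired for three months", "diet": "vegetarian, low iron intake"},
)
```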
That app is Duolingo (whenever I pause, it's because I'm trying to make myself bigger on screen), Duolingo for language learning, where you have all kinds of games: you can invite a friend and then nudge them to actually learn, it's fun. And then it learns what you do well and what you do not do well, and it adjusts the learning to your progress.
It employs something called spaced repetition: it brings up the stuff that you know well at [00:37:00] a less frequent cadence, and the things you still need to work on at a more frequent cadence. So it's perfect for tests and things like that, and there are different apps that employ it already.
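The scheduling described here can be sketched with a classic Leitner-box approach, a simple form of spaced repetition. The intervals below are illustrative, not Duolingo's actual algorithm:

```python
from dataclasses import dataclass

# A classic Leitner-box scheduler: well-known cards come back less often,
# missed cards come back daily. Intervals are illustrative only.

INTERVALS = [1, 2, 4, 8, 16]  # days until the next review, per box

@dataclass
class Card:
    prompt: str
    box: int = 0  # higher box = better known = longer interval

def review(card: Card, correct: bool) -> int:
    """Update the card's box and return the number of days until it is due."""
    if correct:
        card.box = min(card.box + 1, len(INTERVALS) - 1)  # promote, capped at top box
    else:
        card.box = 0  # missed cards drop back to daily review
    return INTERVALS[card.box]

card = Card("你好 = hello")
review(card, True)   # answered correctly: due again in 2 days
review(card, False)  # missed: back to box 0, due tomorrow
```

The same idea scales to any flashcard set: each answer just moves the card between boxes, and the box index determines the review cadence.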
Quizlet, I love it. It's backed by science: how we get engaged, how we want to progress. And I'm a little bit fascinated by how people learn, how adults learn specifically, when you have a whole life to handle in addition to whatever you want to learn, especially if it's something that is not crucial for you.
If you must learn something, you will, if your salary depends on it. But I don't have to learn Chinese. I don't know if I will ever speak it, but I decided to give it a try on Duolingo and I am on my 102-day streak. So yeah, I can say a couple of words. I'm not going to be showing off here because I'm still a little embarrassed. So that's that for education and for drug development. [00:38:00] Oh, and something I love and want to leverage more, to basically publish more: my main avenue of communicating these things is online, YouTube, which is super low-hanging fruit, but also low in terms of input from other people, in terms of peer review, in terms of polishing and publishing it in the scientific world, right?
AI in Scholarly Research and Publications
Aleks: Now you can very much leverage AI for scholarly research. Large language models have emerged as valuable tools for literature search, summarization, and synthesis, and obviously it is a semi-polarizing topic. Because when we talk about education, and students and pupils using AI for everything, the question is: are you still thinking, or are you just copy-pasting into those chatbots? In the research world, the journals that [00:39:00] I read mostly require you to disclose that you used an AI tool, what you used it for, and at which stage of the publication you used it.
And there's a learning curve. I was a reviewer of a publication where the authors were assisted by AI, and it did sound like AI. But because publishing is a peer-reviewed process, they got feedback from the reviewers, me included: hey, the content is good, but work on how you deliver this content.
It cannot sound like a cheap version of ChatGPT; let's make it sound like an expensive version. Anyway, you have these tools for researching your literature, for outlining content, for brainstorming. The thing that is super powerful for me: you can now gather unstructured ideas from people in a meeting, in a brainstorming session.
You have a transcript of this, and you don't have to manually [00:40:00] go through it to structure it. You can have AI structure it, then you go through the structured version and give it to your collaborators for another round of comments, and another round, until you decide you don't take any more comments. In any publication it's a lot of rounds of these things.
So I'm excited about this. And actually a fellow veterinary pathologist, Dr. Candice Chu, perfected a workflow for this. Let me check if I can Google her framework for you, or I will put it in the show notes. She has a cool framework for how she writes publications with AI assistance to be a lot more prolific.
Regulatory Considerations for AI in Medicine
Aleks: So that's super cool, and obviously this [00:41:00] discussion wouldn't be complete without mentioning regulatory considerations. But we're very lucky, because in this series, in publication number, let me check, five, we have the regulatory aspects of AI and machine learning in medicine. If you are thinking about regulatory considerations, frameworks, guidelines, you must go to number five, because when I was reviewing it (whoever was here on that livestream saw this) I had no idea how many guidelines were already out there.
You don't have to reinvent the wheel. Go to the livestream and publication number five, regulatory aspects, and you will know what to take into consideration, which frameworks you have to be aware of, and what your guidelines are, wherever you live, wherever you work. But before we go, let's look at another image.
Virtual Education in Pathology
Aleks: So this is virtual education in pathology. Super cool, I love the images. In virtual reality, the learner is [00:42:00] completely immersed in a self-contained digital environment. So the environment is digital, but the skills are real. This is super cool because the material you are using is virtual, it's magic, I don't know, I'm just super excited about it. But the skills are real: if you then get real material, you have the skills. So you have these goggles, and remote controls are usually needed to control virtual hands due to the total enclosure in this environment.
And there are two approaches: virtual reality and augmented reality. For example, virtual reality would be the Meta Oculus Quest, Valve Index, or HTC Vive, and augmented reality is the Apple Vision Pro, Microsoft HoloLens, or XREAL Air 2. I think I used the Apple Vision Pro once, and I don't know if anything else. But you can do virtual [00:43:00] reality cadaveric dissection and virtual reality histology learning.
This is super cool. And then in augmented reality (you can also have both, but then it's a combination of your real world and the digital), the learner interacts with holograms, whereas in virtual reality you become like an avatar and enter that enclosed world.
So the holograms in augmented reality are superimposed on the real environment. And you have different goggles, and sometimes the possibility to use your hands as controls; that was the Apple Vision Pro. I checked it at a conference at a company's booth, and I'd like to investigate if they have histology learning in virtual reality.
Conclusion and Future Updates
Aleks: So yeah, [00:44:00] this concludes our seven-part series. And it's not actually my series; my series is just the livestream series. The publications are in Modern Pathology, written by a team from Pittsburgh that has a Center of Excellence for AI and Digital Pathology.
If some of you are here for the first time and maybe want to learn more about digital pathology, I would recommend you download my book, Digital Pathology 101.
I also have it in a paper version (let me just stop sharing this); you can buy it on Amazon, or just take the free PDF. It has an AI chapter, and if you already have this book, keep it, because you're going to get an updated version.
Everybody who has the book (I have a list of everyone who's there) will get the updated [00:45:00] version where the AI chapter, let me see what chapter that is, chapter three specifically, is going to be updated, and the whole book too, because I published it in 2023. Hey, in digital pathology time flies faster.
I'm going to be updating it, so if you download it now, it's going to be the old version. That doesn't matter, because everybody who has the old version is going to get the new version once it's out. So thank you so much for joining me, and thank you so much for staying till the end. It means you are a true digital pathology trailblazer, and I'll talk to you in the next episode.