Behind the Scenes Episode 389: The Current State of AI in Healthcare


Welcome to Episode 389, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

While generative AI is getting all the press in recent years, the practical use cases for AI, Machine Learning and Natural Language Processing have grown exponentially – particularly in the Healthcare and Life Sciences verticals.

I’ve also resurrected the YouTube playlist. Now, YouTube has a new podcast feature that uses RSS. Trying it out…

I also recently got asked how to leverage RSS for the podcast. You can do that here:

The following transcript was generated using Descript’s speech to text service and then further edited. As it is AI generated, YMMV.

Tech ONTAP Podcast Episode 389 – Current State of AI in Healthcare
===

Justin Parisi: This week on the Tech ONTAP Podcast, we discuss NetApp’s role in the current state of AI and the health care and life sciences industry.

Podcast Intro/outro: [Intro]

Justin Parisi: Hello and welcome to the Tech ONTAP Podcast. My name is Justin Parisi. I’m here in the basement of my house and with me today I have some special guests to talk to us all about AI as well as what it’s doing in the healthcare and life sciences industries. So to do that we have Brian O’Mahony.

So Brian. What do you do here at NetApp, and how do I reach you?

Brian O’Mahony: Hi, I’m Brian O’Mahony. You can reach me at omahony@netapp.com. That’s O-M-A-H-O-N-Y at netapp.com. I work in the Industries Group, and I cover AI for all industries. That includes healthcare and life sciences, the financial industry, the public sector, and media and entertainment.

But today we’re going to talk about healthcare.

Justin Parisi: All right. Excellent. Also with us today, we have Tony, and I’m going to try it: Chidiak.

Tony Chidiak: You got it. First try. No correction.

Justin Parisi: All right. Excellent. All right. So Tony, what do you do here at NetApp and how do we reach you?

Tony Chidiak: Yes, I cover strategic globals for AI and analytics across all industries.

And I help our customers deploy data management platforms to be successful in their AI endeavors. And you can reach me at anthony.chidiak@netapp.com.

Justin Parisi: Like I said, we’re here to talk about AI, as well as what it’s doing in the healthcare and life sciences industry. But first, let’s talk about healthcare and life sciences.

There’s a lot in that industry, and it varies. So let’s cover that. Let’s cover the generalities and the specifics of that industry.

Brian O’Mahony: Yeah. Within industries there are verticals. For healthcare, you have your doctors, who provide care to everybody.

And then you have the payers, the insurance companies. And then you have the life sciences. It’s a very massive industry, not to mention healthcare in the government sector. It’s wide and covers lots of different areas.

Justin Parisi: So as far as the specifics of healthcare and life sciences, let’s talk about where those are applied and what sort of data workflows there are there.

Brian O’Mahony: Yeah, so data comes in lots of different ways. In healthcare, imaging is probably the biggest; I think 80 percent of the data generated in healthcare would be in the imaging sector. And most of the AI work done in that industry would be in radiology and cardiology.

And that’s been a focus. It’s essentially helping the radiologists identify potential issues and treatments. In the EHR space, Affordable Care Act data is stored in databases, and now the ISVs in those spaces, like Epic and others, are starting to pull all that data together and build models that help the healthcare industry in significant ways.

The providers who turned into data entry clerks can now use AI to alleviate some of the pain in that space.

Tony Chidiak: And this is such an important topic, obviously, all related to our health in some way, shape or form, whether it’s drug discovery or imaging or even on the payer side, with the real cost of needing certain healthcare. This affects our daily lives in all kinds of ways. And as Brian alluded to, there’s no shortage of data. So how do they do that? How are they utilizing it? There are a lot of ingest points. And then we’ve seen, of course, a lot of these healthcare and life sciences companies rely on the cloud as well.

But the data doesn’t necessarily start in the cloud. So there are a lot of conversation points, a lot of movement points, that need to be taken into account.

Justin Parisi: Yeah. And with the data conversation, it’s interesting because I feel like that data in these industries and other industries has been one of those things that people just kept around and they’re like, well, we’ll need it later.

Right. And now we’re reaching that point where we need it, where we’re figuring out how to actually take that data and use it in productive ways.

Tony Chidiak: Exactly, and I know some of the challenges we have come up because there wasn’t always a global plan across the company’s footprint for data.

They might have some of it, but it’s only as good as the access we can give the right people, like the data scientists, to make actionable use of this data. So that’s probably one of the biggest challenges. And then you have certain groups where it’s turned into very siloed actions across clouds. So you have 20 to 30 AI projects that could be very impactful, but a lot of the data that they need might be all over the place.

So I think that probably sums it up. Brian, what do you think on that?

Brian O’Mahony: Yeah, we’re at an inflection point. AI has been around for 50-plus years, as you guys know, and it hadn’t made a lot of progress because of a couple of things, right? The lack of data and the lack of ability to process it.

Now we’re at a point where AI models that used to take 15 to 20 years to develop can be built a lot quicker. So we can develop models in a very small amount of time. There’s been some great progress with ChatGPT, obviously; that was an explosion everywhere.

But we have models now that can pass medical exams. Google has published results online where models scored 67 percent, surpassing human capabilities, and more recently models have gone well beyond that on medical exams. So you can go online and get medical answers from AI.

Tony Chidiak: Yeah, and everyone’s watch and phone these days is a walking medical device that’s with you all the time, grabbing and producing more and more data that’s specific to each customer, and that all feeds back into healthcare companies. So how can they eventually use that? How are they utilizing that? What’s their plan? And again, how do you get all the right data to where it needs to be to actually turn it into actionable insight that helps the individual’s everyday life, and their own business of course, because there’s always an ROI to all of it.

Brian O’Mahony: And if you look at the analysts, they’re telling organizations that if they don’t jump on the bandwagon for AI and generative AI, they’re going to be left behind and potentially go out of business. So it’s top of mind for everybody.

Everybody wants to talk about it. But we can get into later why it’s challenging for folks to kickstart their AI projects.

Tony Chidiak: Yep. And the other piece to add, you know, with all this, is the PII side of the data. The security of the data obviously poses another challenge that companies have to navigate because of the private side of all this. How do they anonymize it? What’s the right level of that, again, to protect clients but also help clients at the same time?

Justin Parisi: So you mentioned that AI has been around for about 50 years, and I think that part of the reason why it didn’t really take off till recently was figuring out how to use it. Like what do we want to use it for? But I think the bigger reason is we just didn’t have the technology to do it in a cost effective way.

So now with better performance, higher density chips and the cloud, we’re seeing that capability available to more people, along with the ability to add compute where you need it to do those data crunching tasks. So what is your thought on the progression of AI and why it’s made such an explosion recently?

Brian O’Mahony: Well, GPUs are one. The availability of GPUs on prem and in the cloud, that’s a big factor. But your AI is only as good as your data. The ability to bring your data to AI was probably the biggest challenge, and at NetApp that’s what we focus on. Look at the AI pipeline and the data management along the way, from ingest, pulling the data in, all the way through that pipeline to building your models. That usually starts as a proof of concept in the cloud or on prem.

Unfortunately, a lot of folks get into this siloed approach and as you go further along, that becomes very challenging because now you’ve got all this shadow AI going on. Now organizations are looking for more enterprise class capabilities where they can do AI on prem, in the cloud, be able to access the data more efficiently.

And be able to bring the data to where the processing is.

Tony Chidiak: The technology advances and obviously with NVIDIA, and the software framework to go along with it, but just the sheer processing power that we now have at the fingertips, along with, again, this explosion of data, given all the devices, it’s kind of the perfect storm to further the possibilities and move the art of the possible for us consumers.

Justin Parisi: So, AI does a good job of taking a lot of data, analyzing it, and giving you some sort of analysis of that data. But what we’re finding, and I’ve seen this in ChatGPT and other use cases, is that the analysis it spits out isn’t always correct, right? Sometimes it sounds very convincing, and then you start thinking about it: no, that’s not right. So there is some overhead here when we’re dealing with AI, of analyzing the analysis to make sure it’s correct. So what are the challenges there? Do we have people having to do that, or are we designing software to check the software?

Brian O’Mahony: There’s a number of challenges there. The data you build your model with can have bias and fairness problems built into it. You’ve got to be careful of that. AI suffers from hallucinations, where it thinks it’s telling you the right information but you’re not getting it. Building a model and putting it into production is one thing, but there are challenges once it’s in production.

If you have to put the right checks and balances in place to make sure that it’s actually giving you the right information, and you always need that human touch in the cycle to make sure you’re getting the right information. And there are tools like RAG, which we’ve introduced recently. RAG is retrieval augmented generation, where you can take a large language model and augment it with your own internal data to give richer, more proprietary responses and answers.
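The RAG flow Brian outlines can be sketched in a few lines of Python. This is a toy illustration, not NetApp’s or any vendor’s implementation: the document store, the bag-of-words similarity, and the prompt format are all stand-ins, where a production system would use embedding models, a vector database, and an actual LLM call.

```python
from collections import Counter
import math

# Toy internal "knowledge base" -- in a real RAG system these would be
# chunks of proprietary documents indexed by vector embeddings.
DOCS = [
    "Radiology model v2 was validated on chest X-rays in 2023.",
    "The cardiology triage model flags arrhythmias from ECG waveforms.",
    "All patient data must be de-identified before model training.",
]

def bow(text: str) -> Counter:
    """Bag-of-words term counts, a naive stand-in for an embedding."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank internal documents against the query and return the top k."""
    q = bow(query)
    return sorted(DOCS, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Augment the user's question with retrieved internal context."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

# The augmented prompt is what gets sent to the large language model.
print(build_prompt("How was the radiology model validated?"))
```

The "augmentation" is just the prompt assembly at the end: the model answers from the retrieved internal context instead of only its training data.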

Tony Chidiak: Yeah, I think that’s right. We’re at a point, too, where a lot of what comes back from these large language models makes you scratch your head. But I will say we’re at a state where this is only going to improve, just with the sheer amount of data being utilized, the data being cleansed better, the data being input better.

And the more data and the more discoveries they found where there’s issues, that’s opportunity to clean it up. And that’s where, it’s only going to get better and better from this point on.

Brian O’Mahony: And people want to look at where the answers came from, like references to some of the documents, and to see the line of reasoning behind the decision the AI came to, some visibility.

That’s becoming more and more important as well.

Tony Chidiak: One that a lot of people can probably relate to, because it’s so popular out there, is autonomous driving. That takes several hundred thousand hours of data that they just keep feeding in.

And in some cases the Tesla can technically drive by itself. It’s still not there yet, obviously, to go widespread, and they’re still working, still adding, still refining the models on that data to make it safer and safer for all drivers. I think things have changed because of this sheer processing power and what we’re able to do with the data; it’s all happening at a much more rapid pace. Ten years ago, futurists would be talking about this, and now it’s half put into practice.

It’s possible but it’s not all the way there yet. But I think we’re going to get all the way there more and more in our daily lives.

Justin Parisi: Another challenge I see here is the notion of data governance, HIPAA and that sort of thing, where you have to deal with anonymization of the data so that the AI isn’t pulling personal information in there with it.

So how are healthcare industry companies handling that sort of challenge?

Brian O’Mahony: That’s a massive challenge, right? The most sought-after ransomware target out there is healthcare data. It’s seen as the most valuable, so you have to protect your assets all the way through. Having a common platform at every stage of AI that has built-in security is critical, not only on prem, but in any of the clouds. As you move your data, you may be modeling in the cloud, then want to pull it back and do modeling on prem. Having a strategic single platform that ensures you’re protected against ransomware, with data protection and resilience along every step of the way, that’s a must-have for healthcare.

Tony Chidiak: Yep, and I will add, there are more and more platforms out there that obviously help scrub the data and anonymize the data, which is certainly helpful.

The other piece, which is a challenge, is bringing these foundational models to the data. Basically, instead of having to push the data there, they bring the model to the data, which certainly cleans up that process, and it’s something that NetApp does and can help with.
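The anonymization step Tony mentions can be illustrated with a minimal pseudonymization sketch. This assumes a keyed-hash approach; the field names, the PEPPER value, and the DROP_FIELDS list are invented for the example, and a real HIPAA de-identification workflow (Safe Harbor or expert determination) covers far more than this.

```python
import hashlib
import hmac

# Secret key held by the data custodian; hypothetical value for the example.
# In practice this lives in a secrets manager, never in source control.
PEPPER = b"rotate-me-outside-source-control"

# Direct identifiers are dropped outright; the record key is replaced with
# a keyed hash so rows can still be joined across datasets without
# exposing the original patient ID.
DROP_FIELDS = {"name", "address", "phone"}

def pseudonym(patient_id: str) -> str:
    """Stable keyed hash: same input gives same token; not reversible without PEPPER."""
    return hmac.new(PEPPER, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and pseudonymize the record key."""
    out = {k: v for k, v in record.items() if k not in DROP_FIELDS}
    out["patient_id"] = pseudonym(record["patient_id"])
    return out

row = {"patient_id": "MRN-1234", "name": "Jane Doe", "phone": "555-0100",
       "diagnosis": "arrhythmia", "age": 54}
print(deidentify(row))
```

Because the hash is deterministic per key, the same patient maps to the same token in every dataset, which preserves joinability for the data scientists while keeping the raw identifier out of the training data.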

Justin Parisi: Alright, so some of the challenges we’ve covered so far: the sheer size of the data, right? How big is that data? How many files are there? Your data governance, of course. Ransomware, compute. So what else are we dealing with out there? I know there’s a lot more we have to consider when we’re talking about healthcare and AI.

Brian O’Mahony: Well, the other piece we didn’t talk about is the amount of time the data scientist spends on actual data management. They don’t care a lot about data management; they need access to the capabilities. We have a DataOps Toolkit, which is a set of tools that enables data scientists to do things a lot quicker. Accessing silos can take up to six months, right? Working through the IT department to be able to pull that data. And when they have data and want to collaborate and share with others, data scientists are still copying data. It could be terabytes, even hundreds of terabytes, of data. So being able to do that in minutes instead of weeks and months is very, very valuable. Those types of things that all storage folks take for granted, being able to give them to data scientists in a very simple way, is critical and saves a lot of time.

Tony Chidiak: Yeah, I would also add that NetApp is the only vendor that can standardize and streamline on a storage OS with ONTAP. What I mean by that is we have true hybrid, and all the capabilities and features, which become so vital to your overall AI operation as the data gets more robust and you want to do more. So when you do need to access data at a certain point, whether it be on prem, in a colo, or in the clouds, you can standardize on one storage OS and utilize all the robust features and capabilities we have to move data, to access data, to take snapshots, clones, anything that you’d really want to do with data. NetApp delivers on that.

So it does help streamline the entire process. I just think because of the way this thing has evolved so rapidly, there was all these silos created. Well, how do you take these silos? How do you, whether it be cloud or on prem, how do you standardize that? How do you have a way to access it all from one place?

And that’s where I think NetApp is far beyond the rest of the industry on the data management piece, and that sets apart what can be possible.

Justin Parisi: One of the ways that ONTAP can streamline that is the ability to take multiple data protocols and leverage that against the same platform. So you can have a NAS share and you can have a LUN somewhere, and that ties into a lot of the modern healthcare software today. So Brian, I know you had a lot of experience with that.

Tell me how that looks in an ONTAP system.

Brian O’Mahony: Yeah, so we do file, block, and object for ONTAP, which is fantastic. If you look at healthcare EHR systems, file and block is very important. When we look at the AI space, it’s predominantly file and object. And I talked about collaboration earlier, which is a critical piece.

You may want a hundred copies of the same data and to be able to share that within your extended AI team, you can do that instantly. And then depending on the tools that you’re using, once that data is cloned they can access that over SMB, NFS or object, depending on the tool that they’re using.

That is massive. That’s a game changer for that space, right? It becomes very complicated if you don’t provide the data scientists with the tools to be able to do that in an automated and simple way.

Justin Parisi: And then there’s the ability to move that data to different places with SnapMirror, right? You have your replication, and then you have your FlexCaches where you can actually localize that data without having to copy it all over to another place.

Brian O’Mahony: Now you’re talking. So that’s another magical piece. Imagine you have all this data, and instead of going through the IT department again and moving all of that data, with FlexCache you can make that data available right away. And that could be between on-prem and the cloud, or on-prem and a silo, wherever that’s located. The data, which could be a hundred terabytes, is made available instantaneously to do modeling wherever you want it to be. That’s a huge time saver and cost saver, reducing ingest costs to the cloud and back.

Lots of benefits there.

Justin Parisi: Yeah, you could have a data set that’s multiple petabytes on prem and then you could spin up some caches in the cloud and do your data analysis just on segments of that data. It doesn’t have to be that entire two, three petabytes. It can be maybe a hundred gigs, maybe a hundred terabytes, right?

So you don’t have to pigeonhole yourself into doing a giant data set like that.

Brian O’Mahony: To that point, we have a reference architecture in partnership with Domino Data Lab and NVIDIA where you have ONTAP on-prem, ONTAP in the cloud, a Kubernetes cluster on prem and the same in the cloud, in the cloud of your choice.

And we have demos that show that you might have that 100 terabytes of data on prem literally made available in minutes in the cloud, so you can start processing that data in the cloud.

Tony Chidiak: Yeah, and I’ll add to that the sheer cost savings and efficiency you get from that. With GPU scarcity, it can be hard to get some of that horsepower, but if you have availability on prem or in the cloud, you have options.

There’s flexibility. It’s highly efficient, and then also not having to move the whole data set. You can move what you need to get it done.

Justin Parisi: Now, another thing that got introduced recently in ONTAP that helps the AI use case is the ability to write data to a NAS share and then turn around and expose that to object.

So the file object duality aspect. What are you seeing in the AI healthcare space with that particular feature?

Tony Chidiak: Oh yeah, multiprotocol support has been massive, because there are also a lot of analytics platforms out there that companies are using. Being able to have that definitely helps with all the analytics pieces. Think of names like Databricks and Snowflake, which across the clouds use a lot of object storage. So having that capability both on prem and in the cloud definitely helps move the needle and makes it easier for data scientists.

Justin Parisi: Yeah, and a lot of these applications have been architected on a NAS platform. So if you’re doing all this with NFS or SMB already and you have to move to object, now you don’t have to re-architect your applications. You just keep writing your data like you always did, and then you use file-object duality to expose the S3 interfaces to these Databricks and Snowflake applications.

Tony Chidiak: Yep. No code rewrite as they say.

Justin Parisi: And again, it all does this without having to move anything.

So you’re just basically using the same data set.

Tony Chidiak: Mm-Hmm.

Justin Parisi: We mentioned autonomous ransomware protection as an aspect of what ONTAP can offer. Replication, multi protocol support, the S3 aspect, FlexCache. We didn’t mention FlexGroups, but that’s also kind of baked in with performance. Brian, what have we missed? What ONTAP features have we not covered already?

Brian O’Mahony: Yeah, there’s so much to ONTAP, really. You can go on for a long, long time. Everything is built into ONTAP with its capabilities when it comes to security and all the things that you mentioned. One other thing is, data becomes hot for a period of time where you want to use it and analyze the data.

I know data in healthcare is kept forever, right? But usually after seven days it gets cold, and it doesn’t need to be on all-flash arrays. So the ability to tier that off to less expensive storage, built into ONTAP, is a huge savings in cost, and that flexibility helps. There are so many features, and that’s one of many.

Tony Chidiak: And Brian, the other one that you might want to expand on is the recent announcement at GTC, where half of the world’s files, as noted by Jensen during the keynote, sit on NetApp. And with the integration that we have with their NeMo Retriever, basically our customers in healthcare and life sciences, and all of our customers, have the ability to essentially talk to their data.

I don’t know if you want to expand on that, but I think that’s very significant.

Brian O’Mahony: Yeah. That was a huge announcement. So much data already exists on NetApp. We’ve been around for 30 years.

ONTAP is AI ready today. If you’re already a NetApp customer and your data is there, you’re ready to go. And that was a huge announcement. There are white papers on this, there are demos on that, where it shows how you can take a large language model, ask it a question, and, Justin, you talked about this earlier, get a garbage answer back. Ask it what’s new in the next generation of controllers, and it’s not going to know. So you point to where that data is, it can analyze that data and build an effective database so that the AI can understand what’s in there, and you ask that same question and get a meaningful answer, a correct answer.

When you look at using AI, you can be just a taker, where you take off-the-shelf AI and just use it. You could be a maker, where you go and build your own models. But that’s typically done by some of the larger organizations and research centers.

But most folks are in the middle, where you’re taking models that have been already built. And a lot of healthcare organizations are pooling their data together to build meaningful models and then taking those models and augmenting that with your internal data is the fastest way to get value from AI.

Justin Parisi: So where do you see the current state of AI today in healthcare? Is it at 50 percent completion, 90 percent completion, or are we just kind of at the very nascent stages of that particular use case?

Brian O’Mahony: We’re at the beginning. We’re three steps into a 10-car race, as the AWS CIO put it. Most of the progress in healthcare has been in imaging, but even at that, there are only five or six hundred models built in those spaces for radiology and cardiology. Pathology is the big one, in two ways: the amount of data it’s going to generate, but also the opportunities for AI to do some work there.

But again, we’re at the very beginning. I mentioned earlier about the very beginning of EHR. With the Affordable Care Act in 2013, everything had to be digitized and it failed in a number of different ways. We talked about the data entry piece. A lot of physicians left because of the strain that put on them.

And the data that went into those EHR systems was not really leveraged to build meaningful models. That’s changing. Some of the bigger ISVs out there are now incorporating AI into their applications to solve those problems. Now when you go to the doctor, rather than them being stuck on a laptop typing all day, the AI will listen in on the session, retrieve your medical records, give recommendations, fill in prescriptions, and automate a lot of the data entry, so that your physician can provide better, face-to-face healthcare.

So, making a lot of progress. I think the progress is going to be exponential. We’re going into the most productive decade ever in history. That’s coming. But we’re still getting off the blocks.

Tony Chidiak: Yeah, 100%. I would definitely agree, Brian, that we’re at our early stages. And I just want to even back up a little bit and, why AI and healthcare and life sciences?

And I know we’ve touched on a few use cases, and obviously there’s so many, and I think as time moves forward here, we’ll find, oh, I didn’t even think about that, at least from my perspective. But we’re talking about things like improved diagnostics like Brian hit on with the imaging, being able to be more accurate and quicker.

We think about all the times we go get an MRI or X-ray or certain scans and wait however long for it to go through its process and spit results back out to us. AI is going to be able to short-circuit all that. Then there’s personalization: taking in the history of that patient, what they’ve gone through, and utilizing AI to be able to predict some outcomes.

So maybe you’re getting ahead of the curve, getting on treatment plans that are what you need from the start. Another big one that’s probably really exciting is simply drug discovery and development. I think they say, Brian, and fact-check me on this, that the standard process for drug discovery and development is 10 to 12 years. With AI, we’re talking one to three years, and I think that number only shrinks as the data gets better and the approach gets more proven. So it’s a huge opportunity to get life-saving drugs to market through the use of AI. Another one that’s a headache for everybody, whether it’s a pharmaceutical company or working with insurance claims, is simply workflow optimization. Probably most people have either waited, or have someone who’s had to wait, for a pre-approval process. What goes into that process? A lot of human factors, and having AI short-circuit that process with the right data can drastically help different outcomes and really speed things up.

There’s a whole slew of use cases, but at the end of the day, I think it’s going to be revolutionary in how we interact with our day to day providers, and then what the pharmas are able to produce to really make this a better world and really, AI for good in healthcare.

Brian O’Mahony: And the payers and providers are now starting to work together on some of those workflows to make things more efficient and, at the end of the day, contribute to better patient outcomes.

Tony Chidiak: I think the biggest takeaway is that NetApp captures all of this, no matter where your data is: edge, core, or cloud. We’re unique in that perspective. We’re number one in data management, and we consistently deliver the performance these workloads need.

So it’s a very exciting time for NetApp. It’s fascinating to be part of the AI in healthcare and life sciences space through what it means to society, what it means to humans and improving lives everywhere. It’s neat to be on the data side of that to make all that happen.

Brian O’Mahony: Yeah. I’d close it out with what does NetApp bring to the table? We touch every step of the AI pipeline, from the ingest from the beginning, all the way through to the training, to collaborating with others, moving things to the cloud, all the way from delivering those models into production.

That can be extremely challenging. Only 13 percent of models actually make it into production and become productive, and a large part of that is not having the capabilities we just talked about. We help organizations gain value a lot quicker; the time to value, the time to production, is drastically reduced.

And also being able to monitor models in production, which become stale very quickly and need to be retrained. That value all the way through is really important, and that’s where we shine.

Justin Parisi: So if we wanted to find more information about healthcare and NetApp and ONTAP, where would we do that?

Tony Chidiak: Yeah, obviously online we have full links on our website that talk about different use cases. In addition to that, we have a team dedicated to healthcare and AI, starting with myself, along with a robust technical team.

And, I mentioned my contact information up at the top. That would be the best starting point and we’ll get you aligned with the appropriate resources.

Justin Parisi: We also have NetApp.com/industry/healthcare. That’s where you can find all sorts of information about how we improve the life of healthcare storage administrators out there. Well, Tony, Brian, thanks so much for joining us today and telling us all about AI in healthcare and the current state of things as it stands today.

Tony Chidiak: Yeah. Thanks for having us.

Justin Parisi: All right. That music tells me it’s time to go. If you’d like to get in touch with us, send us an email to podcast@netapp.com or send us a tweet @NetApp. As always, if you’d like to subscribe, find us on iTunes, Spotify, Google Play, iHeartRadio, SoundCloud, Stitcher, or via techontappodcast.com. If you liked the show today, leave us a review. On behalf of the entire Tech ONTAP podcast team, I’d like to thank Brian O’Mahony and Tony Chidiak for joining us today. As always, thanks for listening.

Podcast Intro/outro: [Outro]

 

