Leveraging AI and machine learning to protect and validate relevant patient data
Photo: Reza Estakhrian/Getty Images
An ever-growing amount of healthcare and life sciences data is being generated from a widening variety of sources, such as medical practices, laboratories, pharmaceutical companies, payers, electronic health records and imaging systems, and the proliferation of IoT devices. Failing to properly control all that data is creating what Michael K. Giannopoulos calls “data sludge.”
“We’re quickly approaching the point where the amount of data and the number of ingestion points we have is becoming unmanageable by a human or a series of humans,” said Giannopoulos, Healthcare and Life Sciences Chief Information Security Officer and Chief Technology Officer at Dell Technologies. The result, he said, is “amorphous blobs of data” in need of sorting for relevance.
Artificial intelligence and machine learning technologies are well positioned to help healthcare organizations gain and keep control over all this data. These advanced tools do not just synthesize data sets to glean actionable insights; they can also rapidly parse, prioritize and properly protect patient data.
“We are not replacing human beings in any way, shape or form,” Giannopoulos, who is also Dell’s Federal Healthcare Director, emphasized. Instead, he said, these tools will augment human capacity and improve care delivery if models are developed with input from clinical stakeholders, not just data scientists.
AI can help healthcare IT teams categorize data and apply the most cost-effective security guidelines based on classifications, rather than apply the same protections regardless of the data’s relevance. For instance, electronic medical record and imaging data would fall under Tier 1 and require a more robust cybersecurity infrastructure than Tier 2-level user access data logs.
Giannopoulos stressed that this lower-cost approach still means all data remains secure. “We’re not dropping the level of security around different tiers of data, but we’re rationalizing between data levels and their impact on operations and RTO [recovery time objectives] scenarios, which can be less expensive,” he said.
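The tiering approach described above can be sketched in code. The following is a minimal, hypothetical illustration (not a Dell product or a method described in the article): data categories map to tiers, and each tier carries its own protection policy and recovery time objective. All tier names, categories and policy values are invented for the example; in practice, a trained classifier might assign categories to unlabeled records.

```python
from dataclasses import dataclass

@dataclass
class ProtectionPolicy:
    encryption: str
    backup_frequency_hours: int
    rto_hours: int  # recovery time objective target

# Hypothetical tier definitions: Tier 1 for clinically critical data,
# Tier 2 for lower-impact operational data such as access logs.
TIER_POLICIES = {
    1: ProtectionPolicy("AES-256 at rest and in transit", 1, 4),
    2: ProtectionPolicy("AES-256 at rest", 24, 72),
}

# A simple rule set mapping data categories to tiers. An ML classifier
# could stand in for this lookup when categories are not yet labeled.
CATEGORY_TIERS = {
    "ehr_record": 1,
    "medical_image": 1,
    "access_log": 2,
}

def policy_for(category: str) -> ProtectionPolicy:
    # Unknown categories default to the strictest tier, so nothing
    # slips through with weaker protection than it deserves.
    tier = CATEGORY_TIERS.get(category, 1)
    return TIER_POLICIES[tier]

print(policy_for("medical_image").rto_hours)  # tight RTO for Tier 1
print(policy_for("access_log").rto_hours)     # looser, cheaper RTO for Tier 2
```

Note that every tier is still encrypted; only the recovery targets and backup cadence differ, which mirrors the point that rationalizing between data levels reduces cost without dropping security.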
Currently, generative AI like OpenAI’s ChatGPT is receiving attention for its potential to distill massive amounts of data into digestible bits. Giannopoulos believes that these large language models can eventually help healthcare organizations determine data protection levels and help prevent ransomware, data exfiltration and other cyber threats.
He envisions health systems developing private generative AI systems using internal data to implement and automate the right security guidelines. Current publicly available products like ChatGPT that rely on the broader internet have been known to return inaccurate responses and even, as Giannopoulos puts it, to “hallucinate” to the point of constructing patient studies and results that never took place.
“Public, general-search generative AI will be very different than what’s going to be developed by data scientists and technology professionals for healthcare as part of care delivery,” he noted.
Radiology, with a singular focus on discovering anomalies in images, is now leading the way in AI-assisted scans. Giannopoulos, who has spent his entire career in healthcare, believes digital pathology will be next to embrace AI because it’s also image-based, which makes it easier to build rule sets that flag abnormalities.
He offers one key piece of advice to any health system that has yet to incorporate these new technologies but is interested in the possibilities of leveraging AI and advanced analytics to protect and validate patient data and reduce “data sludge”: “Know your data and remember… Your North Star is always the patient.”