
Healthcare Data and AI in 2024: Cybersecurity Expert Advice

Don Silva Sr., CISO, shares his take on healthcare data and AI in 2024. Examine privacy and patient data security in the age of artificial intelligence.

As someone who’s spent decades in the tech and information security field, I’ve paid close attention to the development and, recently, the rapid adoption of artificial intelligence (AI) tools.

In the healthcare space, both patients and clinicians have tended toward a cautious approach, raising real concerns while acknowledging the potential of new tech. And for good reason. The use of artificial intelligence in medical technology, patient care, clinical documentation, and research data analysis has shown incredible potential. But it also poses new and evolving risks.

Here’s what providers and practices need to know about healthcare data and AI today.

Healthcare Data and AI Technology Today

Let’s loosely define artificial intelligence as technology designed to emulate human information and language processing.

In the past few years, generative AI technology has seen rapid expansion in almost every industry, from art and music to science and, of course, health.

Though gen AI may be the harbinger of the next digital renaissance, the healthcare sector is no stranger to the tech. Many modern practices are already using artificial intelligence and machine learning in several key ways, including:

Business Intelligence Tools

In addition to large quantities of patient data, private practices also store critical data relating to business operations and productivity. But these data sets aren’t always put to use, simply because it takes time to process information and glean valuable insights. Business intelligence software that leverages aggregated data across the operational, clinical, and financial domains of your practice can help leaders make data-informed business decisions.

Predictive Analytics

Predictive algorithms are helpful when it comes to financial analysis (i.e. revenue forecasting), but the practical applications don’t end there. Predictive analytics are also leveraged by organizations to inform clinical decision-making and risk scoring at the individual and population level.

Marketing and Creative Tools

Many aspects of running a successful practice don’t involve accessing patients’ health information at all. For example, AI helps marketing teams write social media captions, outline blog posts, edit video, and much more.

Artificial Intelligence and Health IT: Benefits vs. Concerns

Like any new tool or technology, machine learning and generative AI systems can offer tremendous benefits to the healthcare industry. Still, the “unknowns” and the potential for exploitation raise concerns.

It’s not an exhaustive list by any means, but I’ll share some of the biggest pros and cons to look out for:

Benefits of AI in Healthcare

  • Supporting data-driven decision-making. Often, it’s not a lack of data that’s the problem. It’s a lack of the time and resources needed to put that data to work. One of the greatest advantages of AI is its ability to quickly crunch vast amounts of information and apply that “intelligence” to new and diverse problems.
  • Improving health outcomes. Analyzing large sets of past clinical data can help providers choose the right treatment plans and improve outcomes. Healthcare applications of AI can even support image analysis, patient monitoring, and medical device automation—and the possibilities only continue to grow.
  • Streamlining administrative workflows. Any time you can simplify a repetitive, manual process, it’s a win for productivity. Use AI to automate the simple stuff and catch errors, while you focus on tasks that require a human touch. (My PT and OT readers can take that wisdom literally!)

Concerns About AI in Healthcare

  • Healthcare Data and AI Security. This is the big one. Like any new technology, AI can be used for good—but it can also be used against you. Bad actors can use AI to create more convincing phishing schemes, for example. And introducing brand-new tools to your tech stack can introduce its own security risks and privacy concerns.
  • Replacing Humans. Many worry that AI is coming for their jobs. While fully AI-led treatment remains in the realm of science fiction, healthcare roles may indeed change as AI becomes more sophisticated. My colleague, Allison Jones, recently spoke about this topic with Drew Contreras, APTA Vice President of Clinical Integration and Innovation. Go take a listen!
  • Return on Investment. All new tools and technologies come with a price. But they don’t always live up to their promise. Any time you’re deciding whether to invest in new technology, balance costs against the (proven) benefits.


Tip #1: Identify Privacy Risks in New AI Tools

You can only protect health data if you can track it: where is it, who is using it, and why? In fact, protecting and monitoring health data is a central requirement of the HIPAA Privacy and Security Rules. With that in mind, it’s important to evaluate your AI tools before introducing them to your IT ecosystem, especially when electronic protected health information (ePHI) is involved.

Example: Open vs. Closed AI Models

There’s an ongoing debate about the public availability of AI technology. Open (or open-source) models make their code freely or widely available to encourage transparency, equitable access, and innovation.

Closed models, on the other hand, are exactly what they sound like. Developers of closed models limit and protect access to the AI system’s code and components, often citing security risks or commercial interests.

But here’s what you really need to remember, from a healthcare cybersecurity perspective:

Healthcare professionals work with sensitive data and have a responsibility to maintain the privacy of patient information. If you plan to integrate AI into your practice, mitigate risks by steering clear of open models. I’ll put it this way: data entered into an open system cannot be pulled back. Once your data is out in the wild, you can’t get it back. And you run the risk of making yourself, and your patients, the target of more spam, more attacks, and more privacy and security issues.


Tip #2: Take a Proactive Approach to Data Protection

Taking a proactive approach can be better than relying on regulatory bodies to weigh in, since they typically can’t keep up with the rapidly evolving tech landscape. Of course you should pay attention to what’s happening at the federal level, like the recent establishment of a new White House AI Council. But remember that the responsibilities of protecting patient data (and the consequences in the event of a data breach) ultimately fall on you.

Asking the Right Questions

A great first step before evaluating any new tech is to create practice-wide security assessment policies. Ask:

  • What is our Acceptable Use Policy for AI?
  • How are we proactively assessing the security of new AI tools?
  • When did we last update our Privacy Policies and MSAs (Master Service Agreements), and do they cover the use of AI?

Tip #3: Flip the Script

Given the strict regulation of patient health records and sensitive information, the risks of artificial intelligence might seem like a Pandora’s box. Better to leave it closed and let someone else find out the hard way.

That said, in some cases AI can actually help improve your cybersecurity measures and protect patient data. AI-powered tools can make it easier to detect phishing scams, authenticate and manage user access to electronic health records, identify and mitigate vulnerabilities, or even create more valuable and engaging security awareness training. There’s no wrong time to start improving your practice’s security posture!

The Future of Healthcare Security and Privacy

Healthcare systems and AI can (and do) go hand-in-hand. But we’re on the precipice of a new age of AI development.

Today, my advice is to be cautiously optimistic. Don’t rush in without assessing the risk, but still remain open to the possibilities. As I’ve shared before, a strong security posture can go a long way in helping your practice reap the benefits of modern technology while minimizing the inevitable risks.

As a parting thought: Consider how your health IT business associates can partner with you to protect patient privacy and ensure data protection. The game is changing, and you want strong players on your team.

Here at Raintree Systems, we help physical therapy, occupational therapy, speech-language pathology, and multi-disciplinary practices grow and succeed with scalable and robust software solutions. Raintree offers the only ONC-certified EHR system designed specifically for rehab therapy. Want to learn more? Schedule a demo and learn why high-growth PT, OT, SLP, and multi-disciplinary practices choose Raintree.


Don Silva Sr. is Raintree Systems’ Chief Information Security Officer and VP of System Ops & Infrastructure. As a senior leader with over 20 years’ experience leading global technology teams across a variety of industries, he has helped companies grow, mature, and align security and engineering teams with business goals. Read Don’s full bio >

Frequently Asked Questions

Is my practice at risk of a cyber attack?

Did you know healthcare systems rank among the top ten biggest targets for cyber attacks? Maybe you do! Major breaches make headlines, after all. But practices of all sizes should be on alert.

The Department of Health and Human Services Office for Civil Rights (OCR) reports the number of health data breaches annually. The most recent report shows a 5.4% increase between 2017 and 2021 in breaches affecting fewer than 500 individuals. In 2021, the total number of these "small" breaches was over 100 times greater than the number of breaches affecting larger organizations.

What are the biggest concerns about implementing AI in healthcare?

Some of the biggest concerns when it comes to implementing AI solutions in healthcare have to do with regulatory compliance: maintaining data privacy and confidentiality, managing cybersecurity threats, and navigating the complexity of integrating AI systems with existing healthcare IT infrastructure.

Additionally, there are worries about the reliability, accuracy, and transparency of AI systems, and the potential for algorithmic bias widening health equity gaps. Some speculate that healthcare professionals will become increasingly dependent on AI, while others see opportunities to streamline burdensome administrative tasks.

What makes integrating AI in healthcare so challenging?

Integrating AI solutions in healthcare poses a complex challenge. For one thing, newly introduced technology must be designed to securely communicate with existing healthcare IT infrastructure. Other barriers include privacy concerns, lack of trust in AI and machine learning models, and potentially a lack of awareness of the benefits and applications of the technology.

Will AI replace PT, OT, and SLP jobs?

The short answer is no; AI will not replace PT, OT, and SLP jobs. However, roles will likely evolve as AI advances and simplifies some traditional workflows. The most successful practitioners will be those who can adapt and innovate within the new healthcare landscape.


Blogs are created for educational and informational purposes only. The information provided does not constitute, and is not intended to constitute, legal or medical advice. When you read this information, visit our website, or access our materials, you are not forming an attorney-client, provider-patient, or other relationship with us.