Caution! This content was not written by AI; an actual human being wrote it.
How do we ensure that AI is set up appropriately and behaving responsibly in our healthcare organizations? Typically, systems, devices, and applications send data to logs, and traditionally that data is accumulated and reported monthly. Errors are corrected, abuses addressed, and problems fixed, but only in retrospect. Given how quickly healthcare IT environments change (new data arriving daily, new users, network changes, and so on), it would be ideal to monitor AI-related activity in real time to ensure policy adherence and compliance. How do we do this? Answer: automated risk management tools.
What Are We Looking For?
The first concern is what new data is being fed to our AI applications to allow them to learn and improve, whether in clinical or administrative settings across the healthcare enterprise. Also, what data sets, applications, APIs, file shares, and so on do our AI applications have access to? Ideally, these parameters are specified up front as part of an AI lifecycle implementation. Policies should define which data the AI is allowed to access and how sensitive data/information used for learning is specifically handled. All of this then needs to be monitored and reported for compliance.
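To make this concrete, here is a minimal sketch in Python of what an encoded data-access policy and compliance check might look like. Everything in it (the application names, data-source names, and the check function) is a hypothetical illustration, not any particular product's API; in practice, the events would come from a live log stream rather than a hard-coded call.

```python
# Hypothetical sketch: a data-access policy for AI applications, checked
# against observed access events pulled from real-time logs. All names
# below are illustrative assumptions.

ALLOWED_SOURCES = {
    "radiology-ai": {"imaging_archive", "deidentified_training_set"},
    "scheduling-ai": {"appointment_db"},
}

SENSITIVE_SOURCES = {"patient_records", "billing_db"}  # PHI-bearing stores

def check_access(app: str, source: str) -> str:
    """Classify an observed data-access event for compliance reporting."""
    allowed = ALLOWED_SOURCES.get(app, set())
    if source in allowed:
        return "OK"
    if source in SENSITIVE_SOURCES:
        return "VIOLATION: unapproved access to sensitive data"
    return "REVIEW: access outside approved policy"

# Example: an event observed in the access log
print(check_access("radiology-ai", "patient_records"))
# -> VIOLATION: unapproved access to sensitive data
```

The design point is simply that the policy lives in one machine-readable place, so every access event can be scored against it continuously instead of reviewed in a monthly report.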
Who Has Access, and What Are They Doing?
On the other side of the coin, monitoring user and AI activity is also critical. If the AI application acts autonomously through routines, associations, and algorithms, all of these need to be monitored for accuracy and compliance. The ability to monitor aberrant or nefarious end-user activity is also a must. We want to know if a user is repeatedly running queries outside their normal job function. For instance, if a user continuously tries to extract personal health information on patients when that is not part of their job, we would want to flag this behavior in real time. We would also want to stay abreast of application vulnerabilities to avert infiltration by a rogue user or hacker who has gained access to the network. Clearly, then, access to AI needs role-based monitoring, especially when a wide variety of roles use the AI system; across an enterprise, AI applications may be used by clinical, administrative, janitorial, and supply chain staff, among others.
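As an illustration of that kind of real-time flagging, the sketch below counts PHI queries per user in a sliding window and raises an alert when a user whose role does not include PHI access exceeds a threshold. The roles, window size, and threshold are assumptions chosen for the example; a production tool would tie these to the organization's actual role definitions and tune the threshold to baseline activity.

```python
# Hypothetical sketch: flag aberrant PHI query activity in real time using
# a per-user sliding window. Roles, window, and threshold are illustrative.
import time
from collections import defaultdict, deque

PHI_ROLES = {"physician", "nurse", "him_analyst"}   # roles permitted PHI access
WINDOW_SECONDS = 300                                # 5-minute sliding window
THRESHOLD = 5                                       # queries before we flag

recent_phi_queries: dict[str, deque] = defaultdict(deque)

def on_phi_query(user: str, role: str, now: float | None = None) -> bool:
    """Record a PHI query event; return True if it should be flagged."""
    now = now or time.time()
    q = recent_phi_queries[user]
    q.append(now)
    # Drop events that have fallen out of the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    # Flag users outside PHI-permitted roles who query repeatedly.
    return role not in PHI_ROLES and len(q) >= THRESHOLD

# Example: a supply-chain user repeatedly querying patient records
for i in range(6):
    if on_phi_query("jdoe", "supply_chain", now=1000.0 + i):
        print(f"ALERT: jdoe exceeded {THRESHOLD} PHI queries in window")
```

Because the check runs on each event as it arrives, the alert fires while the behavior is happening, not in a month-end report.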
Case for Automated Risk Management Tools
Perfect scenario: We have an automated risk management tool that:
1. Automates the mapping of the healthcare entity’s cybersecurity and risk management program into one centralized “single pane of glass” tool available to all stakeholders in the enterprise: the board, the executives, management, and the CIO, CISO, and their staff all have access.
2. Monitors change (change management) in the enterprise in real time, thus keeping all stakeholders on the same page daily.
3. Monitors AI applications in real time for risk, change, lifecycle activities, and usage, including aberrant behavior by users and by the AI application itself.
This capability exists today in evolving risk management tools. The sooner organizations adopt these technologies, the faster they can move from being reactive, “fire fighting” healthcare IT organizations to proactively managing their environments, taking charge of their IT futures, and responsibly adopting AI technology.
For more information about CSI Companies’ Security and AI Readiness Programs, visit our website and speak with one of our experts today!
About the Author
Paul J. Caracciolo is a distinguished graduate of the State University of New York at Potsdam, holding a bachelor’s degree in Earth Sciences with a minor in Computer Science. With a dedicated focus on healthcare computing, he has consistently leveraged technology to enhance the standards of patient care throughout his professional journey. Paul’s impressive track record includes executive roles such as Chief Technology Officer (CTO) at Stanford Hospital and Clinics, Chief Information Security Officer (CISO) at Duke University Health System, and CTO/CISO at CommonSpirit Health, showcasing his expertise and leadership in the healthcare technology sector.