
HIMSS26: Forging the Alliance Between AI Governance and Cybersecurity Resilience

Security Arsenal Team
March 9, 2026
5 min read


The floor of the 2026 HIMSS Global Health Conference & Exhibition is buzzing with the latest in health tech, but amidst the flashy demos, a critical, sobering narrative is emerging. Industry leaders, including Phil Sobol of CereCore, are highlighting a fundamental shift in the healthcare security landscape: the undeniable convergence of AI governance and cybersecurity resilience.

For years, Artificial Intelligence was viewed as a futuristic add-on or an efficiency booster for Electronic Health Records (EHR) platforms like Epic, Meditech, and Oracle Health. Today, it is the backbone of modern care delivery. However, with this deep integration comes a new vector of risk that traditional security frameworks are ill-equipped to handle. The conversation at HIMSS26 isn't just about innovation anymore; it is about survival in an era where an AI failure can be as dangerous as a ransomware attack.

The Analysis: When AI Becomes a Security Liability

The convergence of AI governance and cybersecurity is not accidental; it is a necessity driven by the deployment of Generative AI (GenAI) and Large Language Models (LLMs) in clinical settings. When a major platform utilizes AI to assist with clinical documentation or patient triage, the "AI model" effectively becomes part of the critical infrastructure.

The Risks of Ungoverned AI

The primary threat landscape has shifted from simple data breaches to sophisticated manipulations of logic:

  1. Data Poisoning and Model Drift: Attackers are no longer just stealing data; they are looking to influence the data sets that train healthcare models. If a threat actor can subtly alter the inputs of a medical AI, they can degrade the accuracy of diagnostics or patient risk scores over time.
  2. Prompt Injection Attacks: As healthcare providers deploy chatbots for patient interaction, malicious actors can use prompt injection to bypass safety guardrails, potentially exposing Protected Health Information (PHI) or generating harmful medical advice.
  3. Automated Social Engineering: Threat actors are using the same AI tools that protect hospitals to automate phishing attacks at scale. These attacks are hyper-personalized and incredibly difficult for standard email filters to catch, making the human element the weakest link.
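As a concrete illustration of the prompt-injection risk above, a chatbot deployment can screen user input before it ever reaches the model. The sketch below is a hypothetical keyword-based pre-filter (the patterns and function name are illustrative, not a real product's API); pattern matching alone is easy to evade, so it would complement model-side guardrails, never replace them:

```python
import re

# Hypothetical input pre-filter for a patient-facing chatbot. It flags
# messages containing common prompt-injection phrasing before they reach
# the LLM. This is a first-line heuristic only.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous|prior) (instructions|rules)",
    r"you are now",
    r"system prompt",
    r"act as (an? )?(administrator|developer)",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the message matches a known injection pattern."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

A flagged message would be routed to logging and human review rather than answered, which also produces the audit trail regulators increasingly expect.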

Why Governance Must Meet Resilience

AI governance traditionally focuses on ethics, bias, and compliance. Cybersecurity focuses on confidentiality, integrity, and availability. At HIMSS26, experts are arguing that these disciplines can no longer be siloed. A model that has been governed for bias but remains vulnerable to prompt injection is a liability; conversely, a secure model that operates without ethical governance is a compliance disaster. True resilience requires a framework in which security policies are baked into the AI development lifecycle (often referred to as MLSecOps) from the start, ensuring that algorithms are auditable, explainable, and hardened against adversarial attacks.
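One way to operationalize this pairing is a release gate that refuses to deploy a model unless both governance and security reviews have passed. The sketch below is purely illustrative (all check names are hypothetical), but it captures the point that neither review can be skipped:

```python
# Illustrative MLSecOps-style release gate. A model is cleared for
# deployment only when governance checks (bias, explainability) AND
# security checks (adversarial and prompt-injection testing) pass.
REQUIRED_CHECKS = {
    "bias_audit",             # governance
    "explainability_review",  # governance
    "adversarial_testing",    # security
    "prompt_injection_test",  # security
}

def release_gate(results: dict) -> bool:
    """Return True only if every required check is present and passed."""
    passed = {name for name, ok in results.items() if ok}
    return REQUIRED_CHECKS.issubset(passed)
```

Wiring such a gate into CI/CD makes the governance-security alliance enforceable rather than aspirational.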

Executive Takeaways

For CISOs and CIOs navigating the expo floor and planning their 2026 roadmaps, the convergence of these fields demands immediate strategic attention:

  • Break Down Silos: AI governance cannot sit solely with the Data Science team. Security leaders must have a seat at the table to define threat models for every new AI deployment, whether it is a custom-built tool or a vendor feature in Epic or Oracle Health.
  • Inventory is Priority Zero: You cannot secure what you cannot see. Organizations are rapidly adopting "Shadow AI" tools without IT approval. An immediate inventory of all AI agents, plugins, and LLM access points within the network is essential to prevent data leakage.
  • Prepare for AI-Specific Regulation: As regulators catch up to the technology, frameworks focusing on algorithmic transparency and auditability will become law. Treating AI governance as a component of cybersecurity compliance positions organizations ahead of the curve rather than reacting to fines.

Mitigation Strategies

To address the convergence of AI and security, healthcare organizations must move beyond theoretical frameworks and implement technical controls.

1. Implement AI Usage Policies and Technical Blocks

Establish clear acceptable use policies for Generative AI. Back these policies with technical controls that prevent unauthorized exfiltration of sensitive data to public AI models.
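A minimal sketch of the "technical block" side of such a policy is an egress rule that denies traffic to public GenAI services unless the destination is on a sanctioned allowlist. The domain lists below are assumptions for illustration (e.g. an enterprise tenant covered by a Business Associate Agreement); real enforcement belongs in the proxy/DLP layer, not application code:

```python
# Hypothetical egress-policy sketch backing an AI acceptable-use policy.
# Known public GenAI destinations are blocked unless explicitly sanctioned.
PUBLIC_AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "api.anthropic.com",
}
SANCTIONED_AI_DOMAINS = {"api.openai.com"}  # assumption: one approved service

def egress_decision(host: str) -> str:
    """Return 'block' for unsanctioned public AI hosts, else 'allow'."""
    if host in PUBLIC_AI_DOMAINS and host not in SANCTIONED_AI_DOMAINS:
        return "block"
    return "allow"
```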

2. Audit for Unauthorized AI Tools

Security teams should actively scan endpoints for unauthorized AI applications or browser extensions that employees might be using to process PHI.

You can use the following PowerShell script to audit Windows endpoints for common Generative AI applications that may pose a shadow IT risk:

# PowerShell Script to Audit for Common Generative AI Applications on Windows Endpoints
# Run this script as part of a regular SOC assessment or via SCCM/Intune

$aiKeywords = @("ChatGPT", "Copilot", "Claude", "Jasper", "Bing AI", "OpenAI", "Perplexity")
$foundApps = @()

# Check 64-bit and 32-bit registry uninstall keys
$registryPaths = @(
    "HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\*",
    "HKLM:\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*"
)

foreach ($path in $registryPaths) {
    $apps = Get-ItemProperty $path -ErrorAction SilentlyContinue
    
    foreach ($app in $apps) {
        if ($app.DisplayName) {
            foreach ($keyword in $aiKeywords) {
                if ($app.DisplayName -like "*$keyword*") {
                    $foundApps += [PSCustomObject]@{
                        ComputerName = $env:COMPUTERNAME
                        DisplayName  = $app.DisplayName
                        DisplayVersion = $app.DisplayVersion
                        Publisher    = $app.Publisher
                        InstallDate  = $app.InstallDate
                    }
                    break  # Stop after the first keyword match to avoid duplicate entries
                }
            }
        }
    }
}

# Output results
if ($foundApps.Count -gt 0) {
    Write-Warning "Potential Shadow AI applications detected:"
    $foundApps | Format-Table -AutoSize
} else {
    Write-Host "No common Generative AI applications found in registry." -ForegroundColor Green
}


3. Monitor Network Traffic for AI Services

Utilize network monitoring tools to detect traffic destined for known AI API endpoints (e.g., OpenAI, Anthropic) that are not sanctioned by the organization. This helps identify data leakage risks early.
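A simple form of this monitoring is reviewing DNS or proxy logs for lookups of known AI API endpoints. The sketch below assumes a simplified `client domain` line format and an illustrative domain list; adapt the parser to your resolver's actual log schema:

```python
from collections import Counter

# Hypothetical log-review sketch: tally DNS queries to known AI API
# endpoints per client so anomalous or unsanctioned usage stands out.
AI_API_DOMAINS = {"api.openai.com", "api.anthropic.com"}

def summarize_ai_lookups(log_lines):
    """Count queries per (client, domain) pair for known AI endpoints."""
    hits = Counter()
    for line in log_lines:
        client, domain = line.split()
        if domain in AI_API_DOMAINS:
            hits[(client, domain)] += 1
    return hits
```

Feeding the resulting counts into the SIEM turns one-off curiosity into a trackable data-leakage signal.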


Tags: healthcare, hipaa, ransomware, himss26, ai-governance, healthcare-cybersecurity, ehr-security, risk-management

Are your security operations ready?

Get a free SOC assessment or see how AlertMonitor cuts through alert noise with automated triage.