Secure AI Adoption: Practical Implementation with Microsoft Purview and Defender

2026-01-30 7 min read
ai-security · data-protection · microsoft-purview · defender-xdr · advanced-hunting · dlp

Microsoft's new Data Security Index 2026 dropped some numbers worth paying attention to: 32% of data security breaches now involve generative AI tools. Meanwhile, only 47% of organizations have implemented AI-specific controls, up 8% from 2025.

I see this gap daily with customers. AI tools are being adopted faster than security teams can establish monitoring. Let me share some concrete measures to close that gap.

TL;DR: The Three Priorities

  1. Consolidate tools → Unified visibility in Purview
  2. Monitor AI usage → DLP and Defender for Cloud Apps
  3. Use AI defensively → Security Copilot for automation

Detection: Find Unauthorized AI Usage with KQL

Query 1: Identify Traffic to Known AI Services

// Defender for Cloud Apps - Shadow AI discovery
CloudAppEvents
| where Timestamp > ago(7d)
| where Application has_any ("ChatGPT", "Claude", "Gemini", "Copilot", "Perplexity", "Midjourney", "DALL-E")
| summarize 
    RequestCount = count(),
    UniqueUsers = dcount(AccountObjectId),
    ActionTypes = make_set(ActionType, 10) // CloudAppEvents has no reliable byte-count column
    by Application, bin(Timestamp, 1d)
| order by RequestCount desc

Query 2: Find Sensitive Data Sent to AI Endpoints

// Data Security Events - DLP policy violations (Advanced Hunting)
DataSecurityEvents
| where Timestamp > ago(30d)
| where DlpPolicyMatchInfo contains "AI" or DlpPolicyMatchInfo contains "Generative"
| where DlpPolicyEnforcementMode in (2, 3, 4) // Warn, Warn+Bypass, Block
| summarize 
    BlockedAttempts = countif(DlpPolicyEnforcementMode == 4),
    WarnedAttempts = countif(DlpPolicyEnforcementMode in (2, 3)),
    SensitiveTypes = make_set(SensitiveInfoTypeInfo, 10)
    by AccountUpn, ApplicationNames
| where BlockedAttempts > 5
| order by BlockedAttempts desc

Note: DataSecurityEvents is in Preview and requires Microsoft Purview integration with Defender XDR.
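Once Query 2 flags a user, a drill-down along these lines shows which sensitive info types were involved. This is a sketch against the same preview table, so treat the schema, and especially the JSON property name inside SensitiveInfoTypeInfo, as tentative:

```kql
// Drill into a single flagged user's sensitive-type matches
// NOTE: DataSecurityEvents is in Preview; field and property names may change.
DataSecurityEvents
| where Timestamp > ago(30d)
| where AccountUpn == "user@domain.com" // replace with the flagged UPN
| mv-expand SensitiveType = parse_json(SensitiveInfoTypeInfo)
| summarize Matches = count()
    by SensitiveInfoType = tostring(SensitiveType.SensitiveInfoTypeName), ApplicationNames // property name is an assumption
| order by Matches desc
```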

Query 3: Anomaly Detection on Data Exfiltration

// Identify users with abnormal request volume to AI services
let baseline = CloudAppEvents
| where Timestamp between (ago(30d) .. ago(7d))
| where Application has_any ("ChatGPT", "Claude", "Gemini")
| summarize AvgDailyRequests = count() / 23.0 by AccountObjectId; // 23-day baseline window; 23.0 forces real division
CloudAppEvents
| where Timestamp > ago(7d)
| where Application has_any ("ChatGPT", "Claude", "Gemini")
| summarize DailyRequests = count() by AccountObjectId, bin(Timestamp, 1d)
| join kind=inner baseline on AccountObjectId
| where DailyRequests > AvgDailyRequests * 3
| project AccountObjectId, Timestamp, DailyRequests, AvgDailyRequests, AnomalyRatio = round(DailyRequests / AvgDailyRequests, 1)

IR Playbook: Handling AI Data Leakage

Phase 1: Identification (0-30 min)

| Step | Action | Tool |
|------|--------|------|
| 1.1 | Verify alert. Is it actually sensitive data? | Purview Activity Explorer |
| 1.2 | Identify user and context | Defender XDR user page |
| 1.3 | Map scope: what was shared, with which service? | CloudAppEvents |
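Steps 1.2 and 1.3 can be combined into a single scoping query. The application list is an example; extend it to match your own watchlist:

```kql
// Scope a user's AI-app activity over the last 24 hours
CloudAppEvents
| where Timestamp > ago(1d)
| where AccountObjectId == "<object-id>" // the user from the alert
| where Application has_any ("ChatGPT", "Claude", "Gemini")
| project Timestamp, Application, ActionType, IPAddress
| order by Timestamp asc
```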

Phase 2: Containment (30-60 min)

# Suspend the user's sessions immediately (requires Microsoft Graph PowerShell)
Connect-MgGraph -Scopes "User.RevokeSessions.All"
Revoke-MgUserSignInSession -UserId "user@domain.com"

# Block access to AI applications
# Done in Defender for Cloud Apps > App Governance

Manual checks:

  • Has the user copied data to a personal device?
  • Has the data already been processed by the AI service?
  • Which third parties potentially have access?

Phase 3: Eradication and Recovery

  1. Reassess user permissions: Principle of least privilege
  2. Update DLP policy: Block instead of warn
  3. Implement session controls: Conditional Access for AI apps

Phase 4: Lessons Learned

Document in incident report:

  • What data was exposed?
  • Which AI service?
  • Why did existing controls fail?
  • Recommended policy changes
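For the incident report, a 30-day rollup of AI-related DLP activity gives the raw numbers. This reuses the preview table and enforcement-mode values from Query 2, so the same schema caveats apply:

```kql
// Weekly rollup of AI-related DLP enforcement for the incident report
DataSecurityEvents
| where Timestamp > ago(30d)
| where DlpPolicyMatchInfo contains "AI" or DlpPolicyMatchInfo contains "Generative"
| summarize 
    Blocked = countif(DlpPolicyEnforcementMode == 4),
    Warned = countif(DlpPolicyEnforcementMode in (2, 3)),
    AffectedUsers = dcount(AccountUpn)
    by bin(Timestamp, 7d)
| order by Timestamp asc
```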

Proactive Protection: Microsoft Purview DSPM

Data Security Posture Management (DSPM) is the key to proactive AI security. According to the report, over 80% of organizations are implementing DSPM strategies.

Getting Started:

  1. Enable continuous discovery

    • Purview > Data Security Posture Management > Get Started
    • Connect all data sources (SharePoint, OneDrive, Teams, SQL)
  2. Define sensitivity classification

    • Use built-in sensitive info types
    • Create custom classifiers for industry-specific data
  3. Set up automatic alerts

    • Alert on high-risk exposure
    • Integrate with Sentinel for SOAR response
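As a starting point for step 3, a scheduled Sentinel analytics rule can fire on repeated blocks. The threshold and lookback below are arbitrary example values, and the query assumes the same preview table as earlier:

```kql
// Candidate Sentinel analytics rule: repeated DLP blocks on AI apps
// Threshold (3) and lookback (1h) are example values; tune per environment.
DataSecurityEvents
| where Timestamp > ago(1h)
| where DlpPolicyEnforcementMode == 4 // Block
| where DlpPolicyMatchInfo contains "AI"
| summarize Blocks = count() by AccountUpn
| where Blocks >= 3
```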

Governance: Sanctioned vs. Unsanctioned AI Tools

Recommended Approach:

Allow (with monitoring):

  • Microsoft Copilot (integrated DLP)
  • Azure OpenAI (enterprise-controlled)
  • GitHub Copilot (with enterprise license)

Block or strictly monitor:

  • Public ChatGPT
  • Claude (unauthenticated)
  • Other shadow AI services

Conditional Access Policy Example:

Policy: Block-Unsanctioned-AI
Assignments:
  Users: All Users
  Cloud Apps: ChatGPT, Claude (public), Gemini (public)
  Conditions: Any device, any location
Grant: Block access
Session: N/A

Automation with Security Copilot

82% of organizations plan to use generative AI in security operations. Here are practical use cases:

  1. Incident triage: "Analyze this DLP event and rate severity"
  2. Policy recommendations: "Suggest DLP rules based on last month's blocks"
  3. Report generation: "Create executive summary of AI-related incidents"

What I'd Actually Prioritize

AI adoption isn't going anywhere. The key is building security into the AI strategy, not adding it later.

  1. Start with visibility. Deploy Defender for Cloud Apps and enable AI app discovery. You can't protect what you can't see.

  2. Graduated controls. Don't block everything. Allow sanctioned tools with DLP, block shadow AI.

  3. Automate response. Security Copilot and Sentinel playbooks let a small team cover more ground.


Connect on LinkedIn or see my CV.

About the Author

Trym Håkansson is Lead of Security Operations at Crayon, specializing in MDR, incident response, and Microsoft security platforms.