
Microsoft Managed Control 1115 - Audit Review, Analysis, And Reporting | Regulatory Compliance - Audit and Accountability

Azure BuiltIn Policy definition

Source: Azure Portal
Display name: Microsoft Managed Control 1115 - Audit Review, Analysis, And Reporting
Id: 0b653845-2ad9-4e09-a4f3-5a7c1d78353d
Version: 1.0.0
Versioning: Versions supported: 0 (Built-in Versioning [Preview])
Category: Regulatory Compliance
Description: Microsoft implements this Audit and Accountability control
Additional metadata: Name/Id: ACF1115 / Microsoft Managed Control 1115
Category: Audit and Accountability
Title: Audit Review, Analysis, And Reporting - Review
Ownership: Customer, Microsoft
Description: The organization reviews and analyzes information system audit records on a real-time basis for indications of compromise and for events that meet a pattern of a known attack methodology.
Requirements: Due to the size and complexity of the Azure environment, Azure uses log event forwarding tools to record events across all Azure assets and monitoring tools to automatically correlate and analyze the events gathered by each logging tool. Log reviews cannot be conducted manually in the Azure environment because of the high volume of events; instead, Azure implements automated methods to perform review, analysis, and reporting of logs. Azure uses tooling such as Azure Security Monitoring (ASM) and SCUBA to alert the appropriate personnel of security-relevant events directly in a variety of ways, including Service 360 (S360) notifications, Incident Management (IcM) tickets, and work items. These tools use audit policies and detections that report events to the Microsoft Operations Center (MOC), the Security Response Team, and service teams as appropriate, and the policies are tuned to alert on events of immediate concern.

There are multiple detection authoring teams across Azure, including data scientists working on Azure Security Center (ASC) and the Microsoft Threat Intelligence Center (MSTIC), who write detections both for external customer use via ASC and to provide coverage of applicable detections for internal Azure services via the logging and monitoring pipeline. Examples of the detections are documented in the help topic for ASC detection capabilities at the link below. These include integrated threat intelligence, which looks for known bad actors by leveraging global threat intelligence from Microsoft products and services, the Microsoft Digital Crimes Unit (DCU), the Microsoft Security Response Center (MSRC), and external feeds; behavioral analytics, which applies known patterns to discover malicious behavior; and anomaly detection, which uses statistical profiling to build a historical baseline and alerts on deviations from established baselines that conform to a potential attack vector. Examples of detections running for internal Azure services include suspicious process execution, malicious PowerShell scripts, lateral movement and internal reconnaissance, and hidden malware and exploitation attempts. These detections are routed to MSRC for triage and investigation.

The ASM team has atomic near-real-time monitors for unexpected asset access, malware, and audit processing failures (such as clearing of the security event log or system time changes). The alerts are auto-routed to services for review, except for identified high-value assets (HVA), where the alerts are centrally triaged by the Security Response Team. Once the raw logs are automatically correlated and processed, the appropriate teams review and analyze the alerts generated by the detections and by automated review of audit records in real time, on customer request or escalation, or for any other functionality impacting the alert in production. Groups of these correlated events that meet a pattern of a known attack methodology are collected and delivered to appropriate personnel via IcM, email, or work item. Personnel correlate alerts and append them to tickets for review and analysis and, if necessary, for future authoring and refinement of new or existing detections. The alerting system provides a response capability twenty-four (24) hours a day, seven (7) days a week. Troubleshooting Guides (TSGs) applied to work tickets provide instructions for the escalation of certain events to response personnel.
Mode: Indexed
Type: Static
Preview: False
Deprecated: False
Effect: Fixed (audit)
RBAC role(s): none
Rule aliases: none
Rule resource types: IF (2)
  Microsoft.Resources/subscriptions
  Microsoft.Resources/subscriptions/resourceGroups
Compliance: Not a Compliance control
Initiatives usage: none
History: none
JSON compare: n/a
JSON (api-version=2021-06-01)
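The full definition body is not reproduced on this page. As a rough guide only, a static Microsoft Managed Control definition of this kind can be sketched as below, assuming the standard Azure Policy definition schema: the policyRule "if" matches the two resource types listed above and the "then" applies the fixed audit effect. Everything beyond the display name, ID, version, mode, and resource types already shown on this page is illustrative, not the authoritative definition.

{
  "properties": {
    "displayName": "Microsoft Managed Control 1115 - Audit Review, Analysis, And Reporting",
    "policyType": "Static",
    "mode": "Indexed",
    "description": "Microsoft implements this Audit and Accountability control",
    "metadata": {
      "version": "1.0.0",
      "category": "Regulatory Compliance"
    },
    "parameters": {},
    "policyRule": {
      "if": {
        "anyOf": [
          { "field": "type", "equals": "Microsoft.Resources/subscriptions" },
          { "field": "type", "equals": "Microsoft.Resources/subscriptions/resourceGroups" }
        ]
      },
      "then": {
        "effect": "audit"
      }
    }
  },
  "id": "/providers/Microsoft.Authorization/policyDefinitions/0b653845-2ad9-4e09-a4f3-5a7c1d78353d",
  "name": "0b653845-2ad9-4e09-a4f3-5a7c1d78353d",
  "type": "Microsoft.Authorization/policyDefinitions"
}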
EPAC
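EPAC (Enterprise Azure Policy as Code) can reference a built-in definition such as this one by its GUID name from a policy assignment file. The snippet below is a minimal sketch assuming EPAC's standard assignment-file schema; the nodeName, assignment name, pacSelector key ("tenant"), and management-group scope are placeholders, and because this is a static control implemented by Microsoft, assigning it directly is generally unnecessary.

{
  "nodeName": "/mmc-1115",
  "definitionEntry": {
    "policyName": "0b653845-2ad9-4e09-a4f3-5a7c1d78353d",
    "displayName": "Microsoft Managed Control 1115 - Audit Review, Analysis, And Reporting"
  },
  "assignment": {
    "name": "mmc-1115-audit-review",
    "displayName": "Microsoft Managed Control 1115 - Audit Review, Analysis, And Reporting",
    "description": "Microsoft implements this Audit and Accountability control"
  },
  "scope": {
    "tenant": [
      "/providers/Microsoft.Management/managementGroups/contoso-root"
    ]
  }
}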