last sync: 2024-Sep-19 17:51:32 UTC

Microsoft Managed Control 1622 - Boundary Protection | Regulatory Compliance - System and Communications Protection

Azure BuiltIn Policy definition

Source: Azure Portal
Display name: Microsoft Managed Control 1622 - Boundary Protection
Id: ecf56554-164d-499a-8d00-206b07c27bed
Version: 1.0.0
Versioning: Versions supported for versioning: 0 (Built-in Versioning [Preview])
Category: Regulatory Compliance
Description: Microsoft implements this System and Communications Protection control
Additional metadata
Name/Id: ACF1622 / Microsoft Managed Control 1622
Category: System and Communications Protection
Title: Boundary Protection - Monitor And Control Communications
Ownership: Customer
Description: The information system: Monitors and controls communications at the external boundary of the system and at key internal boundaries within the system;
Requirements: Azure implements boundary protection through a defense-in-depth strategy. As a cloud service comprised of numerous service teams and customers, logical isolation and segmentation are critical to the secure operation of Azure. The strategy includes network segmentation through VLAN and Network Security Group (NSG) segmentation, ACL restrictions, and encrypted communications. Internally, at the network level, Azure implements Jumpboxes, Debug Servers, Hop Boxes, and a VPN to restrict access to the Azure environment. To access the environment, users must first log on to the Jumpboxes, Debug Servers, Hop Boxes, or VPN using multifactor authentication. Azure is logically segmented into the management plane, which is heavily restricted to only the key service teams necessary to administer underlying networking and foundational services, and the customer plane, on which all Azure service teams and external customers reside and operate. Internal Azure service teams utilize the same logical isolation techniques that customers use to ensure their deployments are secure. The logical isolation of internal and customer infrastructure in a hyperscale cloud is fundamental to maintaining security. The overarching principle for a virtualized solution is to allow only those connections and communications that are necessary for that virtualized solution to operate, blocking all other ports and connections by default.

Azure Virtual Networks (VNets) ensure that all private network traffic is logically isolated from traffic belonging to other internal and external networks. Virtual Machines (VMs) in one VNet cannot communicate directly with VMs in a different VNet, even if both VNets are created by the same user. Network isolation ensures that communication between VMs remains private within a VNet. VNets provide isolation of network traffic between tenants as part of their fundamental design. A subscription can contain multiple logically isolated private networks and include firewall, load balancing, and network address translation. Each VNet is isolated from other VNets by default. Multiple deployments inside the same subscription can be placed on the same VNet and then communicate with each other through private IP addresses. Network access to VMs is limited by packet filtering at the network edge, at load balancers, and at the Host OS level. Each service team additionally configures its host firewalls to further limit connectivity, specifying for each listening port whether connections are accepted from external networks or only from role instances within the same cloud service or VNet. Azure provides network isolation for each deployment and enforces the following rules:
* Traffic between VMs always traverses trusted packet filters.
* Protocols such as Address Resolution Protocol (ARP), Dynamic Host Configuration Protocol (DHCP), and other OSI Layer-2 traffic from a VM are controlled using rate limiting and anti-spoofing protection.
* VMs cannot capture any traffic on the network that is not intended for them.
* VMs cannot send traffic to Azure private interfaces and infrastructure services, or to other users' VMs. VMs can only communicate with other VMs owned or controlled by the same team and with Azure service endpoints.
* When users put VMs on a VNet, those VMs get their own address spaces that are invisible, and hence not reachable, from VMs outside of the deployment or virtual network (unless configured to be visible via IP addresses).
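To illustrate the per-VNet isolation described above, the following sketch uses the Azure SDK for Python (assuming the azure-identity and azure-mgmt-network packages) to create two VNets in the same subscription. The subscription ID, resource group, region, and address ranges are placeholder values; by default, VMs placed in these two VNets cannot reach each other unless peering or a gateway is explicitly configured.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import AddressSpace, Subnet, VirtualNetwork

# Placeholder values: substitute your own subscription, resource group, region.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "example-rg"
LOCATION = "eastus"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Two VNets created by the same user in the same subscription: each gets its
# own private address space and is isolated from the other by default.
for name, prefix, subnet_prefix in [
    ("vnet-a", "10.1.0.0/16", "10.1.0.0/24"),
    ("vnet-b", "10.2.0.0/16", "10.2.0.0/24"),
]:
    vnet = VirtualNetwork(
        location=LOCATION,
        address_space=AddressSpace(address_prefixes=[prefix]),
        subnets=[Subnet(name="default", address_prefix=subnet_prefix)],
    )
    client.virtual_networks.begin_create_or_update(RESOURCE_GROUP, name, vnet).result()
```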
These environments are open only through the ports that users specify for access; if the VM is defined to have an external IP address, then all ports are open for external access. Azure's hyperscale network is designed to provide uniform high capacity between servers, performance isolation between services, and Ethernet Layer-2 semantics. Azure uses a number of networking implementations to achieve these goals: flat addressing to allow service instances to be placed anywhere in the network; load balancing to spread traffic uniformly across network paths; and end-system-based address resolution to scale to large server pools without introducing complexity to the network control plane. These implementations give each service the illusion that all the servers assigned to it, and only those servers, are connected by a single non-interfering Ethernet switch, a Virtual Layer 2 (VL2), and maintain this illusion even as the size of each service varies from one server to hundreds of thousands. This VL2 implementation achieves traffic performance isolation, ensuring that the traffic of one service cannot be affected by the traffic of any other service, as if each service were connected by a separate physical switch.

The Azure network uses two different IP-address families:
* The customer address (CA) is the defined/chosen VNet IP address, also referred to as Virtual IP (VIP). The network infrastructure operates using CAs, which are externally routable. All switches and interfaces are assigned CAs, and switches run an IP-based (Layer-3) link-state routing protocol that disseminates only these CAs. This design allows switches to obtain the complete switch-level topology and to forward packets encapsulated with CAs along shortest paths.
* The provider address (PA) is the Azure-assigned internal Fabric address that is not visible to users, also referred to as Dynamic IP (DIP). No traffic goes directly from the external network to an asset; all traffic from external networks must go through a Software Load Balancer (SLB) and be encapsulated to protect the internal Azure address space by only routing packets to valid Azure internal IP addresses and ports.

Network Address Translation (NAT) separates internal network traffic from external traffic. Internal traffic uses RFC 1918 address space, or private address space (the provider addresses, PAs), which is not externally routable. The translation from the PA to the CA is performed at the SLBs. CAs, which are externally routable, are translated into internal provider addresses (PAs) that are only routable within Azure. These addresses remain unaltered no matter how their servers' locations change due to virtual-machine migration or reprovisioning. Each PA is associated with a CA, which is the identifier of the Top of Rack (ToR) switch to which the server is connected. VL2 uses a scalable, reliable directory system to store and maintain the mapping of PAs to CAs; this mapping is created when servers are provisioned to a service and assigned PA addresses. An agent running in the network stack on every server, called the VL2 agent, invokes the directory system's resolution service to learn the actual location of the destination and then tunnels the original packet there. Azure assigns servers IP addresses that act as names alone, with no topological significance. Azure's VL2 addressing scheme separates these server names (PAs) from their locations (CAs).
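The directory lookup and VL2-agent tunneling described above can be restated as a small Python sketch. This is purely illustrative, not Azure fabric code: the class names, addresses, and the same-service policy are invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    src_pa: str     # provider address (the server's topology-free "name")
    dst_pa: str
    payload: bytes

@dataclass
class Encapsulated:
    outer_ca: str   # CA of the destination's ToR switch (its "location")
    inner: Packet

class DirectoryService:
    """Maintains the PA -> ToR-CA mapping created when servers are provisioned."""

    def __init__(self, pa_to_ca: dict, pa_to_service: dict):
        self.pa_to_ca = pa_to_ca
        self.pa_to_service = pa_to_service

    def resolve(self, requester_pa: str, dst_pa: str) -> Optional[str]:
        # Because the directory knows who is asking, it can enforce the policy
        # that only servers belonging to the same service may communicate.
        if self.pa_to_service.get(requester_pa) != self.pa_to_service.get(dst_pa):
            return None  # refusal acts as access control: the sender cannot route
        return self.pa_to_ca.get(dst_pa)

def vl2_agent_send(pkt: Packet, directory: DirectoryService) -> Optional[Encapsulated]:
    """Per-server VL2 agent: learn the destination's location, then tunnel to it."""
    ca = directory.resolve(pkt.src_pa, pkt.dst_pa)
    return Encapsulated(outer_ca=ca, inner=pkt) if ca else None
```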
The crux of offering Layer-2 semantics is having servers believe they share a single large IP subnet (i.e., the entire PA space) with other servers in the same service, while eliminating the Address Resolution Protocol (ARP) and Dynamic Host Configuration Protocol (DHCP) scaling bottlenecks that plague large Ethernet deployments. A server cannot send packets to a PA if the directory service refuses to provide it with a CA through which it can route its packets, which means that the directory service enforces access control policies. Further, since the directory system knows which server is making the request when handling a lookup, it can enforce fine-grained isolation policies. For example, it could enforce the policy that only servers belonging to the same service can communicate with each other. To route traffic between servers, which use PA addresses, on an underlying network that knows routes for CA addresses, the VL2 agent on each server captures packets from the host and encapsulates them with the CA address of the destination's ToR switch. Once the packet arrives at the CA (i.e., the destination ToR switch), the destination ToR switch decapsulates the packet and delivers it to the destination PA carried in the inner header. The packet is first delivered to one of the Intermediate switches, decapsulated by the switch, delivered to the ToR's CA, decapsulated again, and finally sent to the destination.

For external traffic, Azure provides multiple layers of assurance to enforce isolation depending on traffic patterns. When a user places an external IP on their VNet gateway, traffic from the external network that is destined for that IP address is routed to an Edge Router. Azure then uses Border Gateway Protocol (BGP) to share routing details with the external network to establish end-to-end connectivity. When communication begins with a resource within the VNet, the network traffic traverses as normal until it reaches a Microsoft ExpressRoute Edge (MSEE) Router. In both cases, VNets provide the means for Azure VMs to act as part of the user's external network. A cryptographically protected IPsec/IKE tunnel is established between Azure and the user's internal network (e.g., via Azure VPN Gateway or Azure ExpressRoute Private Peering), enabling the VM to connect securely to the user's resources as though it were directly on that network. At the Edge Router or the MSEE Router, the packet is encapsulated using Generic Routing Encapsulation (GRE). This encapsulation uses a unique identifier specific to the destination VNet, together with the destination address, to appropriately route the traffic to the identified VNet. Upon reaching the VNet Gateway, which is a special VNet used only to accept traffic from outside of an Azure VNet, the encapsulation is verified by the Azure network fabric to ensure that:
* the endpoint receiving the packet matches the unique VNet ID used to route the data, and
* the requested destination address exists in this VNet.
Once verified, the packet is routed as internal traffic from the VNet Gateway to the final requested destination address within the VNet. This approach ensures that traffic from external networks travels only to the Azure VNet for which it is destined, enforcing isolation. Internal traffic also uses GRE encapsulation/tunneling.
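The two checks performed at the VNet Gateway for inbound external traffic can be modeled as follows. This is a hypothetical sketch: the GrePacket and VnetGateway types and their field names are invented to restate the verification logic, not to describe actual fabric data structures.

```python
from dataclasses import dataclass

@dataclass
class GrePacket:
    vnet_id: str        # unique identifier stamped at the Edge/MSEE router
    dst_address: str    # requested destination address inside the target VNet
    payload: bytes

@dataclass
class VnetGateway:
    vnet_id: str
    known_addresses: frozenset  # addresses that actually exist in this VNet

    def accept(self, pkt: GrePacket) -> bool:
        # Check 1: the receiving endpoint matches the VNet ID used for routing.
        if pkt.vnet_id != self.vnet_id:
            return False
        # Check 2: the requested destination address exists in this VNet.
        if pkt.dst_address not in self.known_addresses:
            return False
        return True  # decapsulate and forward as internal VNet traffic

gateway = VnetGateway(vnet_id="vnet-1234", known_addresses=frozenset({"10.0.0.4"}))
assert gateway.accept(GrePacket("vnet-1234", "10.0.0.4", b""))      # delivered
assert not gateway.accept(GrePacket("vnet-9999", "10.0.0.4", b""))  # dropped
```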
When two resources in an Azure VNet attempt to establish communications with each other, the Azure network fabric reaches out to the Azure VNet routing directory service that is part of the Azure network fabric. The directory service uses the customer address (CA) and the requested destination address to determine the provider address (PA). This information, including the VNet identifier, CA, and PA, is then used to encapsulate the traffic with GRE. The Azure network uses this information to properly route the encapsulated data to the appropriate Azure host using the PA. The encapsulation is reviewed by the Azure network fabric to confirm that:
* the PA is a match,
* the CA is located at this PA, and
* the VNet identifier is a match.
Once all three are verified, the encapsulation is removed and the traffic is routed to the CA as normal traffic (e.g., to a VM endpoint). This approach provides VNet isolation assurance based on correct traffic routing between cloud services.

Azure VNets implement several mechanisms to ensure secure traffic between tenants. These mechanisms align with existing industry standards and security practices, and prevent well-known attack vectors, including:
* Prevent IP address spoofing: Whenever encapsulated traffic is transmitted by a VNet, the service reverifies the information on the receiving end of the transmission. The traffic is looked up and encapsulated independently at the start of the transmission, and reverified at the receiving endpoint to ensure the transmission was performed appropriately. This verification is done with an internal VNet feature called SpoofGuard, which verifies that the source and destination are valid and allowed to communicate, thereby preventing mismatches in expected encapsulation patterns that might otherwise permit spoofing. The GRE encapsulation process prevents spoofing, as any GRE encapsulation and encryption not performed by the Azure network fabric is treated as dropped traffic.
* Provide network segmentation across service teams with overlapping network spaces: Azure VNet's implementation relies on established tunneling standards such as GRE, which in turn allows the use of specific unique identifiers (VNet IDs) throughout the cloud. The VNet identifiers are used as scoping identifiers. This approach ensures that a service team always operates within its own unique address space, even when address spaces overlap between tenants and the Azure network fabric. Anything that has not been encapsulated with a valid VNet ID is blocked within the Azure network fabric. As in the example described above, any encapsulated traffic not produced by the Azure network fabric is discarded.
* Prevent traffic from crossing between VNets: Preventing traffic from crossing between VNets is done through the same mechanisms that handle address overlap and prevent spoofing. Traffic crossing between VNets is rendered infeasible by using unique VNet IDs established per tenant in combination with verification of all traffic at the source and destination. Users do not have access to the underlying transmission mechanisms that rely on these IDs to perform the encapsulation. Consequently, any attempt to encapsulate and simulate these mechanisms would lead to dropped traffic.
In addition to these key protections, all unexpected traffic originating from external networks is dropped by default. Any packet entering the Azure network first encounters an Edge router. Edge routers intentionally allow all inbound traffic into the Azure network except spoofed traffic.
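The three verifications applied to internal GRE-encapsulated traffic, together with the rule that non-fabric encapsulation is dropped, might be sketched like this. Again, all names are illustrative rather than Azure internals.

```python
from dataclasses import dataclass

@dataclass
class InternalGre:
    vnet_id: str
    ca: str               # customer address of the destination resource
    pa: str               # provider address of the host carrying that CA
    fabric_stamped: bool  # True only if the Azure fabric did the encapsulation

def host_accepts(pkt: InternalGre, host_pa: str, host_vnet_id: str,
                 cas_at_host: set) -> bool:
    """Return True only when all three checks pass; anything else is dropped."""
    if not pkt.fabric_stamped:       # SpoofGuard-style rule: encapsulation not
        return False                 # performed by the fabric is never honored
    return (pkt.pa == host_pa                  # 1. the PA is a match
            and pkt.ca in cas_at_host          # 2. the CA is located at this PA
            and pkt.vnet_id == host_vnet_id)   # 3. the VNet identifier matches
```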
This basic traffic filtering protects the Azure network from known malicious traffic. Azure also implements DDoS protection at the network layer, collecting logs to throttle or block traffic based on real-time and historical data analysis, and mitigates attacks on demand. Moreover, the Azure network fabric blocks traffic from any IPs originating in the Azure network fabric space that are spoofed. The Azure network fabric uses GRE and Virtual Extensible LAN (VXLAN) to validate that all allowed traffic is Azure-controlled traffic and that all non-Azure GRE traffic is blocked. By using GRE tunnels and VXLAN to segment traffic with unique keys, Azure meets RFC 3809 and RFC 4110. When using Azure VPN Gateway in combination with ExpressRoute, Azure meets RFC 4111 and RFC 4364. With a comprehensive approach to isolation encompassing external and internal network traffic, Azure VNets provide customers with assurance that Azure correctly routes traffic between VNets, allows proper network segmentation for tenants with overlapping address spaces, and prevents IP address spoofing.

Service teams are also able to utilize Azure services to further isolate and protect their resources. Using Network Security Groups (NSGs), a feature of Azure Virtual Network, service teams can filter traffic by source and destination IP address, port, and protocol via multiple inbound and outbound security rules, essentially acting as a distributed virtual firewall and IP-based network Access Control List (ACL). Service teams can apply an NSG to each NIC in a Virtual Machine, to the subnet that a NIC or another Azure resource is connected to, or directly to Virtual Machine Scale Sets, allowing finer control over the service team's infrastructure. At the infrastructure layer, Azure implements a Hypervisor firewall to protect all tenants running on top of the Hypervisor within virtual machines from unauthorized access. This Hypervisor firewall is distributed as part of the NSG rules deployed to the Host, implemented in the Hypervisor, and configured by the Fabric Controller agent. The Host OS instances utilize the built-in Windows Firewall to implement fine-grained ACLs at a greater granularity than router ACLs; these ACLs are maintained by the same software that provisions tenants, so they are never out of date, and are applied to Windows Firewall using the Machine Configuration File (MCF). At the top of the operating system stack is the Guest OS, which service teams utilize as their operating system. By default, this layer does not allow any inbound communication to a cloud service or virtual network, essentially making it part of a private network.
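To make the NSG mechanics concrete, here is a hedged example using the Azure SDK for Python (assuming the azure-mgmt-network package). It defines an NSG with one inbound and one outbound rule that filter by source/destination address, port, and protocol, then associates the NSG with an existing subnet; all resource names, address ranges, and priorities are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import NetworkSecurityGroup, SecurityRule, Subnet

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder values throughout
RG, LOCATION = "example-rg", "eastus"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

nsg = client.network_security_groups.begin_create_or_update(
    RG, "example-nsg",
    NetworkSecurityGroup(
        location=LOCATION,
        security_rules=[
            # Inbound: allow HTTPS only from a specific source address range.
            SecurityRule(name="allow-https-in", priority=100, direction="Inbound",
                         access="Allow", protocol="Tcp",
                         source_address_prefix="203.0.113.0/24", source_port_range="*",
                         destination_address_prefix="10.0.1.0/24",
                         destination_port_range="443"),
            # Outbound: deny traffic to the internet by default.
            SecurityRule(name="deny-internet-out", priority=4000, direction="Outbound",
                         access="Deny", protocol="*",
                         source_address_prefix="*", source_port_range="*",
                         destination_address_prefix="Internet",
                         destination_port_range="*"),
        ],
    ),
).result()

# Associate the NSG with a subnet so the rules apply to every NIC on it.
client.subnets.begin_create_or_update(
    RG, "example-vnet", "default",
    Subnet(address_prefix="10.0.1.0/24", network_security_group=nsg),
).result()
```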
Mode: Indexed
Type: Static
Preview: False
Deprecated: False
Effect: Fixed (audit)
RBAC role(s): none
Rule aliases: none
Rule resource types: IF (2)
Microsoft.Resources/subscriptions
Microsoft.Resources/subscriptions/resourceGroups
Compliance: Not a Compliance control
Initiatives usage: none
History: none
JSON compare: n/a