Updates from: 03/06/2021 04:08:33
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Relyingparty https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/relyingparty.md
The following example shows a relying party with [UserInfo endpoint](userinfo-en
## DefaultUserJourney
-The `DefaultUserJourney` element specifies a reference to the identifier of the user journey that is usually defined in the Base or Extensions policy. The following examples show the sign-up or sign-in user journey specified in the **RelyingParty** element:
+The `DefaultUserJourney` element specifies a reference to the identifier of the user journey that is defined in the Base or Extensions policy. The following examples show the sign-up or sign-in user journey specified in the **RelyingParty** element:
*B2C_1A_signup_signin* policy:
The **Protocol** element contains the following attribute:
| Attribute | Required | Description |
| | -- | -- |
| Name | Yes | The name of a valid protocol supported by Azure AD B2C that is used as part of the technical profile. Possible values: `OpenIdConnect` or `SAML2`. The `OpenIdConnect` value represents the OpenID Connect 1.0 protocol standard as per the OpenID Foundation specification. The `SAML2` value represents the SAML 2.0 protocol standard as per the OASIS specification. |
+### Metadata
+
+When the protocol is `SAML`, the **Metadata** element contains the following items. For more information, see [Options for registering a SAML application in Azure AD B2C](saml-service-provider-options.md).
+
+| Attribute | Required | Description |
+| | -- | -- |
+| IdpInitiatedProfileEnabled | No | Indicates whether the identity provider-initiated flow is supported. Possible values: `true` or `false` (default). |
+| XmlSignatureAlgorithm | No | The method that Azure AD B2C uses to sign the SAML Response. Possible values: `Sha256`, `Sha384`, `Sha512`, or `Sha1`. Make sure you configure the signature algorithm on both sides with the same value. Use only an algorithm that your certificate supports. To configure the SAML Assertion, see [SAML issuer technical profile metadata](saml-issuer-technical-profile.md#metadata). |
+| DataEncryptionMethod | No | Indicates the method that Azure AD B2C uses to encrypt the data, using the Advanced Encryption Standard (AES) algorithm. The metadata controls the value of the `<EncryptedData>` element in the SAML response. Possible values: `Aes256` (default), `Aes192`, `Sha512`, or `Aes128`. |
+| KeyEncryptionMethod | No | Indicates the method that Azure AD B2C uses to encrypt the copy of the key that was used to encrypt the data. The metadata controls the value of the `<EncryptedKey>` element in the SAML response. Possible values: `Rsa15` (default) - RSA Public Key Cryptography Standard (PKCS) Version 1.5 algorithm, `RsaOaep` - RSA Optimal Asymmetric Encryption Padding (OAEP) encryption algorithm. |
+| UseDetachedKeys | No | Possible values: `true` or `false` (default). When the value is set to `true`, Azure AD B2C changes the format of the encrypted assertions. Using detached keys adds the encrypted assertion as a child of the `EncryptedAssertion` element as opposed to the `EncryptedData` element. |
+| WantsSignedResponses | No | Indicates whether Azure AD B2C signs the `Response` section of the SAML response. Possible values: `true` (default) or `false`. |
+| RemoveMillisecondsFromDateTime | No | Indicates whether milliseconds are removed from datetime values within the SAML response (these include IssueInstant, NotBefore, NotOnOrAfter, and AuthnInstant). Possible values: `false` (default) or `true`. |
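For illustration, a relying party technical profile that sets a few of these items might look like the following sketch. The values shown are examples, not defaults; the element shape matches the other samples in this article:

```xml
<RelyingParty>
  <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
  <TechnicalProfile Id="PolicyProfile">
    <DisplayName>PolicyProfile</DisplayName>
    <Protocol Name="SAML2"/>
    <Metadata>
      <!-- Support the identity provider-initiated flow -->
      <Item Key="IdpInitiatedProfileEnabled">true</Item>
      <!-- Sign the SAML response with SHA-256 -->
      <Item Key="XmlSignatureAlgorithm">Sha256</Item>
      <!-- Emit datetime values without milliseconds -->
      <Item Key="RemoveMillisecondsFromDateTime">true</Item>
    </Metadata>
    ...
  </TechnicalProfile>
</RelyingParty>
```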
+
### OutputClaims

The **OutputClaims** element contains the following element:
The **OutputClaim** element contains the following attributes:
### SubjectNamingInfo

With the **SubjectNamingInfo** element, you control the value of the token subject:
+
- **JWT token** - the `sub` claim. This is a principal about which the token asserts information, such as the user of an application. This value is immutable and cannot be reassigned or reused. It can be used to perform safe authorization checks, such as when the token is used to access a resource. By default, the subject claim is populated with the object ID of the user in the directory. For more information, see [Token, session and single sign-on configuration](session-behavior.md).
-- **SAML token** - the `<Subject><NameID>` element which identifies the subject element. The NameId format can be modified.
+- **SAML token** - the `<Subject><NameID>` element, which identifies the subject element. The NameId format can be modified.
The **SubjectNamingInfo** element contains the following attribute:
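For illustration, a **SubjectNamingInfo** declaration in a SAML relying party might look like the following sketch (the `ClaimType` and `Format` values are examples, not defaults):

```xml
<SubjectNamingInfo ClaimType="objectId" Format="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent" />
```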
active-directory-b2c Saml Service Provider Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/saml-service-provider-options.md
Previously updated : 03/03/2021 Last updated : 03/04/2021
This article describes the configuration options that are available when connect
## Encrypted SAML assertions
-When your application expects SAML assertions to be in an encrypted format, need to make sure that encryption is enabled in the Azure AD B2C policy.
+When your application expects SAML assertions to be in an encrypted format, you need to make sure that encryption is enabled in the Azure AD B2C policy.
Azure AD B2C uses the service provider's public key certificate to encrypt the SAML assertion. The public key must exist in the SAML application's metadata endpoint with the KeyDescriptor 'use' set to 'Encryption', as shown in the following example:
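A sketch of what that metadata entry might contain; the certificate value is a placeholder:

```xml
<KeyDescriptor use="encryption">
  <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
    <X509Data>
      <!-- Placeholder: the base64-encoded public key certificate of the service provider -->
      <X509Certificate>MIIDWTCCAkGgAwIBAgIJAL...</X509Certificate>
    </X509Data>
  </KeyInfo>
</KeyDescriptor>
```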
To enable Azure AD B2C to send encrypted assertions, set the **WantsEncryptedAss
</RelyingParty>
```
+### Encryption method
+
+To configure the encryption method used to encrypt the SAML assertion data, set the `DataEncryptionMethod` metadata key within the relying party. Possible values are `Aes256` (default), `Aes192`, `Sha512`, or `Aes128`. The metadata controls the value of the `<EncryptedData>` element in the SAML response.
+
+To configure the encryption method used to encrypt the copy of the key that was used to encrypt the SAML assertion data, set the `KeyEncryptionMethod` metadata key within the relying party. Possible values are `Rsa15` (default), the RSA Public Key Cryptography Standard (PKCS) Version 1.5 algorithm, and `RsaOaep`, the RSA Optimal Asymmetric Encryption Padding (OAEP) encryption algorithm. The metadata controls the value of the `<EncryptedKey>` element in the SAML response.
+
+The following example shows the `EncryptedAssertion` section of a SAML assertion. The encrypted data method is `Aes128`, and the encrypted key method is `Rsa15`.
+
+```xml
+<saml:EncryptedAssertion>
+ <xenc:EncryptedData xmlns:xenc="http://www.w3.org/2001/04/xmlenc#"
+ xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" Type="http://www.w3.org/2001/04/xmlenc#Element">
+ <xenc:EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#aes128-cbc" />
+ <dsig:KeyInfo>
+ <xenc:EncryptedKey>
+ <xenc:EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#rsa-1_5" />
+ <xenc:CipherData>
+ <xenc:CipherValue>...</xenc:CipherValue>
+ </xenc:CipherData>
+ </xenc:EncryptedKey>
+ </dsig:KeyInfo>
+ <xenc:CipherData>
+ <xenc:CipherValue>...</xenc:CipherValue>
+ </xenc:CipherData>
+ </xenc:EncryptedData>
+</saml:EncryptedAssertion>
+```
+
+You can change the format of the encrypted assertions. To configure the encryption format, set the `UseDetachedKeys` metadata key within the relying party. Possible values: `true` or `false` (default). When the value is set to `true`, detached keys add the encrypted assertion as a child of the `EncryptedAssertion` element as opposed to the `EncryptedData` element.
+
+To configure the encryption method and format, use the metadata keys within the [relying party technical profile](relyingparty.md#technicalprofile):
+
+```xml
+<RelyingParty>
+ <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
+ <TechnicalProfile Id="PolicyProfile">
+ <DisplayName>PolicyProfile</DisplayName>
+ <Protocol Name="SAML2"/>
+ <Metadata>
+ <Item Key="DataEncryptionMethod">Aes128</Item>
+ <Item Key="KeyEncryptionMethod">Rsa15</Item>
+ <Item Key="UseDetachedKeys">false</Item>
+ </Metadata>
+    ...
+ </TechnicalProfile>
+</RelyingParty>
+```
+
## Identity provider-initiated flow

When your application expects to receive a SAML assertion without first sending a SAML AuthN request to the identity provider, you must configure Azure AD B2C for identity provider-initiated flow.
We provide a complete sample policy that you can use for testing with the SAML t
You can configure the signature algorithm used to sign the SAML assertion. Possible values are `Sha256`, `Sha384`, `Sha512`, or `Sha1`. Make sure the technical profile and application use the same signature algorithm. Use only the algorithm that your certificate supports.
-Configure the signature algorithm using the `XmlSignatureAlgorithm` metadata key within the RelyingParty metadata node.
+Configure the signature algorithm using the `XmlSignatureAlgorithm` metadata key within the relying party Metadata element.
```xml <RelyingParty>
Configure the signature algorithm using the `XmlSignatureAlgorithm` metadata key
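A minimal sketch of that configuration, following the same relying-party shape used elsewhere in this article (the algorithm value is an example):

```xml
<RelyingParty>
  <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
  <TechnicalProfile Id="PolicyProfile">
    <DisplayName>PolicyProfile</DisplayName>
    <Protocol Name="SAML2"/>
    <Metadata>
      <!-- Sign the SAML assertion with SHA-256 -->
      <Item Key="XmlSignatureAlgorithm">Sha256</Item>
    </Metadata>
    ...
  </TechnicalProfile>
</RelyingParty>
```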
## SAML response lifetime
-You can configure the length of time the SAML response remains valid. Set the lifetime using the `TokenLifeTimeInSeconds` metadata item within the SAML Token Issuer technical profile. This value is the number of seconds that can elapse from the `NotBefore` timestamp calculated at the token issuance time. Automatically, the time picked for this is your current time. The default lifetime is 300 seconds (5 minutes).
+You can configure the length of time the SAML response remains valid. Set the lifetime using the `TokenLifeTimeInSeconds` metadata item within the SAML Token Issuer technical profile. This value is the number of seconds that can elapse from the `NotBefore` timestamp calculated at the token issuance time. The default lifetime is 300 seconds (5 minutes).
```xml <ClaimsProvider>
For example, when the `TokenNotBeforeSkewInSeconds` is set to `120` seconds:
</TechnicalProfile>
```
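Putting the two lifetime settings together, a sketch of the SAML token issuer technical profile might look like this (the values shown are illustrative):

```xml
<ClaimsProvider>
  <DisplayName>Token Issuer</DisplayName>
  <TechnicalProfiles>
    <TechnicalProfile Id="Saml2AssertionIssuer">
      <DisplayName>Token Issuer</DisplayName>
      <Protocol Name="SAML2"/>
      <OutputTokenFormat>SAML2</OutputTokenFormat>
      <Metadata>
        <!-- Response is valid for 5 minutes from the NotBefore timestamp -->
        <Item Key="TokenLifeTimeInSeconds">300</Item>
        <!-- Allow 2 minutes of clock skew before NotBefore -->
        <Item Key="TokenNotBeforeSkewInSeconds">120</Item>
      </Metadata>
      ...
    </TechnicalProfile>
  </TechnicalProfiles>
</ClaimsProvider>
```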
+## Remove milliseconds from date and time
+
+You can specify whether milliseconds are removed from datetime values within the SAML response (these include IssueInstant, NotBefore, NotOnOrAfter, and AuthnInstant). To remove the milliseconds, set the `RemoveMillisecondsFromDateTime` metadata key within the relying party. Possible values: `false` (default) or `true`.
+
+```xml
+<ClaimsProvider>
+ <DisplayName>Token Issuer</DisplayName>
+ <TechnicalProfiles>
+ <TechnicalProfile Id="Saml2AssertionIssuer">
+ <DisplayName>Token Issuer</DisplayName>
+ <Protocol Name="SAML2"/>
+ <OutputTokenFormat>SAML2</OutputTokenFormat>
+ <Metadata>
+ <Item Key="RemoveMillisecondsFromDateTime">true</Item>
+ </Metadata>
+ ...
+    </TechnicalProfile>
+  </TechnicalProfiles>
+</ClaimsProvider>
+```
+
## Azure AD B2C issuer ID

If you have multiple SAML applications that depend on different `entityID` values, you can override the `issueruri` value in your relying party file. To override the issuer URI, copy the technical profile with the "Saml2AssertionIssuer" ID from the base file and override the `issueruri` value.
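A sketch of such an override; the entity ID shown is a placeholder for your own value, and the key name follows the SAML issuer technical profile metadata:

```xml
<ClaimsProvider>
  <DisplayName>Token Issuer</DisplayName>
  <TechnicalProfiles>
    <TechnicalProfile Id="Saml2AssertionIssuer">
      <Metadata>
        <!-- Placeholder: replace with the entityID your application expects -->
        <Item Key="IssuerUri">https://issuer.example.com/my-entity-id</Item>
      </Metadata>
    </TechnicalProfile>
  </TechnicalProfiles>
</ClaimsProvider>
```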
active-directory Functions For Customizing Application Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/functions-for-customizing-application-data.md
Previously updated : 02/05/2020 Last updated : 03/04/2021
# Reference for writing expressions for attribute mappings in Azure AD
-When you configure provisioning to a SaaS application, one of the types of attribute mappings that you can specify is an expression mapping. For these, you must write a script-like expression that allows you to transform your usersΓÇÖ data into formats that are more acceptable for the SaaS application.
+When you configure provisioning to a SaaS application, one of the types of attribute mappings that you can specify is an expression mapping. For these, you must write a script-like expression that allows you to transform your users' data into formats that are more acceptable for the SaaS application.
## Syntax overview
The syntax for Expressions for Attribute Mappings is reminiscent of Visual Basic
## List of Functions
-[Append](#append) &nbsp;&nbsp;&nbsp;&nbsp; [BitAnd](#bitand) &nbsp;&nbsp;&nbsp;&nbsp; [CBool](#cbool) &nbsp;&nbsp;&nbsp;&nbsp; [Coalesce](#coalesce) &nbsp;&nbsp;&nbsp;&nbsp; [ConvertToBase64](#converttobase64) &nbsp;&nbsp;&nbsp;&nbsp; [ConvertToUTF8Hex](#converttoutf8hex) &nbsp;&nbsp;&nbsp;&nbsp; [Count](#count) &nbsp;&nbsp;&nbsp;&nbsp; [CStr](#cstr) &nbsp;&nbsp;&nbsp;&nbsp; [DateFromNum](#datefromnum) &nbsp;[FormatDateTime](#formatdatetime) &nbsp;&nbsp;&nbsp;&nbsp; [Guid](#guid) &nbsp;&nbsp;&nbsp;&nbsp; [IIF](#iif) &nbsp;&nbsp;&nbsp;&nbsp;[InStr](#instr) &nbsp;&nbsp;&nbsp;&nbsp; [IsNull](#isnull) &nbsp;&nbsp;&nbsp;&nbsp; [IsNullOrEmpty](#isnullorempty) &nbsp;&nbsp;&nbsp;&nbsp; [IsPresent](#ispresent) &nbsp;&nbsp;&nbsp;&nbsp; [IsString](#isstring) &nbsp;&nbsp;&nbsp;&nbsp; [Item](#item) &nbsp;&nbsp;&nbsp;&nbsp; [Join](#join) &nbsp;&nbsp;&nbsp;&nbsp; [Left](#left) &nbsp;&nbsp;&nbsp;&nbsp; [Mid](#mid) &nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp; [NormalizeDiacritics](#normalizediacritics) [Not](#not) &nbsp;&nbsp;&nbsp;&nbsp; [RemoveDuplicates](#removeduplicates) &nbsp;&nbsp;&nbsp;&nbsp; [Replace](#replace) &nbsp;&nbsp;&nbsp;&nbsp; [SelectUniqueValue](#selectuniquevalue)&nbsp;&nbsp;&nbsp;&nbsp; [SingleAppRoleAssignment](#singleapproleassignment)&nbsp;&nbsp;&nbsp;&nbsp; [Split](#split)&nbsp;&nbsp;&nbsp;&nbsp;[StripSpaces](#stripspaces) &nbsp;&nbsp;&nbsp;&nbsp; [Switch](#switch)&nbsp;&nbsp;&nbsp;&nbsp; [ToLower](#tolower)&nbsp;&nbsp;&nbsp;&nbsp; [ToUpper](#toupper)&nbsp;&nbsp;&nbsp;&nbsp; [Word](#word)
+[Append](#append) &nbsp;&nbsp;&nbsp;&nbsp; [BitAnd](#bitand) &nbsp;&nbsp;&nbsp;&nbsp; [CBool](#cbool) &nbsp;&nbsp;&nbsp;&nbsp; [Coalesce](#coalesce) &nbsp;&nbsp;&nbsp;&nbsp; [ConvertToBase64](#converttobase64) &nbsp;&nbsp;&nbsp;&nbsp; [ConvertToUTF8Hex](#converttoutf8hex) &nbsp;&nbsp;&nbsp;&nbsp; [Count](#count) &nbsp;&nbsp;&nbsp;&nbsp; [CStr](#cstr) &nbsp;&nbsp;&nbsp;&nbsp; [DateFromNum](#datefromnum) &nbsp;[FormatDateTime](#formatdatetime) &nbsp;&nbsp;&nbsp;&nbsp; [Guid](#guid) &nbsp;&nbsp;&nbsp;&nbsp; [IIF](#iif) &nbsp;&nbsp;&nbsp;&nbsp;[InStr](#instr) &nbsp;&nbsp;&nbsp;&nbsp; [IsNull](#isnull) &nbsp;&nbsp;&nbsp;&nbsp; [IsNullOrEmpty](#isnullorempty) &nbsp;&nbsp;&nbsp;&nbsp; [IsPresent](#ispresent) &nbsp;&nbsp;&nbsp;&nbsp; [IsString](#isstring) &nbsp;&nbsp;&nbsp;&nbsp; [Item](#item) &nbsp;&nbsp;&nbsp;&nbsp; [Join](#join) &nbsp;&nbsp;&nbsp;&nbsp; [Left](#left) &nbsp;&nbsp;&nbsp;&nbsp; [Mid](#mid) &nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp; [NormalizeDiacritics](#normalizediacritics) [Not](#not) &nbsp;&nbsp;&nbsp;&nbsp; [NumFromDate](#numfromdate) &nbsp;&nbsp;&nbsp;&nbsp; [RemoveDuplicates](#removeduplicates) &nbsp;&nbsp;&nbsp;&nbsp; [Replace](#replace) &nbsp;&nbsp;&nbsp;&nbsp; [SelectUniqueValue](#selectuniquevalue)&nbsp;&nbsp;&nbsp;&nbsp; [SingleAppRoleAssignment](#singleapproleassignment)&nbsp;&nbsp;&nbsp;&nbsp; [Split](#split)&nbsp;&nbsp;&nbsp;&nbsp;[StripSpaces](#stripspaces) &nbsp;&nbsp;&nbsp;&nbsp; [Switch](#switch)&nbsp;&nbsp;&nbsp;&nbsp; [ToLower](#tolower)&nbsp;&nbsp;&nbsp;&nbsp; [ToUpper](#toupper)&nbsp;&nbsp;&nbsp;&nbsp; [Word](#word)
### Append
Takes a source string value and appends the suffix to the end of it.
| **source** |Required |String |Usually name of the attribute from the source object. |
| **suffix** |Required |String |The string that you want to append to the end of the source value. |
+
+### Append constant suffix to user name
+Example: If you are using a Salesforce Sandbox, you might need to append an additional suffix to all your user names before synchronizing them.
+
+**Expression:**
+`Append([userPrincipalName], ".test")`
+
+**Sample input/output:**
+
+* **INPUT**: (userPrincipalName): "John.Doe@contoso.com"
+* **OUTPUT**: "John.Doe@contoso.com.test"
+
### BitAnd

**Function:**
In other words, it returns 0 in all cases except when the corresponding bits of
| Name | Required/ Repeating | Type | Notes |
| | | | |
-| **value1** |Required |num |Numeric value that should be ANDΓÇÖed with value2|
-| **value2** |Required |num |Numeric value that should be ANDΓÇÖed with value1|
+| **value1** |Required |num |Numeric value that should be AND'ed with value2|
+| **value2** |Required |num |Numeric value that should be AND'ed with value1|
**Example:** `BitAnd(&HF, &HF7)`
Returns the first source value that is not NULL. If all arguments are NULL and d
| **source1 … sourceN** | Required | String |Required, variable-number of times. Usually name of the attribute from the source object. |
| **defaultValue** | Optional | String | Default value to be used when all source values are NULL. Can be empty string (""). |
+### Flow mail value if not NULL, otherwise flow userPrincipalName
+Example: You wish to flow the mail attribute if it is present. If it is not, you wish to flow the value of userPrincipalName instead.
+
+**Expression:**
+`Coalesce([mail],[userPrincipalName])`
+
+**Sample input/output:**
+
+* **INPUT** (mail): NULL
+* **INPUT** (userPrincipalName): "John.Doe@contoso.com"
+* **OUTPUT**: "John.Doe@contoso.com"
+
### ConvertToBase64

**Function:**
Returns "cn=Joe,dc=contoso,dc=com"
DateFromNum(value)

**Description:**
-The DateFromNum function converts a value in ADΓÇÖs date format to a DateTime type.
+The DateFromNum function converts a value in AD's date format to a DateTime type.
**Parameters:**
Returns a DateTime representing January 1, 2012 at 11:00PM.
### FormatDateTime

**Function:**
-FormatDateTime(source, inputFormat, outputFormat)
+FormatDateTime(source, dateTimeStyles, inputFormat, outputFormat)
**Description:** Takes a date string from one format and converts it into a different format.
Takes a date string from one format and converts it into a different format.
| Name | Required/ Repeating | Type | Notes |
| | | | |
| **source** |Required |String |Usually name of the attribute from the source object. |
+| **dateTimeStyles** | Optional | String | Use this to specify the formatting options that customize string parsing for some date and time parsing methods. For supported values, see the [DateTimeStyles documentation](/dotnet/api/system.globalization.datetimestyles). If left empty, the default values used are DateTimeStyles.RoundtripKind, DateTimeStyles.AllowLeadingWhite, and DateTimeStyles.AllowTrailingWhite. |
| **inputFormat** |Required |String |Expected format of the source value. For supported formats, see [/dotnet/standard/base-types/custom-date-and-time-format-strings](/dotnet/standard/base-types/custom-date-and-time-format-strings). |
| **outputFormat** |Required |String |Format of the output date. |
+
+
+### Output date as a string in a certain format
+Example: You want to send dates to a SaaS application like ServiceNow in a certain format. You can consider using the following expression.
+
+**Expression:**
+
+`FormatDateTime([extensionAttribute1], , "yyyyMMddHHmmss.fZ", "yyyy-MM-dd")`
+
+**Sample input/output:**
+
+* **INPUT** (extensionAttribute1): "20150123105347.1Z"
+* **OUTPUT**: "2015-01-23"
+
### Guid

**Function:**
Requires one string argument. Returns the string, but with any diacritical chara
| | | | |
| **source** |Required |String | Usually a first name or last name attribute. |
+
+### Remove diacritics from a string
+Example: You need to replace characters containing accent marks with equivalent characters that don't contain accent marks.
+
+**Expression:**
+`NormalizeDiacritics([givenName])`
+
+**Sample input/output:**
+
+* **INPUT** (givenName): "Zoë"
+* **OUTPUT**: "Zoe"
+
### Not

**Function:**
The NumFromDate function converts a DateTime value to Active Directory format th
**Example:**
* Workday example: Assuming you want to map the attribute *ContractEndDate* from Workday, which is in the format *2020-12-31-08:00*, to the *accountExpires* field in AD, here is how you can use this function and change the timezone offset to match your locale.
- `NumFromDate(Join("", FormatDateTime([ContractEndDate], "yyyy-MM-ddzzz", "yyyy-MM-dd"), "T23:59:59-08:00"))`
+ `NumFromDate(Join("", FormatDateTime([ContractEndDate], ,"yyyy-MM-ddzzz", "yyyy-MM-dd"), "T23:59:59-08:00"))`
* SuccessFactors example: Assuming you want to map the attribute *endDate* from SuccessFactors, which is in the format *M/d/yyyy hh:mm:ss tt*, to the *accountExpires* field in AD, here is how you can use this function and change the time zone offset to match your locale.
- `NumFromDate(Join("",FormatDateTime([endDate],"M/d/yyyy hh:mm:ss tt","yyyy-MM-dd"),"T23:59:59-08:00"))`
+ `NumFromDate(Join("",FormatDateTime([endDate], ,"M/d/yyyy hh:mm:ss tt","yyyy-MM-dd"),"T23:59:59-08:00"))`
Replaces values within a string in a case-sensitive manner. The function behaves
| **replacementAttributeName** |Optional |String |Name of the attribute to be used for replacement value |
| **template** |Optional |String |When **template** value is provided, we will look for **oldValue** inside the template and replace it with **source** value. |
+### Replace characters using a regular expression
+Example: You need to find characters that match a regular expression value and remove them.
+
+**Expression:**
+
+`Replace([mailNickname], , "[a-zA-Z_]*", , "", , )`
+
+**Sample input/output:**
+
+* **INPUT** (mailNickname): "john_doe72"
+* **OUTPUT**: "72"
+
### SelectUniqueValue

**Function:**
Requires a minimum of two arguments, which are unique value generation rules def
- This is a top-level function; it cannot be nested.
+ - This function cannot be applied to attributes that have a matching precedence.
- This function is only meant to be used for entry creations. When using it with an attribute, set the **Apply Mapping** property to **Only during object creation**. - This function is currently only supported for "Workday to Active Directory User Provisioning" and "SuccessFactors to Active Directory User Provisioning". It cannot be used with other provisioning applications.
Requires a minimum of two arguments, which are unique value generation rules def
| | | | |
| **uniqueValueRule1 … uniqueValueRuleN** |At least 2 are required, no upper bound |String | List of unique value generation rules to evaluate. |
+### Generate unique value for userPrincipalName (UPN) attribute
+Example: Based on the user's first name, middle name and last name, you need to generate a value for the UPN attribute and check for its uniqueness in the target AD directory before assigning the value to the UPN attribute.
+
+**Expression:**
+
+```ad-attr-mapping-expr
+ SelectUniqueValue(
+ Join("@", NormalizeDiacritics(StripSpaces(Join(".", [PreferredFirstName], [PreferredLastName]))), "contoso.com"),
+ Join("@", NormalizeDiacritics(StripSpaces(Join(".", Mid([PreferredFirstName], 1, 1), [PreferredLastName]))), "contoso.com"),
+ Join("@", NormalizeDiacritics(StripSpaces(Join(".", Mid([PreferredFirstName], 1, 2), [PreferredLastName]))), "contoso.com")
+ )
+```
+
+**Sample input/output:**
+
+* **INPUT** (PreferredFirstName): "John"
+* **INPUT** (PreferredLastName): "Smith"
+* **OUTPUT**: "John.Smith@contoso.com" if UPN value of John.Smith@contoso.com doesn't already exist in the directory
+* **OUTPUT**: "J.Smith@contoso.com" if UPN value of John.Smith@contoso.com already exists in the directory
+* **OUTPUT**: "Jo.Smith@contoso.com" if the above two UPN values already exist in the directory
+
### SingleAppRoleAssignment
Splits a string into a multi-valued array, using the specified delimiter charact
| **source** |Required |String |**source** value to update. |
| **delimiter** |Required |String |Specifies the character that will be used to split the string (example: ",") |
+### Split a string into a multi-valued array
+Example: You need to take a comma-delimited list of strings, and split them into an array that can be plugged into a multi-value attribute like Salesforce's PermissionSets attribute. In this example, a list of permission sets has been populated in extensionAttribute5 in Azure AD.
+
+**Expression:**
+`Split([extensionAttribute5], ",")`
+
+**Sample input/output:**
+
+* **INPUT** (extensionAttribute5): "PermissionSetOne, PermissionSetTwo"
+* **OUTPUT**: ["PermissionSetOne", "PermissionSetTwo"]
+
### StripSpaces

**Function:**
When **source** value matches a **key**, returns **value** for that **key**. If
| **key** |Required |String |**Key** to compare **source** value with. |
| **value** |Required |String |Replacement value for the **source** matching the key. |
+### Replace a value based on predefined set of options
+Example: You need to define the time zone of the user based on the state code stored in Azure AD.
+If the state code doesn't match any of the predefined options, use the default value of "Australia/Sydney".
+
+**Expression:**
+`Switch([state], "Australia/Sydney", "NSW", "Australia/Sydney","QLD", "Australia/Brisbane", "SA", "Australia/Adelaide")`
+
+**Sample input/output:**
+
+* **INPUT** (state): "QLD"
+* **OUTPUT**: "Australia/Brisbane"
+
### ToLower

**Function:**
Takes a *source* string value and converts it to lower case using the culture ru
| **source** |Required |String |Usually name of the attribute from the source object |
| **culture** |Optional |String |The format for the culture name based on RFC 4646 is *languagecode2-country/regioncode2*, where *languagecode2* is the two-letter language code and *country/regioncode2* is the two-letter subculture code. Examples include ja-JP for Japanese (Japan) and en-US for English (United States). In cases where a two-letter language code is not available, a three-letter code derived from ISO 639-2 is used.|
+### Convert generated userPrincipalName (UPN) value to lower case
+Example: You would like to generate the UPN value by concatenating the PreferredFirstName and PreferredLastName source fields and converting all characters to lower case.
+
+**Expression:**
+`ToLower(Join("@", NormalizeDiacritics(StripSpaces(Join(".", [PreferredFirstName], [PreferredLastName]))), "contoso.com"))`
+
+**Sample input/output:**
+
+* **INPUT** (PreferredFirstName): "John"
+* **INPUT** (PreferredLastName): "Smith"
+* **OUTPUT**: "john.smith@contoso.com"
+
### ToUpper

**Function:**
Returns "has".
## Examples
+This section provides more expression function usage examples.
+ ### Strip known domain name
-You need to strip a known domain name from a userΓÇÖs email to obtain a user name.
+You need to strip a known domain name from a user's email to obtain a user name.
For example, if the domain is "contoso.com", then you could use the following expression:

**Expression:**
For example, if the domain is "contoso.com", then you could use the following ex
* **INPUT** (mail): "john.doe@contoso.com"
* **OUTPUT**: "john.doe"
-### Append constant suffix to user name
-If you are using a Salesforce Sandbox, you might need to append an additional suffix to all your user names before synchronizing them.
-
-**Expression:**
-`Append([userPrincipalName], ".test")`
-
-**Sample input/output:**
-
-* **INPUT**: (userPrincipalName): "John.Doe@contoso.com"
-* **OUTPUT**: "John.Doe@contoso.com.test"
### Generate user alias by concatenating parts of first and last name

You need to generate a user alias by taking first 3 letters of user's first name and first 5 letters of user's last name.
You need to generate a user alias by taking first 3 letters of user's first name
* **INPUT** (surname): "Doe"
* **OUTPUT**: "JohDoe"
-### Remove diacritics from a string
-You need to replace characters containing accent marks with equivalent characters that don't contain accent marks.
-
-**Expression:**
-NormalizeDiacritics([givenName])
-
-**Sample input/output:**
-
-* **INPUT** (givenName): "Zoë"
-* **OUTPUT**: "Zoe"
-
-### Split a string into a multi-valued array
-You need to take a comma-delimited list of strings, and split them into an array that can be plugged into a multi-value attribute like Salesforce's PermissionSets attribute. In this example, a list of permission sets has been populated in extensionAttribute5 in Azure AD.
-
-**Expression:**
-Split([extensionAttribute5], ",")
-
-**Sample input/output:**
-
-* **INPUT** (extensionAttribute5): "PermissionSetOne, PermissionSetTwo"
-* **OUTPUT**: ["PermissionSetOne", "PermissionSetTwo"]
-
-### Output date as a string in a certain format
-You want to send dates to a SaaS application in a certain format.
-For example, you want to format dates for ServiceNow.
-
-**Expression:**
-
-`FormatDateTime([extensionAttribute1], "yyyyMMddHHmmss.fZ", "yyyy-MM-dd")`
-
-**Sample input/output:**
-
-* **INPUT** (extensionAttribute1): "20150123105347.1Z"
-* **OUTPUT**: "2015-01-23"
-
-### Replace a value based on predefined set of options
-
-You need to define the time zone of the user based on the state code stored in Azure AD.
-If the state code doesn't match any of the predefined options, use default value of "Australia/Sydney".
-
-**Expression:**
-`Switch([state], "Australia/Sydney", "NSW", "Australia/Sydney","QLD", "Australia/Brisbane", "SA", "Australia/Adelaide")`
-
-**Sample input/output:**
-
-* **INPUT** (state): "QLD"
-* **OUTPUT**: "Australia/Brisbane"
-
-### Replace characters using a regular expression
-You need to find characters that match a regular expression value and remove them.
-
-**Expression:**
-
-Replace([mailNickname], , "[a-zA-Z_]*", , "", , )
-
-**Sample input/output:**
-
-* **INPUT** (mailNickname: "john_doe72"
-* **OUTPUT**: "72"
-
-### Convert generated userPrincipalName (UPN) value to lower case
-In the example below, the UPN value is generated by concatenating the PreferredFirstName and PreferredLastName source fields and the ToLower function operates on the generated string to convert all characters to lower case.
-
-`ToLower(Join("@", NormalizeDiacritics(StripSpaces(Join(".", [PreferredFirstName], [PreferredLastName]))), "contoso.com"))`
-
-**Sample input/output:**
-
-* **INPUT** (PreferredFirstName): "John"
-* **INPUT** (PreferredLastName): "Smith"
-* **OUTPUT**: "john.smith@contoso.com"
-
-### Generate unique value for userPrincipalName (UPN) attribute
-Based on the user's first name, middle name and last name, you need to generate a value for the UPN attribute and check for its uniqueness in the target AD directory before assigning the value to the UPN attribute.
-
-**Expression:**
-
-```ad-attr-mapping-expr
- SelectUniqueValue(
- Join("@", NormalizeDiacritics(StripSpaces(Join(".", [PreferredFirstName], [PreferredLastName]))), "contoso.com"),
- Join("@", NormalizeDiacritics(StripSpaces(Join(".", Mid([PreferredFirstName], 1, 1), [PreferredLastName]))), "contoso.com"),
- Join("@", NormalizeDiacritics(StripSpaces(Join(".", Mid([PreferredFirstName], 1, 2), [PreferredLastName]))), "contoso.com")
- )
-```
-
-**Sample input/output:**
-
-* **INPUT** (PreferredFirstName): "John"
-* **INPUT** (PreferredLastName): "Smith"
-* **OUTPUT**: "John.Smith@contoso.com" if UPN value of John.Smith@contoso.com doesn't already exist in the directory
-* **OUTPUT**: "J.Smith@contoso.com" if UPN value of John.Smith@contoso.com already exists in the directory
-* **OUTPUT**: "Jo.Smith@contoso.com" if the above two UPN values already exist in the directory
-
-### Flow mail value if not NULL, otherwise flow userPrincipalName
-You wish to flow the mail attribute if it is present. If it is not, you wish to flow the value of userPrincipalName instead.
-
-**Expression:**
-`Coalesce([mail],[userPrincipalName])`
-
-**Sample input/output:**
-
-* **INPUT** (mail): NULL
-* **INPUT** (userPrincipalName): "John.Doe@contoso.com"
-* **OUTPUT**: "John.Doe@contoso.com"
## Related Articles

* [Automate User Provisioning/Deprovisioning to SaaS Apps](../app-provisioning/user-provisioning.md)
active-directory Concept Authentication Authenticator App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-authenticator-app.md
The Microsoft Authenticator app provides an additional level of security to your Azure AD work or school account or your Microsoft account and is available for [Android](https://go.microsoft.com/fwlink/?linkid=866594) and [iOS](https://go.microsoft.com/fwlink/?linkid=866594). With the Microsoft Authenticator app, users can authenticate in a passwordless way during sign-in, or as an additional verification option during self-service password reset (SSPR) or Azure AD Multi-Factor Authentication events.
-Users may receive a notification through the mobile app for them to approve or deny, or use the Authenticator app to generate an OATH verification code that can be entered in a sign-in interface. If you enable both a notification and verification code, users who register the Authenticator app can use either method to verify their identity.
+Users may receive a notification through the mobile app for them to approve or deny, or use the Authenticator app to generate an OAUTH verification code that can be entered in a sign-in interface. If you enable both a notification and verification code, users who register the Authenticator app can use either method to verify their identity.
To use the Authenticator app at a sign-in prompt rather than a username and password combination, see [Enable passwordless sign-in with the Microsoft Authenticator app](howto-authentication-passwordless-phone.md).
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-passwordless.md
The following providers offer FIDO2 security keys of different form factors that
| Provider | Contact | | | |
-| Yubico | [https://www.yubico.com/support/contact/](https://www.yubico.com/support/contact/) |
+| Yubico | [https://www.yubico.com/solutions/passwordless/](https://www.yubico.com/solutions/passwordless/) |
| Feitian | [https://ftsafe.us/pages/microsoft](https://ftsafe.us/pages/microsoft) |
| HID | [https://www.hidglobal.com/contact-us](https://www.hidglobal.com/contact-us) |
| Ensurity | [https://www.ensurity.com/contact](https://www.ensurity.com/contact) |
active-directory Howto Mfa Mfasettings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-mfasettings.md
To configure account lockout settings, complete the following settings:
## Block and unblock users
-If a user's device has been lost or stolen, you can block Azure AD Multi-Factor Authentication attempts for the associated account. Any Azure AD Multi-Factor Authentication attempts for blocked users are automatically denied. Users remain blocked for 90 days from the time that they are blocked.
+If a user's device has been lost or stolen, you can block Azure AD Multi-Factor Authentication attempts for the associated account. Any Azure AD Multi-Factor Authentication attempts for blocked users are automatically denied. Users remain blocked for 90 days from the time that they are blocked. We have published a video on [how to block and unblock users in your tenant](https://www.youtube.com/watch?v=WdeE1On4S1o) that walks you through the process.
### Block a user
active-directory Howto Sspr Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-sspr-deployment.md
For more information about pricing, see [Azure Active Directory pricing](https:/
| Videos| [Empower your users with better IT scalability](https://youtu.be/g9RpRnylxS8) |
| |[What is self-service password reset?](https://youtu.be/hc97Yx5PJiM)|
| |[Deploying self-service password reset](https://www.youtube.com/watch?v=Pa0eyqjEjvQ&index=18&list=PLLasX02E8BPBm1xNMRdvP6GtA6otQUqp0)|
+| |[How to enable and configure SSPR in Azure AD](https://www.youtube.com/watch?v=rA8TvhNcCvQ)|
| |[How to configure self-service password reset for users in Azure AD?](https://azure.microsoft.com/resources/videos/self-service-password-reset-azure-ad/) |
| |[How to [prepare users to] register [their] security information for Azure Active Directory](https://youtu.be/gXuh0XS18wA) |
| Online courses|[Managing Identities in Microsoft Azure Active Directory](https://www.pluralsight.com/courses/microsoft-azure-active-directory-managing-identities) Use SSPR to give your users a modern, protected experience. See especially the "[Managing Azure Active Directory Users and Groups](https://app.pluralsight.com/library/courses/microsoft-azure-active-directory-managing-identities/table-of-contents)" module. |
Audit logs for registration and password reset are available for 30 days. If sec
* [Consider implementing Azure AD password protection](./concept-password-ban-bad.md)
-* [Consider implementing Azure AD Smart Lockout](./howto-password-smart-lockout.md)
+* [Consider implementing Azure AD Smart Lockout](./howto-password-smart-lockout.md)
active-directory Tutorial Enable Sspr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/tutorial-enable-sspr.md
# Tutorial: Enable users to unlock their account or reset passwords using Azure Active Directory self-service password reset
-Azure Active Directory (Azure AD) self-service password reset (SSPR) gives users the ability to change or reset their password, with no administrator or help desk involvement. If a user's account is locked or they forget their password, they can follow prompts to unblock themselves and get back to work. This ability reduces help desk calls and loss of productivity when a user can't sign in to their device or an application.
+Azure Active Directory (Azure AD) self-service password reset (SSPR) gives users the ability to change or reset their password, with no administrator or help desk involvement. If a user's account is locked or they forget their password, they can follow prompts to unblock themselves and get back to work. This ability reduces help desk calls and loss of productivity when a user can't sign in to their device or an application. For a walkthrough, see the video [How to configure and enable self-service password reset in your tenant](https://www.youtube.com/watch?v=rA8TvhNcCvQ) (recommended). We also have a video for IT administrators on [resolving the six most common end-user error messages with SSPR](https://www.youtube.com/watch?v=9RPrNVLzT8I).
> [!IMPORTANT] > This tutorial shows an administrator how to enable self-service password reset. If you're an end user already registered for self-service password reset and need to get back into your account, go to https://aka.ms/sspr.
active-directory Concept Conditional Access Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-users-groups.md
# Conditional Access: Users and groups
-A Conditional Access policy must include a user assignment as one of the signals in the decision process. Users can be included or excluded from Conditional Access policies. Azure Active Directory evaluates all policies and ensures that all requirements are met before granting access to the user.
+A Conditional Access policy must include a user assignment as one of the signals in the decision process. Users can be included or excluded from Conditional Access policies. Azure Active Directory evaluates all policies and ensures that all requirements are met before granting access to the user. In addition to this article, we have a video on [how to include or exclude users from Conditional Access policies](https://www.youtube.com/watch?v=5DsW1hB3Jqs) that walks you through the process outlined below.
![User as a signal in the decisions made by Conditional Access](./media/concept-conditional-access-users-groups/conditional-access-users-and-groups.png)
active-directory Plan Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/plan-conditional-access.md
The following resources may be useful as you learn about Conditional Access:
* [What is Conditional Access?](https://youtu.be/ffMAw2IVO7A)
* [How to deploy Conditional Access?](https://youtu.be/c_izIRNJNuk)
* [How to roll out Conditional Access policies to end users?](https://youtu.be/0_Fze7Zpyvc)
+* [How to include or exclude users from Conditional Access policies](https://youtu.be/5DsW1hB3Jqs)
* [Conditional Access with device controls](https://youtu.be/NcONUf-jeS4)
* [Conditional Access with Azure AD MFA](https://youtu.be/Tbc-SU97G-w)
* [Conditional Access in Enterprise Mobility + Security](https://youtu.be/A7IrxAH87wc)
Once you have collected the information, see the following resources:
[Learn more about Identity Protection](../identity-protection/overview-identity-protection.md)
-[Manage Conditional Access policies with Microsoft Graph API](/graph/api/resources/conditionalaccesspolicy)
+[Manage Conditional Access policies with Microsoft Graph API](/graph/api/resources/conditionalaccesspolicy)
active-directory Apple Sso Plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/apple-sso-plugin.md
Title: Microsoft Enterprise SSO plug-in for Apple devices
-description: Learn about Microsoft's Azure Active Directory SSO plug-in for iOS and macOS devices.
+description: Learn about Microsoft's Azure Active Directory SSO plug-in for iOS, iPadOS, and macOS devices.
>[!IMPORTANT]
> This feature [!INCLUDE [PREVIEW BOILERPLATE](../../../includes/active-directory-develop-preview.md)]
-The *Microsoft Enterprise SSO plug-in for Apple devices* provides single sign-on (SSO) for Azure Active Directory (Azure AD) accounts across all applications that support Apple's [Enterprise Single Sign-On](https://developer.apple.com/documentation/authenticationservices) feature. Microsoft worked closely with Apple to develop this plug-in to increase your application's usability while providing the best protection that Apple and Microsoft can provide.
+The *Microsoft Enterprise SSO plug-in for Apple devices* provides single sign-on (SSO) for Azure Active Directory (Azure AD) accounts on macOS, iOS, and iPadOS across all applications that support Apple's [Enterprise Single Sign-On](https://developer.apple.com/documentation/authenticationservices) feature. This includes older applications your business might depend on but that don't yet support the latest identity libraries or protocols. Microsoft worked closely with Apple to develop this plug-in to increase your application's usability while providing the best protection that Apple and Microsoft can provide.
-In this Public Preview release, the Enterprise SSO plug-in is available only for iOS devices and is distributed in certain Microsoft applications.
+The Enterprise SSO plug-in is currently available as a built-in feature of the following apps:
+
+* [Microsoft Authenticator](../user-help/user-help-auth-app-overview.md) - iOS, iPadOS
+* Microsoft Intune [Company Portal](/mem/intune/apps/apps-company-portal-macos) - macOS
## Features

The Microsoft Enterprise SSO plug-in for Apple devices offers the following benefits:

- Provides SSO for Azure AD accounts across all applications that support Apple's Enterprise Single Sign-On feature.
-- Delivered automatically in the Microsoft Authenticator and can be enabled by any mobile device management (MDM) solution.
+- Can be enabled by any mobile device management (MDM) solution.
+- Extends SSO to applications that do not yet use Microsoft identity platform libraries.
+- Extends SSO to applications that use OAuth2, OpenID Connect, and SAML.
## Requirements

To use the Microsoft Enterprise SSO plug-in for Apple devices:
+- Device must **support** and have an app that includes the Microsoft Enterprise SSO plug-in for Apple devices **installed**:
+ - iOS 13.0+: [Microsoft Authenticator app](../user-help/user-help-auth-app-overview.md)
+ - iPadOS 13.0+: [Microsoft Authenticator app](../user-help/user-help-auth-app-overview.md)
+ - macOS 10.15+: [Intune Company Portal app](/mem/intune/user-help/enroll-your-device-in-intune-macos-cp)
+- Device must be **MDM-enrolled** (for example, with Microsoft Intune).
+- Configuration must be **pushed to the device** to enable the Enterprise SSO plug-in on the device. This security constraint is required by Apple.
+
+### iOS requirements:
- iOS 13.0 or higher must be installed on the device.
-- A Microsoft application that provides the Microsoft Enterprise SSO plug-in for Apple devices must be installed on the device. For Public Preview, these applications include the [Microsoft Authenticator app](../user-help/user-help-auth-app-overview.md).
-- Device must be MDM-enrolled (for example, with Microsoft Intune).
-- Configuration must be pushed to the device to enable the Microsoft Enterprise SSO plug-in for Apple devices on the device. This security constraint is required by Apple.
+- A Microsoft application that provides the Microsoft Enterprise SSO plug-in for Apple devices must be installed on the device. For Public Preview, this application is the [Microsoft Authenticator app](/intune/user-help/user-help-auth-app-overview.md).
++
+### macOS requirements:
+- macOS 10.15 or higher must be installed on the device.
+- A Microsoft application that provides the Microsoft Enterprise SSO plug-in for Apple devices must be installed on the device. For Public Preview, this application is the [Intune Company Portal app](/intune/user-help/enroll-your-device-in-intune-macos-cp.md).
## Enable the SSO plug-in with mobile device management (MDM)
-To enable the Microsoft Enterprise SSO plug-in for Apple devices, your devices need to be sent a signal through an MDM service. Since Microsoft includes the Enterprise SSO plug-in in the [Microsoft Authenticator app](..//user-help/user-help-auth-app-overview.md), use your MDM to configure the app to enable the Microsoft Enterprise SSO plug-in.
+### Microsoft Intune configuration
-Use the following parameters to configure the Microsoft Enterprise SSO plug-in for Apple devices:
+If you use Microsoft Intune as your MDM service, you can use built-in configuration profile settings to enable the Microsoft Enterprise SSO plug-in.
+
+First, configure the [Single sign-on app extension](/mem/intune/configuration/device-features-configure#single-sign-on-app-extension) settings of a configuration profile and [assign the profile to a user or device group](/mem/intune/configuration/device-profile-assign) (if not already assigned).
+
+The profile settings that enable the SSO plug-in are automatically applied to the group's devices the next time each device checks in with Intune.
+
+### Manual configuration for other MDM services
+
+If you're not using Microsoft Intune for mobile device management, use the following parameters to configure the Microsoft Enterprise SSO plug-in for Apple devices.
+
+#### iOS settings:
-- **Type**: Redirect
- **Extension ID**: `com.microsoft.azureauthenticator.ssoextension`
- **Team ID**: (this field is not needed for iOS)
-- **URLs**:
+
+#### macOS settings:
+
+- **Extension ID**: `com.microsoft.CompanyPortalMac.ssoextension`
+- **Team ID**: `UBF8T346G9`
+
+#### Common settings:
+
+- **Type**: Redirect
- `https://login.microsoftonline.com`
- `https://login.microsoft.com`
- `https://sts.windows.net`
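If your MDM consumes raw Apple configuration profiles, these settings map onto Apple's Extensible Single Sign-On payload. A rough, illustrative sketch of the payload fragment for iOS follows (key names per Apple's profile schema; this is not a complete profile):

```xml
<dict>
    <key>PayloadType</key>
    <string>com.apple.extensiblesso</string>
    <!-- Extension ID and Type from the settings above -->
    <key>ExtensionIdentifier</key>
    <string>com.microsoft.azureauthenticator.ssoextension</string>
    <key>Type</key>
    <string>Redirect</string>
    <!-- Azure AD sign-in URLs the extension should intercept -->
    <key>URLs</key>
    <array>
        <string>https://login.microsoftonline.com</string>
        <string>https://login.microsoft.com</string>
        <string>https://sts.windows.net</string>
    </array>
</dict>
```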
Use the following parameters to configure the Microsoft Enterprise SSO plug-in f
### Additional configuration options

Additional configuration options can be added to extend SSO functionality to additional apps.
-#### Enable SSO for apps that don't use MSAL
+#### Enable SSO for apps that don't use a Microsoft identity platform library
The SSO plug-in allows any application to participate in single sign-on even if it was not developed using a Microsoft SDK like the Microsoft Authentication Library (MSAL).
-The SSO plug-in is installed automatically by devices that have downloaded the Microsoft Authenticator app and registered their device with your organization. Your organization likely uses the Authenticator app today for scenarios like multi-factor authentication, password-less authentication, and conditional access. It can be turned on for your applications using any MDM provider, although Microsoft has made it easy to configure inside the Microsoft Endpoint Manager of Intune. An allow list is used to configure these applications to use the SSO plugin installed by the Authenticator app.
+The SSO plug-in is installed automatically on devices that have downloaded the Microsoft Authenticator app on iOS and iPadOS, or the Intune Company Portal app on macOS, and registered their device with your organization. Your organization likely uses the Authenticator app today for scenarios like multi-factor authentication, passwordless authentication, and conditional access. It can be turned on for your applications using any MDM provider, although Microsoft has made it easy to configure inside the Microsoft Endpoint Manager of Intune. An allow list is used to configure these applications to use the SSO plug-in.
-Only apps that use native Apple network technologies or webviews are supported. If an application ships its own network layer implementation, Microsoft Enterprise SSO plug-in is not supported.
+>[!IMPORTANT]
+> Only apps that use native Apple network technologies or webviews are supported. If an application ships its own network layer implementation, Microsoft Enterprise SSO plug-in is not supported.
+
+Use the following parameters to configure the Microsoft Enterprise SSO plug-in for apps that don't use a Microsoft identity platform library:
-Use the following parameters to configure the Microsoft Enterprise SSO plug-in for apps that don't use MSAL:
+If you want to provide a list of specific apps:
- **Key**: `AppAllowList`
- **Type**: `String`
- **Value**: Comma-delimited list of application bundle IDs for the applications that are allowed to participate in the SSO
- **Example**: `com.contoso.workapp, com.contoso.travelapp`
+Or if you want to provide a list of prefixes:
+- **Key**: `AppPrefixAllowList`
+- **Type**: `String`
+- **Value**: Comma-delimited list of application bundle ID prefixes for the applications that are allowed to participate in the SSO. Note that this will enable all apps starting with a particular prefix to participate in the SSO.
+- **Example**: `com.contoso., com.fabrikam.`
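In a raw Apple configuration profile, these allow-list keys would sit in the payload's `ExtensionData` dictionary. An illustrative fragment, reusing the sample bundle IDs from above:

```xml
<key>ExtensionData</key>
<dict>
    <!-- Specific apps allowed to participate in SSO -->
    <key>AppAllowList</key>
    <string>com.contoso.workapp, com.contoso.travelapp</string>
    <!-- Bundle ID prefixes; every app matching a prefix participates in SSO -->
    <key>AppPrefixAllowList</key>
    <string>com.contoso., com.fabrikam.</string>
</dict>
```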
+ [Consented apps](./application-consent-experience.md) that are allowed by the MDM admin to participate in the SSO can silently get a token for the end user. Therefore, it is important to only add trusted applications to the allow list.
-You don't need to add applications that use MSAL or ASWebAuthenticationSession to this list. Those applications are enabled by default.
+>[!NOTE]
+> You don't need to add applications that use MSAL or ASWebAuthenticationSession to this list. Those applications are enabled by default.
+
+##### How to discover app bundle identifiers on iOS devices
+
+Apple does not provide an easy way to discover Bundle IDs from the App Store. The easiest way to discover the Bundle IDs of the apps you want to use for SSO is to ask your vendor or app developer. If that option is not available, you can use your MDM configuration to discover the Bundle IDs.
-#### Allow creating SSO session from any application
+Temporarily enable following flag in your MDM configuration:
-By default, the Microsoft Enterprise SSO plug-in provides SSO for authorized apps only when the SSO plug-in already has a shared credential. The Microsoft Enterprise SSO plug-in can acquire a shared credential when it is called by another ADAL or MSAL-based application during token acquisition. Most of the Microsoft apps use Microsoft Authenticator or SSO plug-in. That means that by default SSO outside of native app flows is best effort.ΓÇ»
+- **Key**: `admin_debug_mode_enabled`
+- **Type**: `Integer`
+- **Value**: 1 or 0
-Enabling `browser_sso_interaction_enabled` flag enables non-MSAL apps and Safari browser to do the initial bootstrapping and get a shared credential. If the Microsoft Enterprise SSO plug-in doesnΓÇÖt have a shared credential yet, it will try to get one whenever a sign-in is requested from an Azure AD URL inside Safari browser, ASWebAuthenticationSession, SafariViewController, or another permitted native application.ΓÇ»
+When this flag is on, sign in to the iOS apps on the device whose Bundle IDs you want to know. Then open the Microsoft Authenticator app -> Help -> Send logs -> View logs.
+
+In the log file, look for following line:
+
+`[ADMIN MODE] SSO extension has captured following app bundle identifiers:`
+
+This should capture all application bundle identifiers visible to the SSO extension. You can then use those identifiers to configure the SSO for those apps.
+
+#### Allow users to sign in from unknown applications and the Safari browser
+
+By default, the Microsoft Enterprise SSO plug-in provides SSO for authorized apps only when a user has signed in from an app that uses a Microsoft identity platform library like ADAL or MSAL. The Microsoft Enterprise SSO plug-in can also acquire a shared credential when it is called by another app that uses a Microsoft identity platform library during a new token acquisition.
+
+Enabling the `browser_sso_interaction_enabled` flag enables apps that do not use a Microsoft identity platform library to do the initial bootstrapping and get a shared credential. It also allows the Safari browser to do the initial bootstrapping and get a shared credential. If the Microsoft Enterprise SSO plug-in doesn't have a shared credential yet, it will try to get one whenever a sign-in is requested from an Azure AD URL inside the Safari browser, ASWebAuthenticationSession, SafariViewController, or another permitted native application.
- **Key**: `browser_sso_interaction_enabled`
- **Type**: `Integer`
- **Value**: 1 or 0
-We recommend enabling this flag to get more consistent experience across all apps. It is disabled by default.
+For macOS, this setting is required to get a more consistent experience across all apps. For iOS and iPadOS, this setting isn't required because most apps use the Microsoft Authenticator application for sign-in. However, if you have some applications that do not use the Microsoft Authenticator on iOS or iPadOS, this flag will improve the experience, so we recommend you enable the setting. It is disabled by default.
+
+#### Disable asking for MFA on initial bootstrapping
+
+By default, the Microsoft Enterprise SSO plug-in always prompts the user for multi-factor authentication (MFA) when doing the initial bootstrapping and getting a shared credential, even if it's not required for the current application the user has launched. This is so the shared credential can be easily used across all additional applications without prompting the user if MFA becomes required later. This reduces the number of times the user needs to be prompted on the device and is generally a good decision.
+
+Enabling `browser_sso_disable_mfa` turns this off and will only prompt the user when MFA is required by an application or resource.
+
+- **Key**: `browser_sso_disable_mfa`
+- **Type**: `Integer`
+- **Value**: 1 or 0
+
+We recommend keeping this flag disabled, as it reduces the number of times the user needs to be prompted on the device. If your organization rarely uses MFA, you may want to enable the flag, but we'd recommend you use MFA more frequently instead. For this reason, it is disabled by default.
#### Disable OAuth2 application prompts
-The Microsoft Enterprise SSO plug-in provides SSO by appending shared credentials to network requests coming from allowed applications. Some OAuth2 applications might be enforcing end-user prompt on the protocol layer. Shared credential would be ignored for those apps.
+The Microsoft Enterprise SSO plug-in provides SSO by appending shared credentials to network requests coming from allowed applications. However, some OAuth2 applications might incorrectly enforce end-user prompts at the protocol layer. If this is happening, you'll see that shared credentials are ignored for those apps and your user is prompted to sign in even though the Microsoft Enterprise SSO plug-in is working for other applications.
Enabling `disable_explicit_app_prompt` flag restricts ability of both native and web applications to force an end-user prompt on the protocol layer and bypass SSO.
Enabling `disable_explicit_app_prompt` flag restricts ability of both native and
We recommend enabling this flag to get more consistent experience across all apps. It is disabled by default.
+#### Enable SSO through cookies for specific application
+
+A small number of apps might be incompatible with the SSO extension. Specifically, apps that have advanced network settings might experience unexpected issues when they are enabled for the SSO (for example, you might see an error that a network request was canceled or interrupted).
+
+If you are experiencing problems signing in using the method described in the `Enable SSO for apps that don't use MSAL` section, you can try an alternative configuration for those apps.
+
+Use the following parameters to configure the Microsoft Enterprise SSO plug-in for those specific apps:
+
+- **Key**: `AppCookieSSOAllowList`
+- **Type**: `String`
+- **Value**: Comma-delimited list of application bundle ID prefixes for the applications that are allowed to participate in SSO. Note that this enables all apps that start with a particular prefix to participate in SSO.
+- **Example**: `com.contoso.myapp1, com.fabrikam.myapp2`
+
+Note that applications enabled for SSO through this mechanism need to be added to both `AppCookieSSOAllowList` and `AppPrefixAllowList`.
+
+We recommend trying this option only for applications experiencing unexpected sign-in failures.
+ #### Use Intune for simplified configuration
-You can use Microsoft Intune as your MDM service to ease configuration of the Microsoft Enterprise SSO plug-in. For more information, see the [Intune configuration documentation](/intune/configuration/ios-device-features-settings).
+As stated before, you can use Microsoft Intune as your MDM service to ease configuration of the Microsoft Enterprise SSO plug-in, including enabling the plug-in and adding your older apps to an allow list so they get SSO. For more information, see the [Intune configuration documentation](/intune/configuration/ios-device-features-settings).
## Using the SSO plug-in in your application
-The [Microsoft Authentication Library (MSAL) for Apple devices](https://github.com/AzureAD/microsoft-authentication-library-for-objc) version 1.1.0 and higher supports the Microsoft Enterprise SSO plug-in for Apple devices.
+The [Microsoft Authentication Library (MSAL) for Apple devices](https://github.com/AzureAD/microsoft-authentication-library-for-objc) version 1.1.0 and higher supports the Microsoft Enterprise SSO plug-in for Apple devices. It is the recommended way to add support for the Microsoft Enterprise SSO plug-in and ensures you get the full capabilities of the Microsoft identity platform.
If you're building an application for Frontline Worker scenarios, see [Shared device mode for iOS devices](msal-ios-shared-devices.md) for additional setup of the feature.
The Microsoft Enterprise SSO plug-in relies on the [Apple's Enterprise Single Si
Native applications can also implement custom operations and talk directly to the SSO plug-in. You can learn about the Single Sign-On framework in this [2019 WWDC video from Apple](https://developer.apple.com/videos/play/tech-talks/301/).
-### Applications that use MSAL
+### Applications that use a Microsoft identity platform library
The [Microsoft Authentication Library (MSAL) for Apple devices](https://github.com/AzureAD/microsoft-authentication-library-for-objc) version 1.1.0 and higher supports the Microsoft Enterprise SSO plug-in for Apple devices natively for work and school accounts.
There's no special configuration needed if you've followed [all recommended step
If the SSO plug-in is not enabled by MDM, but the Microsoft Authenticator app is present on the device, MSAL will instead use the Microsoft Authenticator app for any interactive token requests. The SSO plug-in shares SSO with the Microsoft Authenticator app.
-### Applications that don't use MSAL
+### Applications that don't use a Microsoft identity platform library
-Applications that don't use MSAL can still get SSO if an administrator adds them to the allow list explicitly.
+Applications that don't use a Microsoft identity platform library like MSAL can still get SSO if an administrator adds them to the allow list explicitly.
There are no code changes needed in those apps as long as the following conditions are satisfied:
active-directory Developer Support Help Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/developer-support-help-options.md
If you have a development-related question, you may be able to find the answer i
### Scoped search

+For faster results, scope your search to [Microsoft Q&A](https://docs.microsoft.com/answers/products/), the documentation, and the code samples by using the following query in your favorite search engine:
+
+```
+{Your Search Terms} (site:http://www.docs.microsoft.com/answers/products/ OR site:docs.microsoft.com OR site:github.com/azure-samples OR site:cloudidentity.com OR site:developer.microsoft.com/graph)
+```
active-directory Quickstart V2 Aspnet Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-webapp.md
Title: "Quickstart: Add sign-in with Microsoft to an ASP.NET web app | Azure"
-description: In this quickstart, learn how to implement Microsoft sign-in on an ASP.NET web app using OpenID Connect.
+description: In this quickstart, learn how to implement Microsoft sign-in on an ASP.NET web app by using OpenID Connect.
Last updated 09/25/2020
-#Customer intent: As an application developer, I want to know how to write an ASP.NET web app that can sign in personal accounts, as well as work and school accounts from any Azure Active Directory instance.
+#Customer intent: As an application developer, I want to know how to write an ASP.NET web app that can sign in personal accounts, as well as work and school accounts, from any Azure Active Directory instance.
# Quickstart: Add Microsoft identity platform sign-in to an ASP.NET web app

In this quickstart, you download and run a code sample that demonstrates how an ASP.NET web app can sign in users from any Azure Active Directory (Azure AD) organization.
-See [How the sample works](#how-the-sample-works) for an illustration.
> [!div renderon="docs"]
+> The following diagram shows how the sample app works:
+>
+> ![Diagram of the interaction between the web browser, the web app, and the Microsoft identity platform in the sample app.](media/quickstart-v2-aspnet-webapp/aspnetwebapp-intro.svg)
+>
> ## Prerequisites
>
> * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
> * [Visual Studio 2019](https://visualstudio.microsoft.com/vs/)
> * [.NET Framework 4.7.2+](https://dotnet.microsoft.com/download/visual-studio-sdks)
>
-> ## Register and download your quickstart app
-> You have two options to start your quickstart application:
-> * [Express] [Option 1: Register and auto configure your app and then download your code sample](#option-1-register-and-auto-configure-your-app-and-then-download-your-code-sample)
-> * [Manual] [Option 2: Register and manually configure your application and code sample](#option-2-register-and-manually-configure-your-application-and-code-sample)
+> ## Register and download the app
+> You have two options to start building your application: automatic or manual configuration.
>
-> ### Option 1: Register and auto configure your app and then download your code sample
+> ### Automatic configuration
+> If you want to automatically configure your app and then download the code sample, follow these steps:
>
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/AspNetWebAppQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/AspNetWebAppQuickstartPage/sourceType/docs" target="_blank">Azure portal page for app registration</a>.
> 1. Enter a name for your application and select **Register**.
-> 1. Follow the instructions to download and automatically configure your new application for you in one click.
+> 1. Follow the instructions to download and automatically configure your new application in one click.
>
-> ### Option 2: Register and manually configure your application and code sample
+> ### Manual configuration
+> If you want to manually configure your application and code sample, use the following procedures.
> > #### Step 1: Register your application
-> To register your application and add the app's registration information to your solution manually, follow these steps:
> > 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
+> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: on the top menu to select the tenant in which you want to register the application.
> 1. Search for and select **Azure Active Directory**.
> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application, for example `ASPNET-Quickstart`. Users of your app might see this name, and you can change it later.
-> 1. Add `https://localhost:44368/` in **Redirect URI**, and select **Register**.
+> 1. For **Name**, enter a name for your application. For example, enter **ASPNET-Quickstart**. Users of your app will see this name, and you can change it later.
+> 1. Add **https://localhost:44368/** in **Redirect URI**, and select **Register**.
> 1. Under **Manage**, select **Authentication**.
> 1. In the **Implicit grant and hybrid flows** section, select **ID tokens**.
> 1. Select **Save**.

> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 1: Configure your application in Azure portal
-> For the code sample in this quickstart to work, add a **Redirect URI** of `https://localhost:44368/`.
-
+> #### Step 1: Configure your application in the Azure portal
+> For the code sample in this quickstart to work, enter **https://localhost:44368/** for **Redirect URI**.
+>
> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
> > [Make this change for me]()
> >
> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with this attribute
+> > ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with this attribute.
-#### Step 2: Download your project
+#### Step 2: Download the project
> [!div renderon="docs"]
> [Download the Visual Studio 2019 solution](https://github.com/AzureADQuickStarts/AppModelv2-WebApp-OpenIDConnect-DotNet/archive/master.zip)

> [!div renderon="portal" class="sxs-lookup"]
-> Run the project using Visual Studio 2019.
+> Run the project by using Visual Studio 2019.
> [!div renderon="portal" id="autoupdate" class="sxs-lookup nextstepaction"]
> [Download the code sample](https://github.com/AzureADQuickStarts/AppModelv2-WebApp-OpenIDConnect-DotNet/archive/master.zip)

> [!div class="sxs-lookup" renderon="portal"]
> #### Step 3: Your app is configured and ready to run
-> We have configured your project with values of your app's properties.
+> We've configured your project with values of your app's properties.
> [!div renderon="docs"]
> #### Step 3: Run your Visual Studio project
-1. Extract the zip file to a local folder closer to the root folder - for example, **C:\Azure-Samples**
-1. Open the solution in Visual Studio (AppModelv2-WebApp-OpenIDConnect-DotNet.sln)
-1. Depending on the version of Visual Studio, you might need to right click on the project `AppModelv2-WebApp-OpenIDConnect-DotNet` and **Restore NuGet packages**
-1. Open the Package Manager Console (View -> Other Windows -> Package Manager Console) and run `Update-Package Microsoft.CodeDom.Providers.DotNetCompilerPlatform -r`
+1. Extract the .zip file to a local folder that's close to the root folder. For example, extract to *C:\Azure-Samples*.
+
+ We recommend extracting the archive into a directory near the root of your drive to avoid errors caused by path length limitations on Windows.
+2. Open the solution in Visual Studio (*AppModelv2-WebApp-OpenIDConnect-DotNet.sln*).
+3. Depending on the version of Visual Studio, you might need to right-click the project **AppModelv2-WebApp-OpenIDConnect-DotNet** and then select **Restore NuGet packages**.
+4. Open the Package Manager Console by selecting **View** > **Other Windows** > **Package Manager Console**. Then run `Update-Package Microsoft.CodeDom.Providers.DotNetCompilerPlatform -r`.
> [!div renderon="docs"]
-> 5. Edit **Web.config** and replace the parameters `ClientId` and `Tenant` with:
+> 5. Edit *Web.config* and replace the parameters `ClientId`, `Tenant`, and `redirectUri` with:
> ```xml
> <add key="ClientId" value="Enter_the_Application_Id_here" />
> <add key="Tenant" value="Enter_the_Tenant_Info_Here" />
+> <add key="redirectUri" value="https://localhost:44368/" />
> ```
-> Where:
-> - `Enter_the_Application_Id_here` - is the Application Id for the application you registered.
-> - `Enter_the_Tenant_Info_Here` - is one of the options below:
-> - If your application supports **My organization only**, replace this value with the **Tenant Id** or **Tenant name** (for example, contoso.onmicrosoft.com)
-> - If your application supports **Accounts in any organizational directory**, replace this value with `organizations`
-> - If your application supports **All Microsoft account users**, replace this value with `common`
+> In that code:
+>
+> - `Enter_the_Application_Id_here` is the application (client) ID of the app registration that you created earlier. Find the application (client) ID on the app's **Overview** page in **App registrations** in the Azure portal.
+> - `Enter_the_Tenant_Info_Here` is one of the following options:
+> - If your application supports **My organization only**, replace this value with the directory (tenant) ID or tenant name (for example, `contoso.onmicrosoft.com`). Find the directory (tenant) ID on the app's **Overview** page in **App registrations** in the Azure portal.
+> - If your application supports **Accounts in any organizational directory**, replace this value with `organizations`.
+> - If your application supports **All Microsoft account users**, replace this value with `common`.
+> - `redirectUri` is the **Redirect URI** you entered earlier in **App registrations** in the Azure portal.
>
-> > [!TIP]
-> > - To find the values of *Application ID*, *Directory (tenant) ID*, and *Supported account types*, go to the **Overview** page
-> > - Ensure the value for `redirectUri` in the **Web.config** corresponds with the **Redirect URI** defined for the App Registration in Azure AD (if not, navigate to the **Authentication** menu for the App Registration and update the **REDIRECT URI** to match)
> [!div class="sxs-lookup" renderon="portal"]
> > [!NOTE]
See [How the sample works](#how-the-sample-works) for an illustration.
## More information
-This section gives an overview of the code required to sign-in users. This overview can be useful to understand how the code works, main arguments, and also if you want to add sign-in to an existing ASP.NET application.
+This section gives an overview of the code required to sign in users. This overview can be useful to understand how the code works, what the main arguments are, and how to add sign-in to an existing ASP.NET application.
-### How the sample works
-![Shows how the sample app generated by this quickstart works](media/quickstart-v2-aspnet-webapp/aspnetwebapp-intro.svg)
+> [!div class="sxs-lookup" renderon="portal"]
+> ### How the sample works
+>
+> ![Diagram of the interaction between the web browser, the web app, and the Microsoft identity platform in the sample app.](media/quickstart-v2-aspnet-webapp/aspnetwebapp-intro.svg)
### OWIN middleware NuGet packages
-You can set up the authentication pipeline with cookie-based authentication using OpenID Connect in ASP.NET with OWIN Middleware packages. You can install these packages by running the following commands in Visual Studio's **Package Manager Console**:
+You can set up the authentication pipeline with cookie-based authentication by using OpenID Connect in ASP.NET with OWIN middleware packages. You can install these packages by running the following commands in Package Manager Console within Visual Studio:
```powershell
Install-Package Microsoft.Owin.Security.OpenIdConnect
Install-Package Microsoft.Owin.Security.Cookies
Install-Package Microsoft.Owin.Host.SystemWeb
```
-### OWIN Startup Class
+### OWIN startup class
-The OWIN middleware uses a *startup class* that runs when the hosting process initializes. In this quickstart, the *startup.cs* file located in root folder. The following code shows the parameter used by this quickstart:
+The OWIN middleware uses a *startup class* that runs when the hosting process starts. In this quickstart, the *startup.cs* file is in the root folder. The following code shows the parameters that this quickstart uses:
```csharp
public void Configuration(IAppBuilder app)

app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
{
- // Sets the ClientId, authority, RedirectUri as obtained from web.config
+ // Sets the client ID, authority, and redirect URI as obtained from Web.config
    ClientId = clientId,
    Authority = authority,
    RedirectUri = redirectUri,
- // PostLogoutRedirectUri is the page that users will be redirected to after sign-out. In this case, it is using the home page
+ // PostLogoutRedirectUri is the page that users will be redirected to after sign-out. In this case, it's using the home page
    PostLogoutRedirectUri = redirectUri,
    Scope = OpenIdConnectScope.OpenIdProfile,
- // ResponseType is set to request the code id_token - which contains basic information about the signed-in user
+ // ResponseType is set to request the code id_token, which contains basic information about the signed-in user
    ResponseType = OpenIdConnectResponseType.CodeIdToken,

    // ValidateIssuer set to false to allow personal and work accounts from any organization to sign in to your application
- // To only allow users from a single organizations, set ValidateIssuer to true and 'tenant' setting in web.config to the tenant name
- // To allow users from only a list of specific organizations, set ValidateIssuer to true and use ValidIssuers parameter
+ // To only allow users from a single organization, set ValidateIssuer to true and the 'tenant' setting in Web.config to the tenant name
+ // To allow users from only a list of specific organizations, set ValidateIssuer to true and use the ValidIssuers parameter
    TokenValidationParameters = new TokenValidationParameters()
    {
        ValidateIssuer = false // Simplification (see note below)
    },
- // OpenIdConnectAuthenticationNotifications configures OWIN to send notification of failed authentications to OnAuthenticationFailed method
+ // OpenIdConnectAuthenticationNotifications configures OWIN to send notification of failed authentications to the OnAuthenticationFailed method
    Notifications = new OpenIdConnectAuthenticationNotifications
    {
        AuthenticationFailed = OnAuthenticationFailed
public void Configuration(IAppBuilder app)
> | Where | Description |
> | --- | --- |
-> | `ClientId` | Application ID from the application registered in the Azure portal |
-> | `Authority` | The STS endpoint for user to authenticate. Usually `https://login.microsoftonline.com/{tenant}/v2.0` for public cloud, where {tenant} is the name of your tenant, your tenant Id, or *common* for a reference to the common endpoint (used for multi-tenant applications) |
-> | `RedirectUri` | URL where users are sent after authentication against the Microsoft identity platform |
-> | `PostLogoutRedirectUri` | URL where users are sent after signing-off |
-> | `Scope` | The list of scopes being requested, separated by spaces |
-> | `ResponseType` | Request that the response from authentication contains an Authorization Code and an ID token |
-> | `TokenValidationParameters` | A list of parameters for token validation. In this case, `ValidateIssuer` is set to `false` to indicate that it can accept sign-ins from any personal, or work or school account types |
-> | `Notifications` | A list of delegates that can be executed on different *OpenIdConnect* messages |
+> | `ClientId` | The application ID from the application registered in the Azure portal. |
+> | `Authority` | The security token service (STS) endpoint for the user to authenticate. It's usually `https://login.microsoftonline.com/{tenant}/v2.0` for the public cloud. In that URL, *{tenant}* is the name of your tenant, your tenant ID, or `common` for a reference to the common endpoint. (The common endpoint is used for multitenant applications.) |
+> | `RedirectUri` | The URL where users are sent after authentication against the Microsoft identity platform. |
+> | `PostLogoutRedirectUri` | The URL where users are sent after signing off. |
+> | `Scope` | The list of scopes being requested, separated by spaces. |
+> | `ResponseType` | The request that the response from authentication contains an authorization code and an ID token. |
+> | `TokenValidationParameters` | A list of parameters for token validation. In this case, `ValidateIssuer` is set to `false` to indicate that it can accept sign-ins from any personal, work, or school account type. |
+> | `Notifications` | A list of delegates that can be run on `OpenIdConnect` messages. |
> [!NOTE]
-> Setting `ValidateIssuer = false` is a simplification for this quickstart. In real applications, validate the issuer.
-> See the samples to understand how to do that.
+> Setting `ValidateIssuer = false` is a simplification for this quickstart. In real applications, validate the issuer. See the samples to understand how to do that.
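For example, a web app that should accept sign-ins from only a known set of tenants might configure validation as in the following minimal sketch (the tenant IDs are placeholders for your own allowed tenants):

```csharp
TokenValidationParameters = new TokenValidationParameters()
{
    // Accept tokens only from the tenants you explicitly trust.
    ValidateIssuer = true,
    ValidIssuers = new[]
    {
        "https://login.microsoftonline.com/{tenant-id-1}/v2.0",
        "https://login.microsoftonline.com/{tenant-id-2}/v2.0"
    }
},
```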
-### Initiate an authentication challenge
+### Authentication challenge
You can force a user to sign in by requesting an authentication challenge in your controller:
public void SignIn()
```

> [!TIP]
-> Requesting an authentication challenge using the method above is optional and normally used when you want a view to be accessible from both authenticated and non-authenticated users. Alternatively, you can protect controllers by using the method described in the next section.
+> Requesting an authentication challenge by using this method is optional. You'd normally use it when you want a view to be accessible from both authenticated and unauthenticated users. Alternatively, you can protect controllers by using the method described in the next section.
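As a sketch, a sign-in action that issues such a challenge might look like the following, assuming the standard OWIN authentication manager (the controller name and redirect target are illustrative):

```csharp
using System.Web;
using System.Web.Mvc;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.OpenIdConnect;

public class AccountController : Controller
{
    public void SignIn()
    {
        if (!Request.IsAuthenticated)
        {
            // Trigger the OpenID Connect middleware and return to the home page after sign-in.
            HttpContext.GetOwinContext().Authentication.Challenge(
                new AuthenticationProperties { RedirectUri = "/" },
                OpenIdConnectAuthenticationDefaults.AuthenticationType);
        }
    }
}
```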
-### Protect a controller or a controller's method
+### Attribute for protecting a controller or controller actions
-You can protect a controller or controller actions using the `[Authorize]` attribute. This attribute restricts access to the controller or actions by allowing only authenticated users to access the actions in the controller, which means that authentication challenge will happen automatically when a *non-authenticated* user tries to access one of the actions or controller decorated by the `[Authorize]` attribute.
+You can protect a controller or controller actions by using the `[Authorize]` attribute. This attribute restricts access to the controller or actions by allowing only authenticated users to access the actions in the controller. An authentication challenge will then happen automatically when an unauthenticated user tries to access one of the actions or controllers decorated by the `[Authorize]` attribute.
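For example, in the following minimal sketch (the controller and action names are illustrative), anonymous users can reach `Index`, but requesting `About` triggers an authentication challenge:

```csharp
using System.Web.Mvc;

public class HomeController : Controller
{
    // Reachable by anonymous users.
    public ActionResult Index()
    {
        return View();
    }

    // Unauthenticated users who request this action get an authentication challenge.
    [Authorize]
    public ActionResult About()
    {
        return View();
    }
}
```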
[!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]

## Next steps
-Try out the ASP.NET tutorial for a complete step-by-step guide on building applications and new features, including a full explanation of this quickstart.
+For a complete step-by-step guide on building applications and new features, including a full explanation of this quickstart, try out the ASP.NET tutorial.
> [!div class="nextstepaction"]
> [Add sign-in to an ASP.NET web app](tutorial-v2-asp-webapp.md)
active-directory Quickstart V2 Dotnet Native Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-dotnet-native-aspnet.md
You can limit sign-in access to your application to user accounts that are in a
### Option 2: Use a custom method to validate issuers
-You can implement a custom method to validate issuers by using the `IssuerValidator` parameter. For more information about this parameter, see [TokenValidationParameters class](/dotnet/api/microsoft.identitymodel.tokens.tokenvalidationparameters?view=azure-dotnet&preserve-view=true).
+You can implement a custom method to validate issuers by using the `IssuerValidator` parameter. For more information about this parameter, see [TokenValidationParameters class](/dotnet/api/microsoft.identitymodel.tokens.tokenvalidationparameters).
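A minimal sketch of such a validator follows. The allow list is hypothetical; an `IssuerValidator` delegate returns the issuer to accept the token and throws to reject it:

```csharp
using System.Linq;
using Microsoft.IdentityModel.Tokens;

var allowedIssuers = new[]
{
    "https://sts.windows.net/{tenant-id-1}/",
    "https://sts.windows.net/{tenant-id-2}/"
};

var tokenValidationParameters = new TokenValidationParameters
{
    IssuerValidator = (issuer, securityToken, validationParameters) =>
    {
        if (allowedIssuers.Contains(issuer))
        {
            // Returning the issuer marks the token's issuer as valid.
            return issuer;
        }

        throw new SecurityTokenInvalidIssuerException(
            $"Issuer '{issuer}' is not in the allowed list.");
    }
};
```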
[!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
active-directory Quickstart V2 Javascript Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript-auth-code.md
#Customer intent: As an app developer, I want to learn how to get access tokens and refresh tokens by using the Microsoft identity platform so that my JavaScript app can sign in users of personal accounts, work accounts, and school accounts.
-# Quickstart: Sign in users and get an access token in a JavaScript SPA using the auth code flow
+# Quickstart: Sign in users and get an access token in a JavaScript SPA using the auth code flow with PKCE
-In this quickstart, you download and run a code sample that demonstrates how a JavaScript single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow. The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
+In this quickstart, you download and run a code sample that demonstrates how a JavaScript single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow with Proof Key for Code Exchange (PKCE). The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
See [How the sample works](#how-the-sample-works) for an illustration.
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/sample-v2-code.md
The following samples illustrate web applications that sign in users. Some sampl
| ![This image shows the ASP.NET Core logo](media/sample-v2-code/logo_NETcore.png)</p>ASP.NET Core | [ASP.NET Core WebApp signs-in users tutorial](https://aka.ms/aspnetcore-webapp-sign-in) | Same sample in the [ASP.NET Core web app calls Microsoft Graph](https://aka.ms/aspnetcore-webapp-call-msgraph) phase</p>Advanced sample [Accessing the logged-in user's token cache from background apps, APIs and services](https://github.com/Azure-Samples/ms-identity-dotnet-advanced-token-cache) |
| ![This image shows the ASP.NET Framework logo](media/sample-v2-code/logo_NETframework.png)</p>ASP.NET Core | [AD FS to Azure AD application migration playbook for developers](https://github.com/Azure-Samples/ms-identity-dotnet-adfs-to-aad) to learn how to safely and securely migrate your applications integrated with Active Directory Federation Services (AD FS) to Azure Active Directory (Azure AD) | |
| ![This image shows the ASP.NET Framework logo](media/sample-v2-code/logo_NETframework.png)</p> ASP.NET | [ASP.NET Quickstart](https://github.com/AzureAdQuickstarts/AppModelv2-WebApp-OpenIDConnect-DotNet) </p> [dotnet-webapp-openidconnect-v2](https://github.com/azure-samples/active-directory-dotnet-webapp-openidconnect-v2) | [dotnet-admin-restricted-scopes-v2](https://github.com/azure-samples/active-directory-dotnet-admin-restricted-scopes-v2) </p> |[msgraph-training-aspnetmvcapp](https://github.com/microsoftgraph/msgraph-training-aspnetmvcapp)
+| ![This image shows the Java logo](media/sample-v2-code/logo_java.png) |[Java Servlet web app chapter wise tutorial - Chapter 1](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication)| [Java Servlet web app chapter wise tutorial - Chapter 2](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication) |
| ![This image shows the Java logo](media/sample-v2-code/logo_java.png) | | [ms-identity-java-webapp](https://github.com/Azure-Samples/ms-identity-java-webapp) |
| ![This image shows the Java logo](media/sample-v2-code/logo_java.png) | [ms-identity-b2c-java-servlet-webapp-authentication](https://github.com/Azure-Samples/ms-identity-b2c-java-servlet-webapp-authentication)| |
| ![This image shows the Node.js logo](media/sample-v2-code/logo_nodejs.png)</p>Node.js (MSAL Node) | [Express web app signs-in users tutorial](https://github.com/Azure-Samples/ms-identity-node) | |
active-directory Scenario Protected Web Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-protected-web-api-app-configuration.md
services.AddControllers();
> - `$"api://{ClientId}` in all other cases (for v1.0 [access tokens](access-tokens.md)). > For details, see Microsoft.Identity.Web [source code](https://github.com/AzureAD/microsoft-identity-web/blob/d2ad0f5f830391a34175d48621a2c56011a45082/src/Microsoft.Identity.Web/Resource/RegisterValidAudience.cs#L70-L83).
-The preceding code snippet is extracted from the [ASP.NET Core web API incremental tutorial](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/blob/63087e83326e6a332d05fee6e1586b66d840b08f/1.%20Desktop%20app%20calls%20Web%20API/TodoListService/Startup.cs#L23-L28). The detail of **AddMicrosoftIdentityWebApiAuthentication** is available in [Microsoft.Identity.Web](microsoft-identity-web.md). This method calls [AddMicrosoftIdentityWebAPI](/dotnet/api/microsoft.identity.web.microsoftidentitywebapiauthenticationbuilderextensions.addmicrosoftidentitywebapi?preserve-view=true&view=azure-dotnet-preview), which itself instructs the middleware on how to validate the token.
+The preceding code snippet is extracted from the [ASP.NET Core web API incremental tutorial](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/blob/63087e83326e6a332d05fee6e1586b66d840b08f/1.%20Desktop%20app%20calls%20Web%20API/TodoListService/Startup.cs#L23-L28). The detail of **AddMicrosoftIdentityWebApiAuthentication** is available in [Microsoft.Identity.Web](microsoft-identity-web.md). This method calls [AddMicrosoftIdentityWebAPI](/dotnet/api/microsoft.identity.web.microsoftidentitywebapiauthenticationbuilderextensions.addmicrosoftidentitywebapi), which itself instructs the middleware on how to validate the token.
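For reference, a minimal `ConfigureServices` sketch that wires this up might look like the following, assuming your settings live in the conventional `AzureAd` section of *appsettings.json* (see the linked tutorial for the exact code):

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Identity.Web;

public void ConfigureServices(IServiceCollection services)
{
    // `Configuration` is the IConfiguration instance injected into Startup.
    // This call configures JWT bearer authentication for the web API from
    // the "AzureAd" configuration section.
    services.AddMicrosoftIdentityWebApiAuthentication(Configuration);

    services.AddControllers();
}
```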
## Token validation
active-directory V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-overview.md
For developers, the Microsoft identity platform offers integration of modern inn
With the Microsoft identity platform, you can write code once and reach any user. You can build an app once and have it work across many platforms, or build an app that functions as a client as well as a resource application (API).
+For a video overview of the platform and a demo of the authentication experience, see [What is the Microsoft identity platform for developers?](https://youtu.be/uDU1QTSw7Ps).
+
## Getting started

Choose the [application scenario](authentication-flows-app-scenarios.md) you'd like to build. Each of these scenario paths starts with an overview and links to a quickstart to help you get up and running:
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/redemption-experience.md
Previously updated : 03/02/2021 Last updated : 03/04/2021 -
Guest users can now sign in to your multi-tenant or Microsoft first-party apps t
![Common endpoint sign-in](media/redemption-experience/common-endpoint-flow-small.png) The user is then redirected to your tenanted endpoint, where they can either sign in with their email address or select an identity provider you've configured.+ ## Redemption through a direct link As an alternative to the invitation email or an application's common URL, you can give a guest a direct link to your app or portal. You first need to add the guest user to your directory via the [Azure portal](./b2b-quickstart-add-guest-users-portal.md) or [PowerShell](./b2b-quickstart-invite-powershell.md). Then you can use any of the [customizable ways to deploy applications to users](../manage-apps/end-user-experiences.md), including direct sign-on links. When a guest uses a direct link instead of the invitation email, theyΓÇÖll still be guided through the first-time consent experience.
When a user clicks the **Accept invitation** link in an [invitation email](invit
3. If an admin has enabled [Google federation](./google-federation.md), Azure AD checks if the user's domain suffix is gmail.com or googlemail.com and redirects the user to Google.
-4. The redemption process checks if the user has an existing personal [Microsoft account (MSA)](https://support.microsoft.com/help/4026324/microsoft-account-how-to-create).
+4. For just-in-time (JIT) redemptions (but not for invitation email link redemption), the redemption process checks whether the user has an existing personal [Microsoft account (MSA)](https://support.microsoft.com/help/4026324/microsoft-account-how-to-create). If the user already has an MSA, they'll sign in with it.
5. Once the user's **home directory** is identified, the user is sent to the corresponding identity provider to sign in.
active-directory Add Users Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/add-users-azure-active-directory.md
Previously updated : 11/12/2019 Last updated : 03/05/2021 -+
If you have an environment with both Azure Active Directory (cloud) and Windows
You can delete an existing user by using the Azure Active Directory portal.
+>[!Note]
+>You must have a Global administrator or User administrator role assignment to delete users in your organization. Global admins can delete any user, including other admins. User administrators can delete any non-admin user, Helpdesk administrators, and other User administrators. For more information, see [Administrator role permissions in Azure AD](https://docs.microsoft.com/azure/active-directory/roles/permissions-reference).
+
To delete a user, follow these steps:

1. Sign in to the [Azure portal](https://portal.azure.com/) using a User administrator account for the organization.
The user is deleted and no longer appears on the **Users - All users** page. The
When a user is deleted, any licenses consumed by the user are made available for other users.

>[!Note]
->You must use Windows Server Active Directory to update the identity, contact information, or job information for users whose source of authority is Windows Server Active Directory. After you complete your update, you must wait for the next synchronization cycle to complete before you'll see the changes.
+>To update the identity, contact information, or job information for users whose source of authority is Windows Server Active Directory, you must use Windows Server Active Directory. After you complete the update, you must wait for the next synchronization cycle to complete before you'll see the changes.
## Next steps
active-directory Entitlement Management Access Package Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-request-policy.md
To change the request and approval settings for an access package, you need to o
1. Click **Next**.
-1. If you want to require requestors to provide additional information when requesting access to an access package, use the steps in []() to configure requestor information (preview).
+1. If you want to require requestors to provide additional information when requesting access to an access package, use the steps in [Change approval and requestor information (preview) settings for an access package in Azure AD entitlement management](entitlement-management-access-package-approval-policy.md#collect-additional-requestor-information-for-approval-preview) to configure requestor information (preview).
1. Configure lifecycle settings.
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
We recommend that you harden your Azure AD Connect server to decrease the securi
### Connectivity

* The Azure AD Connect server needs DNS resolution for both intranet and internet. The DNS server must be able to resolve names both to your on-premises Active Directory and the Azure AD endpoints.
+* Azure AD Connect requires network connectivity to all configured domains.
* If you have firewalls on your intranet and you need to open ports between the Azure AD Connect servers and your domain controllers, see [Azure AD Connect ports](reference-connect-ports.md) for more information.
* If your proxy or firewall limits which URLs can be accessed, the URLs documented in [Office 365 URLs and IP address ranges](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2) must be opened. Also see [Safelist the Azure portal URLs on your firewall or proxy server](../../azure-portal/azure-portal-safelist-urls.md?tabs=public-cloud).
* If you're using the Microsoft cloud in Germany or the Microsoft Azure Government cloud, see [Azure AD Connect sync service instances considerations](reference-connect-instances.md) for URLs.
The minimum requirements for computers running AD FS or Web Application Proxy se
* Azure VM: A2 configuration or higher

## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory Bpanda Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/bpanda-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Bpanda for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Bpanda.
+
+documentationcenter: ''
+
+writer: Zhchia
++
+ms.assetid: 57e424f8-6fbc-4701-a312-899b562589ea
+++
+ na
+ms.devlang: na
+ Last updated : 03/05/2021+++
+# Tutorial: Configure Bpanda for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Bpanda and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Bpanda](http://www.mid.de) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Bpanda
+> * Remove users in Bpanda when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and Bpanda
+> * Provision groups and group memberships in Bpanda
+> * Single sign-on to Bpanda (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A cloud subscription process space in Bpanda. For on-premises, see our installation documentation.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and Bpanda](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure Bpanda to support provisioning with Azure AD
+1. Reach out to support@mid.de for more information on your authentication Tenant URL.
+
+2. Obtain a client secret for generating access tokens. This secret must have been transmitted to you in a secure way. Reach out to support@mid.de for more information.
+
+3. To establish a successful connection between Azure AD and Bpanda, retrieve an access token in either of the following ways.
+
+Use this command on **Linux**:
+
+```
+curl -u scim:{Your client secret} --location --request POST '{Your tenant specific authentication endpoint}/protocol/openid-connect/token' \
+--header 'Content-Type: application/x-www-form-urlencoded' \
+--data-urlencode 'grant_type=client_credentials'
+```
+
+Or use this command in **PowerShell**:
+
+```powershell
+$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("scim:{0}" -f {Your client secret})))
+$headers=@{}
+$headers.Add("Content-Type", "application/x-www-form-urlencoded")
+$headers.Add("Authorization", ("Basic {0}" -f $base64AuthInfo))
+$response = Invoke-WebRequest -Uri "{Your tenant specific authentication endpoint}/protocol/openid-connect/token" -Method POST -Headers $headers -ContentType 'application/x-www-form-urlencoded' -Body 'grant_type=client_credentials'
+```
+
+Enter this access token value in the **Secret Token** field in the Provisioning tab of your Bpanda application in the Azure portal.
++
+## Step 3. Add Bpanda from the Azure AD application gallery
+
+Add Bpanda from the Azure AD application gallery to start managing provisioning to Bpanda. If you have previously set up Bpanda for SSO, you can use the same application. However, we recommend that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, based on attributes of the user or group, or both. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+
+* When assigning users and groups to Bpanda, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
++
+## Step 5. Configure automatic user provisioning to Bpanda
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Bpanda based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Bpanda in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **Bpanda**.
+
+ ![The Bpanda link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your Bpanda Tenant URL in the format `{Your authentication endpoint}/scim/v2` and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Bpanda. If the connection fails, ensure your Bpanda account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Bpanda**.
+
+9. Review the user attributes that are synchronized from Azure AD to Bpanda in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Bpanda for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the Bpanda API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for Filtering|
+ ||||
+ |userName|String|&check;
+ |active|Boolean|
+ |displayName|String|
+ |emails[type eq "work"].value|String|
+ |name.givenName|String|
+ |name.familyName|String|
+ |phoneNumbers[type eq "work"].value|String|
+ |phoneNumbers[type eq "mobile"].value|String|
+ |externalId|String|
+ |title|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String|
++
+10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Bpanda**.
+
+11. Review the group attributes that are synchronized from Azure AD to Bpanda in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Bpanda for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for Filtering|
+ ||||
+ |displayName|String|&check;
+ |externalId|String|
+ |members|Reference|
+
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+13. To enable the Azure AD provisioning service for Bpanda, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+14. Define the users and/or groups that you would like to provision to Bpanda by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+15. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
active-directory Cisco Spark Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cisco-spark-tutorial.md
Previously updated : 01/31/2020 Last updated : 02/17/2021
In this tutorial, you'll learn how to integrate Cisco Webex with Azure Active Di
* Enable your users to be automatically signed-in to Cisco Webex with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.

* Cisco Webex supports **SP** initiated SSO.
-* Cisco Webex supports **Automated** user provisioning.
-* Once you configure Cisco Webex you can enforce Session Control, which protect exfiltration and infiltration of your organizationΓÇÖs sensitive data in real-time. Session Control extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
+* Cisco Webex supports [**Automated user provisioning**](https://docs.microsoft.com/azure/active-directory/saas-apps/cisco-webex-provisioning-tutorial).
## Adding Cisco Webex from the gallery

To configure the integration of Cisco Webex into Azure AD, you need to add Cisco Webex from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Cisco Webex** in the search box.
1. Select **Cisco Webex** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Cisco Webex
+## Configure and test Azure AD SSO for Cisco Webex
Configure and test Azure AD SSO with Cisco Webex using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Cisco Webex.
-To configure and test Azure AD SSO with Cisco Webex, complete the following building blocks:
+To configure and test Azure AD SSO with Cisco Webex, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Cisco Webex, complete the following buil
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Cisco Webex** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, on the **Cisco Webex** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** page, select **SAML**.
1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
| | |
| uid | user.userprincipalname |
-1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+ > [!NOTE]
+ > The source attribute value is mapped to user.userprincipalname by default. This can be changed to user.mail, user.onpremisesuserprincipalname, or any other value, as per the setting in Webex.
- ![The Certificate download link](common/metadataxml.png)
-1. On the **Set up Cisco Webex** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![The Certificate download link](common/metadataxml.png)
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Cisco Webex**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
- ![The Add User link](common/add-assign-user.png)
-
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you'll see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure Cisco Webex
-1. To automate the configuration within Cisco Webex, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+1. Sign in to Cisco Webex with your administrator credentials.
+
+1. Select **Organization Settings** and under the **Authentication** section, click **Modify**.
- ![My apps extension](common/install-myappssecure-extension.png)
+ ![Screenshot shows Authentication Settings where you can select Modify.](./media/cisco-spark-tutorial/organization-settings.png)
+
+1. Select **Integrate a 3rd-party identity provider. (Advanced)** and click on **Next**.
-2. After adding extension to the browser, click on **Set up Cisco Webex** will direct you to the Cisco Webex application. From there, provide the admin credentials to sign into Cisco Webex. The browser extension will automatically configure the application for you and automate steps 3-8.
+ ![Screenshot shows Integrate a 3rd-party identity provider.](./media/cisco-spark-tutorial/enterprise-settings.png)
- ![Setup configuration](common/setup-sso.png)
+1. Click **Download Metadata File** to download the **Service Provider Metadata file**, save it on your computer, and then click **Next**.
-3. If you want to setup Cisco Webex manually, sign in to [Cisco Cloud Collaboration Management](https://admin.ciscospark.com/) with your full administrator credentials.
+ ![Screenshot shows Service Provider Metadata file.](./media/cisco-spark-tutorial/sp-metadata.png)
-4. Select **Settings** and under the **Authentication** section, click **Modify**.
+1. Click the **file browser** option to locate and upload the Azure AD metadata file. Then, select **Require certificate signed by a certificate authority in Metadata (more secure)** and click **Next**.
- ![Screenshot shows Authentication Settings where you can select Modify.](./media/cisco-spark-tutorial/tutorial-cisco-spark-10.png)
-
-5. Select **Integrate a 3rd-party identity provider. (Advanced)** and go to the next screen.
+ ![Screenshot shows Import I d P Metadata page.](./media/cisco-spark-tutorial/idp-metadata.png)
-6. On the **Import Idp Metadata** page, either drag and drop the Azure AD metadata file onto the page or use the file browser option to locate and upload the Azure AD metadata file. Then, select **Require certificate signed by a certificate authority in Metadata (more secure)** and click **Next**.
+1. Select **Test SSO Connection**, and when a new browser tab opens, authenticate with Azure AD by signing in.
- ![Screenshot shows Import I d P Metadata page.](./media/cisco-spark-tutorial/tutorial-cisco-spark-11.png)
+1. Return to the **Cisco Cloud Collaboration Management** browser tab. If the test was successful, select the **This test was successful. Enable Single Sign-On** option and click **Next**.
-7. Select **Test SSO Connection**, and when a new browser tab opens, authenticate with Azure AD by signing in.
+1. Click **Save**.
-8. Return to the **Cisco Cloud Collaboration Management** browser tab. If the test was successful, select **This test was successful. Enable Single Sign-On option** and click **Next**.
+> [!NOTE]
+> To learn more about configuring Cisco Webex, see [this page](https://help.webex.com/WBX000022701/How-Do-I-Configure-Microsoft-Azure-Active-Directory-Integration-with-Cisco-Webex-Through-Site-Administration#:~:text=In%20the%20Azure%20portal%2C%20select,in%20the%20Add%20Assignment%20dialog).
### Create Cisco Webex test user
-In this section, you create a user called B.Simon in Cisco Webex. In this section, you create a user called B.Simon in Cisco Webex.
+In this section, a user called B.Simon is created in Cisco Webex. This application supports automatic user provisioning, which enables automatic provisioning and deprovisioning based on your business rules. Microsoft recommends using automatic provisioning whenever possible. See how to enable automatic provisioning for [Cisco Webex](https://docs.microsoft.com/azure/active-directory/saas-apps/cisco-webex-provisioning-tutorial).
-1. Go to the [Cisco Cloud Collaboration Management](https://admin.ciscospark.com/) with your full administrator credentials.
+If you need to create a user manually, perform the following steps:
+
+1. Sign in to Cisco Webex with your administrator credentials.
2. Click **Users** and then **Manage Users**.
- ![Screenshot shows the Users page where you can Manage Users.](./media/cisco-spark-tutorial/tutorial-cisco-spark-12.png)
+ ![Screenshot shows the Users page where you can Manage Users.](./media/cisco-spark-tutorial/user-1.png)
+
+3. In the **Manage Users** window, select **Manually Add or Modify Users**.
-3. In the **Manage User** window, select **Manually add or modify users** and click **Next**.
+ ![Screenshot shows the Users page where you can Manage Users and select Manually Add or Modify Users.](./media/cisco-spark-tutorial/user-2.png)
4. Select **Names and Email address**. Then, fill out the textbox as follows:
- ![Screenshot shows the Mange Users dialog box where you can manually add or modify users.](./media/cisco-spark-tutorial/tutorial-cisco-spark-13.png)
+    ![Screenshot shows the Manage Users dialog box where you can manually add or modify users.](./media/cisco-spark-tutorial/user-3.png)
a. In the **First Name** textbox, type the first name of the user, such as **B**.
In this section, you create a user called B.Simon in Cisco Webex. In this sectio
5. Click the plus sign to add B.Simon. Then, click **Next**.
-6. In the **Add Services for Users** window, click **Save** and then **Finish**.
+6. In the **Add Services for Users** window, click **Add Users** and then **Finish**.
## Test SSO
-When you select the Cisco Webex tile in the Access Panel, you should be automatically signed in to the Cisco Webex for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
-
-## Additional Resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* Click **Test this application** in the Azure portal. This will redirect you to the Cisco Webex Sign-on URL, where you can initiate the login flow.
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* Go to the Cisco Webex Sign-on URL directly and initiate the login flow from there.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* You can use Microsoft My Apps. When you click the Cisco Webex tile in My Apps, you're redirected to the Cisco Webex Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [Try Cisco Webex with Azure AD](https://aad.portal.azure.com) -- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect Cisco Webex with advanced visibility and controls](/cloud-app-security/protect-webex)
+Once you configure Cisco Webex, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Cisco Webex Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cisco-webex-tutorial.md
Previously updated : 08/21/2019 Last updated : 02/17/2021
In this tutorial, you'll learn how to integrate Cisco Webex Meetings with Azure
* Enable your users to be automatically signed-in to Cisco Webex Meetings with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Cisco Webex Meetings supports **SP and IDP** initiated SSO
+* Cisco Webex Meetings supports **SP and IDP** initiated SSO.
-* Cisco Webex Meetings supports **Just In Time** user provisioning
+* Cisco Webex Meetings supports **Just In Time** user provisioning.
## Adding Cisco Webex Meetings from the gallery To configure the integration of Cisco Webex Meetings into Azure AD, you need to add Cisco Webex Meetings from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Cisco Webex Meetings** in the search box.
1. Select **Cisco Webex Meetings** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Cisco Webex Meetings
+## Configure and test Azure AD SSO for Cisco Webex Meetings
Configure and test Azure AD SSO with Cisco Webex Meetings using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Cisco Webex Meetings.
-To configure and test Azure AD SSO with Cisco Webex Meetings, complete the following building blocks:
+To configure and test Azure AD SSO with Cisco Webex Meetings, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-2. **[Configure Cisco Webex Meetings SSO](#configure-cisco-webex-meetings-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Cisco Webex Meetings test user](#create-cisco-webex-meetings-test-user)** - to have a counterpart of B.Simon in Cisco Webex Meetings that is linked to the Azure AD representation of user.
-3. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+
+1. **[Configure Cisco Webex Meetings SSO](#configure-cisco-webex-meetings-sso)** - to configure the single sign-on settings on application side.
+ * **[Create Cisco Webex Meetings test user](#create-cisco-webex-meetings-test-user)** - to have a counterpart of B.Simon in Cisco Webex Meetings that is linked to the Azure AD representation of user.
+
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Cisco Webex Meetings** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Cisco Webex Meetings** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up Single Sign-On with SAML** page, you can configure the application in **IDP** initiated mode by uploading the **Service Provider metadata** file as follows:
- a. Click **Upload metadata file**.
-
- b. Click on **folder logo** to select the metadata file and click **Upload**.
-
- c. After successful completion of uploading Service Provider metadata file the **Identifier** and **Reply URL** values get auto populated in **Basic SAML Configuration** section.
-
- >[!Note]
- >You will get the Service Provider Metadata file from **Configure Cisco Webex Meetings SSO** section, which is explained later in the tutorial.
+ 1. Click **Upload metadata file**.
+ 1. Click on **folder logo** to select the metadata file and click **Upload**.
+    1. After the Service Provider metadata file is uploaded successfully, the **Identifier** and **Reply URL** values are automatically populated in the **Basic SAML Configuration** section.
+
+ > [!Note]
+ > You will get the Service Provider Metadata file from **Configure Cisco Webex Meetings SSO** section, which is explained later in the tutorial.
1. If you wish to configure the application in **SP** initiated mode, perform the following steps:
+ 1. On the **Basic SAML Configuration** section, click the edit/pen icon.
- a. On the **Basic SAML Configuration** section, click the edit/pen icon.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
- b. In the **Sign on URL** textbox, type the URL using the following pattern: `https://<customername>.my.webex.com`
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
-5. Cisco Webex Meetings application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click **Edit** icon to open User Attributes dialog.
+ 1. In the **Sign on URL** textbox, type the URL using the following pattern: `https://<customername>.my.webex.com`
- ![image](common/edit-attribute.png)
+1. The Cisco Webex Meetings application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click the **Edit** icon to open the User Attributes dialog.
-6. In addition to above, Cisco Webex Meetings application expects few more attributes to be passed back in SAML response. In the User Claims section on the User Attributes dialog, perform the following steps to add SAML token attribute as shown in the below table:
+ ![image](common/edit-attribute.png)
- | Name | Source Attribute|
- | | |
- | firstname | user.givenname |
- | lastname | user.surname |
- | email | user.mail |
- | uid | user.mail |
+1. In addition to the above, the Cisco Webex Meetings application expects a few more attributes to be passed back in the SAML response. In the User Claims section on the User Attributes dialog, perform the following steps to add the SAML token attributes shown in the table below:
- a. Click **Add new claim** to open the **Manage user claims** dialog.
+ | Name | Source Attribute|
+    | -- | -- |
+ | firstname | user.givenname |
+ | lastname | user.surname |
+ | email | user.mail |
+ | uid | user.mail |
- b. In the **Name** textbox, type the attribute name shown for that row.
+ 1. Click **Add new claim** to open the **Manage user claims** dialog.
+ 1. In the **Name** textbox, type the attribute name shown for that row.
+ 1. Leave the **Namespace** blank.
+ 1. Select Source as **Attribute**.
+ 1. From the **Source attribute** list, select the attribute value shown for that row from the drop-down list.
+ 1. Click **Save**.
- c. Leave the **Namespace** blank.
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
- d. Select Source as **Attribute**.
+ ![The Certificate download link](common/metadataxml.png)
- e. From the **Source attribute** list, select the attribute value shown for that row from the drop-down list.
+1. On the **Set up Cisco Webex Meetings** section, copy the appropriate URL(s) based on your requirement.
- f. Click **Save**.
-
-4. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
-
- ![The Certificate download link](common/metadataxml.png)
-
-6. On the **Set up Cisco Webex Meetings** section, copy the appropriate URL(s) based on your requirement.
-
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
### Create an Azure AD test user
In this section, you'll create a test user in the Azure portal called B.Simon.
1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
1. Select **New user** at the top of the screen.
1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
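+
+   If you prefer the command line, the same test user can be created with the Azure CLI. This is a sketch under assumed values; the domain and password are placeholders to replace:
+
+   ```azurecli-interactive
+   # Sketch: create the B.Simon test user with the Azure CLI.
+   az ad user create --display-name "B.Simon" \
+       --user-principal-name "B.Simon@contoso.com" \
+       --password "<strong-password>"
+   ```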
### Assign the Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Cisco Webex Meetings**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
-
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure Cisco Webex Meetings SSO
-1. Go to `https://<customername>.webex.com/admin` URL with your administration credentials.
-
-2. Go to **Common Site Settings** and navigate to **SSO Configuration**.
-
- ![Screenshot shows Cisco Webex Administration with Common Site Settings and S S O Configuration selected.](./media/cisco-webex-tutorial/tutorial-cisco-webex-11.png)
-
-3. On the **Webex Administration** page, perform the following steps:
+1. Sign in to Cisco Webex Meetings with your administrator credentials.
+1. Go to **Common Site Settings** and navigate to **SSO Configuration**.
- ![Screenshot shows the Webex Administration page with the information described in this step.](./media/cisco-webex-tutorial/tutorial-cisco-webex-10.png)
+ ![Screenshot shows Cisco Webex Administration with Common Site Settings and S S O Configuration selected.](./media/cisco-webex-tutorial/tutorial-cisco-webex-11.png)
- a. select **SAML 2.0** as **Federation Protocol**.
+1. On the **Webex Administration** page, perform the following steps:
- b. Click on **Import SAML Metadata** link to upload the metadata file, which you have downloaded from Azure portal.
+ ![Screenshot shows the Webex Administration page with the information described in this step.](./media/cisco-webex-tutorial/tutorial-cisco-webex-10.png)
- c. Click on **Export** button to download the Service Provider Metadata file and upload it in the **Basic SAML Configuration** section on Azure portal.
+    1. Select **SAML 2.0** as the **Federation Protocol**.
+    1. Click the **Import SAML Metadata** link to upload the metadata file that you downloaded from the Azure portal.
+    1. Select **IDP initiated** as the **SSO Profile** and click the **Export** button to download the Service Provider Metadata file. Upload it in the **Basic SAML Configuration** section in the Azure portal.
+ 1. In the **AuthContextClassRef** textbox, type one of the following values:
+ * `urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified`
+ * `urn:oasis:names:tc:SAML:2.0:ac:classes:Password`
+
+      To enable MFA by using Azure AD, enter the two values like this:
+ `urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport;urn:oasis:names:tc:SAML:2.0:ac:classes:X509`
- d. In the **AuthContextClassRef** textbox, type `urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified` and if you want to enable the MFA using Azure AD type the two values like `urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport;urn:oasis:names:tc:SAML:2.0:ac:classes:X509`
+ 1. Select **Auto Account Creation**.
+
+ > [!NOTE]
+    > To enable **just-in-time** user provisioning, you need to check **Auto Account Creation**. In addition, the SAML token attributes need to be passed in the SAML response.
- e. Select **Auto Account Creation**.
+ 1. Click **Save**.
- >[!NOTE]
- >For enabling **just-in-time** user provisioning you need to check the **Auto Account Creation**. In addition to that SAML token attributes need to be passed in the SAML response.
-
- f. Click **Save**.
-
- >[!NOTE]
- >This configuration is only for the customers that use Webex UserID in email format.
+ > [!NOTE]
+    > This configuration is only for customers that use the Webex UserID in email format.
+ >
+    > To learn more about how to configure Cisco Webex Meetings, see the [Webex documentation](https://help.webex.com/WBX000022701/How-Do-I-Configure-Microsoft-Azure-Active-Directory-Integration-with-Cisco-Webex-Through-Site-Administration#:~:text=In%20the%20Azure%20portal%2C%20select,in%20the%20Add%20Assignment%20dialog).
### Create Cisco Webex Meetings test user
The objective of this section is to create a user called B.Simon in Cisco Webex
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated
+
+* Click **Test this application** in the Azure portal. This will redirect you to the Cisco Webex Meetings Sign-on URL, where you can initiate the login flow.
+
+* Go to the Cisco Webex Meetings Sign-on URL directly and initiate the login flow from there.
-When you click the Cisco Webex Meetings tile in the Access Panel, you should be automatically signed in to the Cisco Webex Meetings for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated
-## Additional Resources
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Cisco Webex Meetings instance for which you set up SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in either mode. When you click the Cisco Webex Meetings tile in My Apps, if it's configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you're automatically signed in to the Cisco Webex Meetings instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try ServiceNow with Azure AD](https://aad.portal.azure.com)
+Once you configure Cisco Webex Meetings, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Docusign Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/docusign-provisioning-tutorial.md
For more information on how to read the Azure AD provisioning logs, see [Reporti
## Troubleshooting Tips

* Provisioning a role or permission profile for a user in Docusign can be accomplished by using an expression in your attribute mappings using the [switch](../app-provisioning/functions-for-customizing-application-data.md#switch) and [singleAppRoleAssignment](../app-provisioning/functions-for-customizing-application-data.md#singleapproleassignment) functions. For example, the expression below will provision the ID "8032066" when a user has the "DS Admin" role assigned in Azure AD. It will not provision any permission profile if the user isn't assigned a role on the Azure AD side. The ID can be retrieved from the DocuSign [portal](https://support.docusign.com/articles/Default-settings-for-out-of-the-box-DocuSign-Permission-Profiles).
-Switch(SingleAppRoleAssignment([appRoleAssignments])," ", "8032066", "DS Admin")
+Switch(SingleAppRoleAssignment([appRoleAssignments])," ", "DS Admin", "8032066")
## Additional resources

* [Managing user account provisioning for Enterprise Apps](tutorial-list.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-* [Configure Single Sign-on](docusign-tutorial.md)
+* [Configure Single Sign-on](docusign-tutorial.md)
active-directory Exceed Ai Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/exceed-ai-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Exceed.ai | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Exceed.ai.
++++++++ Last updated : 03/03/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Exceed.ai
+
+In this tutorial, you'll learn how to integrate Exceed.ai with Azure Active Directory (Azure AD). When you integrate Exceed.ai with Azure AD, you can:
+
+* Control in Azure AD who has access to Exceed.ai.
+* Enable your users to be automatically signed-in to Exceed.ai with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Exceed.ai single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Exceed.ai supports **SP** initiated SSO.
+
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Adding Exceed.ai from the gallery
+
+To configure the integration of Exceed.ai into Azure AD, you need to add Exceed.ai from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Exceed.ai** in the search box.
+1. Select **Exceed.ai** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Exceed.ai
+
+Configure and test Azure AD SSO with Exceed.ai using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Exceed.ai.
+
+To configure and test Azure AD SSO with Exceed.ai, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Exceed.ai SSO](#configure-exceedai-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Exceed.ai test user](#create-exceedai-test-user)** - to have a counterpart of B.Simon in Exceed.ai that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Exceed.ai** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://prod.exceed.ai/saml/sp/discovery`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Exceed.ai.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Exceed.ai**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Exceed.ai SSO
+
+To configure single sign-on on the **Exceed.ai** side, you need to send the **App Federation Metadata Url** to the [Exceed.ai support team](mailto:support@exceed.ai). They use it to configure the SAML SSO connection properly on both sides.
+
+### Create Exceed.ai test user
+
+In this section, you create a user called Britta Simon in Exceed.ai. Work with [Exceed.ai support team](mailto:support@exceed.ai) to add the users in the Exceed.ai platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This will redirect you to the Exceed.ai Sign-on URL, where you can initiate the login flow.
+
+* Go to the Exceed.ai Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Exceed.ai tile in My Apps, you're redirected to the Exceed.ai Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
++
+## Next steps
+
+Once you configure Exceed.ai, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory Github Ae Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/github-ae-provisioning-tutorial.md
This tutorial describes the steps you need to perform in both GitHub AE and Azur
> * Create users in GitHub AE > * Remove users in GitHub AE when they do not require access anymore > * Keep user attributes synchronized between Azure AD and GitHub AE
+> * Provision groups and group memberships in GitHub AE
> * Single sign-on to [GitHub AE](./github-ae-tutorial.md) (recommended)

## Prerequisites
Add GitHub AE from the Azure AD application gallery to start managing provisioni
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user and/or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and/or groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user and/or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users to GitHub AE, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* When assigning users and groups to GitHub AE, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
* Start small. Test with a small set of users and/or groups before rolling out to everyone. When scope for provisioning is set to assigned users and/or groups, you can control this by assigning one or two users and/or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
active-directory Sendpro Enterprise Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sendpro-enterprise-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with SendPro Enterprise | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and SendPro Enterprise.
++++++++ Last updated : 02/19/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with SendPro Enterprise
+
+In this tutorial, you'll learn how to integrate SendPro Enterprise with Azure Active Directory (Azure AD). When you integrate SendPro Enterprise with Azure AD, you can:
+
+* Control in Azure AD who has access to SendPro Enterprise.
+* Enable your users to be automatically signed-in to SendPro Enterprise with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* SendPro Enterprise single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* SendPro Enterprise supports **SP** initiated SSO.
+
+## Adding SendPro Enterprise from the gallery
+
+To configure the integration of SendPro Enterprise into Azure AD, you need to add SendPro Enterprise from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **SendPro Enterprise** in the search box.
+1. Select **SendPro Enterprise** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for SendPro Enterprise
+
+Configure and test Azure AD SSO with SendPro Enterprise using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in SendPro Enterprise.
+
+To configure and test Azure AD SSO with SendPro Enterprise, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure SendPro Enterprise SSO](#configure-sendpro-enterprise-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create SendPro Enterprise test user](#create-sendpro-enterprise-test-user)** - to have a counterpart of B.Simon in SendPro Enterprise that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **SendPro Enterprise** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<TENANT_NAME>.sendproenterprise.com`
+
+ > [!NOTE]
+ > The value is not real. Update the value with the actual Sign-On URL. Contact [SendPro Enterprise Client support team](https://www.pitneybowes.com/us/support.html) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. On the **Set up SendPro Enterprise** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SendPro Enterprise.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **SendPro Enterprise**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure SendPro Enterprise SSO
+
+To configure single sign-on on the **SendPro Enterprise** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [SendPro Enterprise support team](https://www.pitneybowes.com/us/support.html). They use them to configure the SAML SSO connection properly on both sides.
+
+### Create SendPro Enterprise test user
+
+In this section, you create a user called Britta Simon in SendPro Enterprise. Work with [SendPro Enterprise support team](https://www.pitneybowes.com/us/support.html) to add the users in the SendPro Enterprise platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This will redirect you to the SendPro Enterprise Sign-on URL, where you can initiate the login flow.
+
+* Go to the SendPro Enterprise Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the SendPro Enterprise tile in My Apps, you're redirected to the SendPro Enterprise Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
++
+## Next steps
+
+Once you configure SendPro Enterprise, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
aks Enable Host Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/enable-host-encryption.md
az extension update --name aks-preview
### Limitations

-- Can only be enabled on new node pools or new clusters.
+- Can only be enabled on new node pools.
- Can only be enabled in [Azure regions][supported-regions] that support server-side encryption of Azure managed disks and only with specific [supported VM sizes][supported-sizes].
- Requires an AKS cluster and node pool based on Virtual Machine Scale Sets (VMSS) as *VM set type*.
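
With those limitations in mind, a minimal sketch of adding a node pool with host-based encryption (the resource names are placeholders, and you need a supported region and VM size):

```azurecli-interactive
# Sketch: add a node pool with encryption at host enabled (preview at the time of writing).
az aks nodepool add \
    --cluster-name myAKSCluster \
    --resource-group myResourceGroup \
    --name hostencrypt \
    --node-count 1 \
    --node-vm-size Standard_DS2_v2 \
    --enable-encryption-at-host
```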
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
Title: Create a private Azure Kubernetes Service cluster
description: Learn how to create a private Azure Kubernetes Service (AKS) cluster Previously updated : 7/17/2020 Last updated : 3/5/2021
Where `--enable-private-cluster` is a mandatory flag for a private cluster.
The following parameters can be leveraged to configure Private DNS Zone.
-1. "System" is the default value. If the --private-dns-zone argument is omitted, AKS will create a Private DNS Zone in the Node Resource Group.
-2. "None" means AKS will not create a Private DNS Zone. This requires you to Bring Your Own DNS Server and configure the DNS resolution for the Private FQDN. If you don't configure DNS resolution, DNS is only resolvable within the agent nodes and will cause cluster issues after deployment.
-3. "Custom private dns zone name" should be in this format for azure global cloud: `privatelink.<region>.azmk8s.io`. You will need the Resource Id of that Private DNS Zone. Additionally, you will need a user assigned identity or service principal with at least the `private dns zone contributor` role to the custom private dns zone.
+- "System" is the default value. If the --private-dns-zone argument is omitted, AKS will create a Private DNS Zone in the Node Resource Group.
+- "None" means AKS will not create a Private DNS Zone. This requires you to Bring Your Own DNS Server and configure the DNS resolution for the Private FQDN. If you don't configure DNS resolution, DNS is only resolvable within the agent nodes and will cause cluster issues after deployment.
+- "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID" requires you to create a Private DNS Zone in this format for azure global cloud: `privatelink.<region>.azmk8s.io`. You will need the Resource Id of that Private DNS Zone going forward. Additionally, you will need a user assigned identity or service principal with at least the `private dns zone contributor` role.
+- "fqdn-subdomain" can be utilized with "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID" only to provide subdomain capabilities to `privatelink.<region>.azmk8s.io`
### Prerequisites
-* The AKS Preview version 0.4.71 or later
+* The AKS Preview version 0.5.3 or later
* The API version 2020-11-01 or later

### Create a private AKS cluster with Private DNS Zone (Preview)

```azurecli-interactive
-az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone [none|system|custom private dns zone ResourceId]
+az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone [system|none]
+```
+
+### Create a private AKS cluster with a Custom Private DNS Zone (Preview)
+
+```azurecli-interactive
+az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone <custom private dns zone ResourceId> --fqdn-subdomain <subdomain-name>
``` ## Options for connecting to the private cluster
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-multiple-node-pools.md
A workload may require splitting a cluster's nodes into separate pools for logic
#### Limitations * All subnets assigned to nodepools must belong to the same virtual network.
-* System pods must have access to all nodes in the cluster to provide critical functionality such as DNS resolution via coreDNS.
-* Assignment of a unique subnet per node pool is limited to Azure CNI during preview.
-* Using network policies with a unique subnet per node pool is not supported during preview.
+* System pods must have access to all nodes/pods in the cluster to provide critical functionality, such as DNS resolution and tunneling kubectl logs/exec/port-forward proxy traffic.
+* If you expand your VNet after creating the cluster, you must update your cluster (perform any managed cluster operation, but node pool operations don't count) before adding a subnet outside the original CIDR range. AKS will now error out on the agent pool add, though it was originally allowed. If you don't know how to reconcile your cluster, file a support ticket.
+* Calico Network Policy is not supported.
+* Azure Network Policy is not supported.
+* Kube-proxy expects a single contiguous CIDR and uses it for three optimizations. See this [K.E.P.](https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/20191104-iptables-no-cluster-cidr.md) and --cluster-cidr [here](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/) for details. In Azure CNI, your first node pool's subnet will be given to kube-proxy.
To create a node pool with a dedicated subnet, pass the subnet resource ID as an additional parameter when creating a node pool.
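
As a sketch with placeholder names, the additional parameter is `--vnet-subnet-id`:

```azurecli-interactive
# Sketch: add a node pool bound to a dedicated subnet by its resource ID.
az aks nodepool add \
    --cluster-name myAKSCluster \
    --resource-group myResourceGroup \
    --name newnodepool \
    --node-count 3 \
    --vnet-subnet-id <subnet-resource-id>
```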
Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your
[node-resource-group]: faq.md#why-are-two-resource-groups-created-with-aks [vmss-commands]: ../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md#public-ipv4-per-virtual-machine [az-list-ips]: /cli/azure/vmss.md#az-vmss-list-instance-public-ips
-[reduce-latency-ppg]: reduce-latency-ppg.md
+[reduce-latency-ppg]: reduce-latency-ppg.md
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
app-service App Service Web Tutorial Custom Domain Uiex https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-web-tutorial-custom-domain-uiex.md
In this tutorial, you'll learn how to:
> * Map a wildcard domain by using a CNAME record. > * Redirect the default URL to a custom directory.
+<hr/>
+ ## 1. Prepare your environment * [Create an App Service app](./index.yml), or use an app that you created for another tutorial.
In this tutorial, you'll learn how to:
<details> <summary>What do I need to edit DNS records?</summary>
- Requires access to the DNS registry for your domain provider, such as GoDaddy. For example, to add DNS entries for contoso.com and www.contoso.com, you must be able to configure the DNS settings for the contoso.com root domain.
+ Requires access to the DNS registry for your domain provider, such as GoDaddy. For example, to add DNS entries for <code>contoso.com</code> and <code>www.contoso.com</code>, you must be able to configure the DNS settings for the <code>contoso.com</code> root domain.
</details>
+<hr/>
+ ## 2. Prepare the app To map a custom DNS name to an app, the app's <abbr title="Specifies the location, size, and features of the web server farm that hosts your app.">App Service plan</abbr> must be a paid tier (not <abbr title="An Azure App Service tier in which your app runs on the same VMs as other apps, including other customersΓÇÖ apps. This tier is intended for development and testing.">**Free (F1)**</abbr>). For more information, see [Azure App Service plan overview](overview-hosting-plans.md).
-### Sign in to Azure
+#### Sign in to Azure
Open the [Azure portal](https://portal.azure.com), and sign in with your Azure account.
-### Select the app in the Azure portal
+#### Select the app in the Azure portal
1. Search for and select **App Services**.
Open the [Azure portal](https://portal.azure.com), and sign in with your Azure a
<a name="checkpricing" aria-hidden="true"></a>
-### Check the pricing tier
+#### Check the pricing tier
1. In the left pane of the app page, scroll to the **Settings** section and select **Scale up (App Service plan)**.
Open the [Azure portal](https://portal.azure.com), and sign in with your Azure a
<a name="scaleup" aria-hidden="true"></a>
-### Scale up the App Service plan
+#### Scale up the App Service plan
1. Select any of the non-free tiers (**D1**, **B1**, **B2**, **B3**, or any tier in the **Production** category). For additional options, select **See additional options**.
Open the [Azure portal](https://portal.azure.com), and sign in with your Azure a
![Screenshot that shows the scale operation confirmation.](./media/app-service-web-tutorial-custom-domain/scale-notification.png)
+<hr/>
+ <a name="cname" aria-hidden="true"></a> ## 3. Get a domain verification ID
To add a custom domain to your app, you need to verify your ownership of the dom
<details> <summary>Why do I need this?</summary>
- Adding domain verification IDs to your custom domain can prevent dangling DNS entries and help to avoid subdomain takeovers. For custom domains you previously configured without this verification ID, you should protect them from the same risk by adding the verification ID to your DNS record. For more information on this common high-severity threat, see [Subdomain takeover](../security/fundamentals/subdomain-takeover.md).
+ Adding domain verification IDs to your custom domain can prevent dangling DNS entries and help to avoid subdomain takeovers. For custom domains you previously configured without this verification ID, you should protect them from the same risk by adding the verification ID to your DNS record. For more information on this common high-severity threat, see <a href="/azure/security/fundamentals/subdomain-takeover">Subdomain takeover</a>.
</details> <a name="info"></a>
-3. **(A record only) ** To map an <abbr title="An address record in DNS maps a hostname to an IP address.">A record</abbr>, you need the app's external IP address. In the **Custom domains** page, copy the value of **IP address**.
+3. **(A record only)** To map an <abbr title="An address record in DNS maps a hostname to an IP address.">A record</abbr>, you need the app's external IP address. In the **Custom domains** page, copy the value of **IP address**.
![Screenshot that shows portal navigation to an Azure app.](./media/app-service-web-tutorial-custom-domain/mapping-information.png)
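
If you prefer the CLI, a sketch for reading the verification ID (this assumes the `customDomainVerificationId` property on the app resource; the names are placeholders):

```azurecli-interactive
# Sketch: query the domain verification ID for an App Service app.
az webapp show --name <app-name> --resource-group <group-name> \
    --query customDomainVerificationId --output tsv
```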
+<hr/>
+ ## 4. Create the DNS records 1. Sign in to the website of your domain provider.
To add a custom domain to your app, you need to verify your ownership of the dom
For a subdomain like `www` in `www.contoso.com`, create two records according to the following table:
- | Record type | Host | Value | Comments |
- | - | - | - |
- | CNAME | `<subdomain>` (for example, `www`) | `<app-name>.azurewebsites.net` | The domain mapping itself. |
- | TXT | `asuid.<subdomain>` (for example, `asuid.www`) | [The verification ID you got earlier](#3-get-a-domain-verification-id) | App Service accesses the `asuid.<subdomain>` TXT record to verify your ownership of the custom domain. |
-
- ![Screenshot that shows the portal navigation to an Azure app.](./media/app-service-web-tutorial-custom-domain/cname-record.png)
+| Record type | Host | Value | Comments |
+| - | - | - | - |
+| CNAME | `<subdomain>` (for example, `www`) | `<app-name>.azurewebsites.net` | The domain mapping itself. |
+| TXT | `asuid.<subdomain>` (for example, `asuid.www`) | [The verification ID you got earlier](#3-get-a-domain-verification-id) | App Service accesses the `asuid.<subdomain>` TXT record to verify your ownership of the custom domain. |
+
+![Screenshot that shows the portal navigation to an Azure app.](./media/app-service-web-tutorial-custom-domain/cname-record.png)
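+
+Optionally, you can confirm the records from a terminal before enabling the mapping. A quick check with hypothetical names (propagation time depends on your DNS host):
+
+```
+nslookup -type=CNAME www.contoso.com
+nslookup -type=TXT asuid.www.contoso.com
+```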
# [A](#tab/a)

For a root domain like `contoso.com`, create two records according to the following table:
- | Record type | Host | Value | Comments |
- | - | - | - |
- | A | `@` | IP address from [Copy the app's IP address](#3-get-a-domain-verification-id) | The domain mapping itself (`@` typically represents the root domain). |
- | TXT | `asuid` | [The verification ID you got earlier](#3-get-a-domain-verification-id) | App Service accesses the `asuid.<subdomain>` TXT record to verify your ownership of the custom domain. For the root domain, use `asuid`. |
-
- ![Screenshot that shows a DNS records page.](./media/app-service-web-tutorial-custom-domain/a-record.png)
+| Record type | Host | Value | Comments |
+| - | - | - | - |
+| A | `@` | IP address from [Copy the app's IP address](#3-get-a-domain-verification-id) | The domain mapping itself (`@` typically represents the root domain). |
+| TXT | `asuid` | [The verification ID you got earlier](#3-get-a-domain-verification-id) | App Service accesses the `asuid.<subdomain>` TXT record to verify your ownership of the custom domain. For the root domain, use `asuid`. |
- <details>
- <summary>What if I want to map a subdomain with an A record?</summary>
- To map a subdomain like `www.contoso.com` with an A record instead of a recommended CNAME record, your A record and TXT record should look like the following table instead:
+![Screenshot that shows a DNS records page.](./media/app-service-web-tutorial-custom-domain/a-record.png)
+
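If the root domain is hosted in Azure DNS, a similar Azure CLI sketch creates the A and TXT records; the zone name and placeholder values are assumptions for illustration:

```azurecli
# Point the root domain (@) at the app's external IP address.
az network dns record-set a add-record \
    --resource-group <dns-resource-group> \
    --zone-name contoso.com \
    --record-set-name @ \
    --ipv4-address <app-ip-address>

# Add the ownership-verification TXT record for the root domain.
az network dns record-set txt add-record \
    --resource-group <dns-resource-group> \
    --zone-name contoso.com \
    --record-set-name asuid \
    --value <verification-id>
```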
+<details>
+<summary>What if I want to map a subdomain with an A record?</summary>
+To map a subdomain like `www.contoso.com` with an A record instead of a recommended CNAME record, your A record and TXT record should look like the following table instead:
+
+<div class="table-scroll-wrapper"><table class="table"><caption class="visually-hidden">Table 3</caption>
+<thead>
+<tr>
+<th>Record type</th>
+<th>Host</th>
+<th>Value</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>A</td>
+<td><code>&lt;subdomain&gt;</code> (for example, <code>www</code>)</td>
+<td>IP address from <a href="#info" data-linktype="self-bookmark">Copy the app's IP address</a></td>
+</tr>
+<tr>
+<td>TXT</td>
+<td><code>asuid.&lt;subdomain&gt;</code> (for example, <code>asuid.www</code>)</td>
+<td><a href="#3-get-a-domain-verification-id" data-linktype="self-bookmark">The verification ID you got earlier</a></td>
+</tr>
+</tbody>
+</table></div>
+</details>
- | Record type | Host | Value |
- | - | - | - |
- | A | `<subdomain>` (for example, `www`) | IP address from [Copy the app's IP address](#info) |
- | TXT | `asuid.<subdomain>` (for example, `asuid.www`) | [The verification ID you got earlier](#3-get-a-domain-verification-id) |
- </details>
-
# [Wildcard (CNAME)](#tab/wildcard)

For a wildcard name like `*` in `*.contoso.com`, create two records according to the following table:
- | Record type | Host | Value | Comments |
- | - | - | - |
- | CNAME | `*` | `<app-name>.azurewebsites.net` | The domain mapping itself. |
- | TXT | `asuid` | [The verification ID you got earlier](#3-get-a-domain-verification-id) | App Service accesses the `asuid` TXT record to verify your ownership of the custom domain. |
-
- ![Screenshot that shows the navigation to an Azure app.](./media/app-service-web-tutorial-custom-domain/cname-record-wildcard.png)
-
-
+| Record type | Host | Value | Comments |
+| - | - | - | - |
+| CNAME | `*` | `<app-name>.azurewebsites.net` | The domain mapping itself. |
+| TXT | `asuid` | [The verification ID you got earlier](#3-get-a-domain-verification-id) | App Service accesses the `asuid` TXT record to verify your ownership of the custom domain. |
- <details>
- <summary>My changes are erased after I leave the page.</summary>
- For certain providers, such as GoDaddy, changes to DNS records don't become effective until you select a separate **Save Changes** link.
- </details>
+![Screenshot that shows the navigation to an Azure app.](./media/app-service-web-tutorial-custom-domain/cname-record-wildcard.png)
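
For a zone hosted in Azure DNS, the wildcard mapping differs from the earlier CNAME sketch only in the record set name; quote the `*` so the shell doesn't expand it (placeholder values are assumptions):

```azurecli
az network dns record-set cname set-record \
    --resource-group <dns-resource-group> \
    --zone-name contoso.com \
    --record-set-name "*" \
    --cname <app-name>.azurewebsites.net
```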
+---
+
+<details>
+<summary>My changes are erased after I leave the page.</summary>
+<p>For certain providers, such as GoDaddy, changes to DNS records don't become effective until you select a separate <strong>Save Changes</strong> link.</p>
+</details>
+
+<hr/>
+ ## 5. Enable the mapping in your app

1. In the left pane of the app page in the Azure portal, select **Custom domains**.
For a wildcard name like `*` in `*.contoso.com`, create two records according to
A warning label for your custom domain means that it's not yet bound to a TLS/SSL certificate. Any HTTPS request from a browser to your custom domain will receive an error or warning, depending on the browser. To add a TLS binding, see <a href="https://docs.microsoft.com/azure/app-service/configure-ssl-bindings">Secure a custom DNS name with a TLS/SSL binding in Azure App Service</a>. </details> -
-
+---
+
+<hr/>
+ ## 6. Test in a browser

Browse to the DNS names that you configured earlier.
Browse to the DNS names that you configured earlier.
</ul> </details>
+<hr/>
+ ## Migrate an active domain

To migrate a live site and its DNS domain name to App Service with no downtime, see [Migrate an active DNS name to Azure App Service](manage-custom-dns-migrate-domain.md).
+<hr/>
+ <a name="virtualdir" aria-hidden="true"></a>

## Redirect to a custom directory
While this is a common scenario, it doesn't actually involve custom domain mappi
1. After the operation finishes, verify by navigating to your app's root path in the browser (for example, `http://contoso.com` or `http://<app-name>.azurewebsites.net`).
+<hr/>
+ ## Automate with scripts

You can automate management of custom domains with scripts by using the [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/).
-### Azure CLI
+#### Azure CLI
The following command adds a configured custom DNS name to an App Service app.
az webapp config hostname add \
    --webapp-name <app-name> \
    --resource-group <resource-group-name> \
    --hostname <fully-qualified-domain-name>
For more information, see [Map a custom domain to a web app](scripts/cli-configure-custom-domain.md).
-### Azure PowerShell
+#### Azure PowerShell
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
Set-AzWebApp `
    -Name <app-name> `
    -ResourceGroupName <resource-group-name> `
    -HostNames @("<fully-qualified-domain-name>","<app-name>.azurewebsites.net")
For more information, see [Assign a custom domain to a web app](scripts/powershell-configure-custom-domain.md).
+<hr/>
+ ## Next steps

Continue to the next tutorial to learn how to bind a custom TLS/SSL certificate to a web app.
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
app-service Quickstart Dotnetcore Uiex https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-dotnetcore-uiex.md
When you're finished, you'll have an Azure <abbr title="A logical container for
<details> <summary>Already have Visual Studio 2019?</summary>
- If you've installed Visual Studio 2019 already:
+If you've installed Visual Studio 2019 already:
- - **Install the latest updates** in Visual Studio by selecting **Help** > **Check for Updates**. The latest updates contain the .NET 5.0 SDK.
- - **Add the workload** by selecting **Tools** > **Get Tools and Features**.
+<ul>
+<li><strong>Install the latest updates</strong> in Visual Studio by selecting <strong>Help</strong> &gt; <strong>Check for Updates</strong>. The latest updates contain the .NET 5.0 SDK.</li>
+<li><strong>Add the workload</strong> by selecting <strong>Tools</strong> &gt; <strong>Get Tools and Features</strong>.</li>
+</ul>
</details> <hr/>
http://<app_name>.azurewebsites.net
1. **Run** `az webapp up` to redeploy:
-```azurecli
-az webapp up --os-type linux
-```
-
-<details>
-<summary>What's <code>az webapp up</code> doing this time?</summary>
-The first time you ran the command, it saved the app name, resource group, and App Service plan in the <i>.azure/config</i> file from the project root. When you run it again from the project root, it uses the values saved in <i>.azure/config</i>, detects that the App Service resources already exists, and performs Zip deploy again.
-</details>
-
+ ```azurecli
+ az webapp up --os-type linux
+ ```
+
+ <details>
+ <summary>What's <code>az webapp up</code> doing this time?</summary>
+ The first time you ran the command, it saved the app name, resource group, and App Service plan in the <i>.azure/config</i> file from the project root. When you run it again from the project root, it uses the values saved in <i>.azure/config</i>, detects that the App Service resources already exists, and performs Zip deploy again.
+ </details>
+
1. Once deployment has completed, **hit refresh** in the browser window that previously opened.
-![Updated sample app running in Azure](media/quickstart-dotnetcore/dotnet-browse-azure-updated.png)
-
+ ![Updated sample app running in Azure](media/quickstart-dotnetcore/dotnet-browse-azure-updated.png)
+
[Having issues? Let us know.](https://aka.ms/DotNetAppServiceLinuxQuickStart) <hr/>
The first time you ran the command, it saved the app name, resource group, and A
1. The Overview page is where you can perform basic management tasks like browse, stop, start, restart, and delete. The left menu provides different pages for configuring your app.
-![App Service page in Azure portal](media/quickstart-dotnetcore/portal-app-overview-up.png)
-
+ ![App Service page in Azure portal](media/quickstart-dotnetcore/portal-app-overview-up.png)
+
<hr/>

## 9. Clean up resources
app-service Tutorial Java Spring Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-java-spring-cosmosdb.md
az group delete --name <your-azure-group-name>
[Azure for Java Developers](/java/azure/) [Spring Boot](https://spring.io/projects/spring-boot),
-[Spring Data for Cosmos DB](/java/azure/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db?view=azure-java-stable),
+[Spring Data for Cosmos DB](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db),
[Azure Cosmos DB](../cosmos-db/introduction.md) and [App Service Linux](overview.md).
application-gateway Tutorial Url Redirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/tutorial-url-redirect-cli.md
Previously updated : 08/27/2020 Last updated : 03/05/2021 #Customer intent: As an IT administrator, I want to use Azure CLI to set up URL path redirection of web traffic to specific pools of servers so I can ensure my customers have access to the information they need.
az network application-gateway rule create \
In this example, you create three virtual machine scale sets that support the three backend pools that you created. The scale sets that you create are named *myvmss1*, *myvmss2*, and *myvmss3*. Each scale set contains two virtual machine instances on which you install NGINX.
-```azurecli-interactive
+Replace \<azure-user> and \<password> with a user name and password of your choice.
+
+```azurecli
for i in `seq 1 3`; do
  if [ $i -eq 1 ]
  then
for i in `seq 1 3`; do
    --name myvmss$i \
    --resource-group myResourceGroupAG \
    --image UbuntuLTS \
- --admin-username azureuser \
- --admin-password Azure123456! \
+ --admin-username <azure-user> \
+ --admin-password <password> \
    --instance-count 2 \
    --vnet-name myVNet \
    --subnet myBackendSubnet \
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/11/2021 Last updated : 03/05/2021
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
azure-app-configuration Howto Integrate Azure Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md
To set up a managed identity in the portal, you first create an application and
> [!NOTE]
- > In the case you want to use a **user-assigned managed identity**, be sure to specify the clientId when creating the [ManagedIdentityCredential](https://docs.microsoft.com/dotnet/api/azure.identity.managedidentitycredential?view=azure-dotnet&preserve-view=true).
+ > In the case you want to use a **user-assigned managed identity**, be sure to specify the clientId when creating the [ManagedIdentityCredential](https://docs.microsoft.com/dotnet/api/azure.identity.managedidentitycredential).
>```
>config.AddAzureAppConfiguration(options =>
>    options.Connect(new Uri(settings["AppConfig:Endpoint"]), new ManagedIdentityCredential(<your_clientId>)));
>```
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
azure-arc Delete Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/delete-azure-resources.md
You can delete specific Azure Arc enabled data services resources using the Azur
To delete SQL managed instance resources from Azure using the Azure CLI, replace the placeholder values in the command below and run it.

```azurecli
-az resource delete --name <sql instance name> --resource-type Microsoft.AzureData/sqlManagedInstances --resource-group <resource group name>
+az resource delete --name <sql instance name> --resource-type Microsoft.AzureArcData/sqlManagedInstances --resource-group <resource group name>
#Example
-#az resource delete --name sql1 --resource-type Microsoft.AzureData/sqlManagedInstances --resource-group rg1
+#az resource delete --name sql1 --resource-type Microsoft.AzureArcData/sqlManagedInstances --resource-group rg1
```

### Delete PostgreSQL Hyperscale server group resources using the Azure CLI
az resource delete --name <sql instance name> --resource-type Microsoft.AzureDat
To delete a PostgreSQL Hyperscale server group resource from Azure using the Azure CLI, replace the placeholder values in the command below and run it.

```azurecli
-az resource delete --name <postgresql instance name> --resource-type Microsoft.AzureData/postgresInstances --resource-group <resource group name>
+az resource delete --name <postgresql instance name> --resource-type Microsoft.AzureArcData/postgresInstances --resource-group <resource group name>
#Example
-#az resource delete --name pg1 --resource-type Microsoft.AzureData/postgresInstances --resource-group rg1
+#az resource delete --name pg1 --resource-type Microsoft.AzureArcData/postgresInstances --resource-group rg1
```

### Delete Azure Arc data controller resources using the Azure CLI
az resource delete --name <postgresql instance name> --resource-type Microsoft.A
To delete an Azure Arc data controller from Azure using the Azure CLI, replace the placeholder values in the command below and run it.

```azurecli
-az resource delete --name <data controller name> --resource-type Microsoft.AzureData/dataControllers --resource-group <resource group name>
+az resource delete --name <data controller name> --resource-type Microsoft.AzureArcData/dataControllers --resource-group <resource group name>
#Example
-#az resource delete --name dc1 --resource-type Microsoft.AzureData/dataControllers --resource-group rg1
+#az resource delete --name dc1 --resource-type Microsoft.AzureArcData/dataControllers --resource-group rg1
```

### Delete a resource group using the Azure CLI
azure-arc View Billing Data In Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/view-billing-data-in-azure.md
# Upload billing data to Azure and view it in the Azure portal > [!IMPORTANT]
-> There is no cost to use Azure Arc enabled data services during the preview period. Although the billing system works end to end the billing meter is set to $0. If you follow this scenario, you will see entries in your billing for a service currently named **hybrid data services** and for resources of a type called **microsoft.AzureData/`<resource type>`**. You will be able to see a record for each data service - Azure Arc that you create, but each record will be billed for $0.
+> There is no cost to use Azure Arc enabled data services during the preview period. Although the billing system works end to end the billing meter is set to $0. If you follow this scenario, you will see entries in your billing for a service currently named **hybrid data services** and for resources of a type called **Microsoft.AzureArcData/`<resource type>`**. You will be able to see a record for each data service - Azure Arc that you create, but each record will be billed for $0.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
Follow these steps to view billing data in the Azure portal:
1. Make sure that your Scope is set to the subscription in which your data service resources were created.
1. Select **Cost by resource** in the View drop down next to the Scope selector near the top of the view.
1. Make sure the date filter is set to **This month** or some other time range that makes sense given the timing of when you created your data service resources.
-1. Click **Add filter** to add a filter by **Resource type** = `microsoft.azuredata/<data service type>` if you want to filter down to just one type of Azure Arc enabled data service.
+1. Click **Add filter** to add a filter by **Resource type** = `Microsoft.AzureArcData/<data service type>` if you want to filter down to just one type of Azure Arc enabled data service.
1. You will now see a list of all the resources that were created and uploaded to Azure. Since the billing meter is $0, you will see that the cost is always $0.

## Download billing data
You can validate the billing data files in the Azure portal.
7. Drill down into the generated folders and files and click on one of the generated .csv files. 8. Click the **Download** button which will save the file to your local Downloads folder. 9. Open the file using a .csv file viewer such as Excel.
-10. Filter the results to show only the rows with the **Resource Type** = `Microsoft.AzureData/<data service resource type`.
+10. Filter the results to show only the rows with the **Resource Type** = `Microsoft.AzureArcData/<data service resource type>`.
11. You will see the number of hours the instance was used in the current 24 hour period in the UsageQuantity column.
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021 #
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
azure-cache-for-redis Cache High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-high-availability.md
Azure Cache for Redis implements high availability by using multiple VMs, called
| - | - | - | :: | :: | :: |
| [Standard replication](#standard-replication)| Dual-node replicated configuration in a single datacenter with automatic failover | 99.9% |✔|✔|-|
| [Zone redundancy](#zone-redundancy) | Multi-node replicated configuration across AZs, with automatic failover | 99.95% (Premium tier), 99.99% (Enterprise tiers) |-|Preview|Preview|
-| [Geo-replication](#geo-replication) | Linked cache instances in two regions, with user-controlled failover | 99.999% (Enterprise tier) |-|✔|-|
+| [Geo-replication](#geo-replication) | Linked cache instances in two regions, with user-controlled failover | 99.999% (Enterprise tier) |-|✔|Preview|
## Standard replication
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/dotnet-isolated-process-guide.md
This section describes the current state of the functional and behavioral differ
| Logging | [`ILogger`](/dotnet/api/microsoft.extensions.logging.ilogger?view=dotnet-plat-ext-5.0&preserve-view=true) passed to the function | [`ILogger`](/dotnet/api/microsoft.extensions.logging.ilogger?view=dotnet-plat-ext-5.0&preserve-view=true) obtained from `FunctionContext` |
| Cancellation tokens | [Supported](functions-dotnet-class-library.md#cancellation-tokens) | Not supported |
| Output bindings | Out parameters | Return values |
-| Output binding types | `IAsyncCollector`, [DocumentClient](/dotnet/api/microsoft.azure.documents.client.documentclient?view=azure-dotnet&preserve-view=true), [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage?view=azure-dotnet&preserve-view=true), and other client-specific types | Simple types, JSON serializable types, and arrays. |
+| Output binding types | `IAsyncCollector`, [DocumentClient](/dotnet/api/microsoft.azure.documents.client.documentclient), [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage), and other client-specific types | Simple types, JSON serializable types, and arrays. |
| Multiple output bindings | Supported | [Supported](#multiple-output-bindings) |
| HTTP trigger | [`HttpRequest`](/dotnet/api/microsoft.aspnetcore.http.httprequest?view=aspnetcore-5.0&preserve-view=true)/[`ObjectResult`](/dotnet/api/microsoft.aspnetcore.mvc.objectresult?view=aspnetcore-5.0&preserve-view=true) | `HttpRequestData`/`HttpResponseData` |
| Durable Functions | [Supported](durable/durable-functions-overview.md) | Not supported |
azure-functions Durable Functions Bindings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-bindings.md
The [Durable Functions](durable-functions-overview.md) extension introduces two
The orchestration trigger enables you to author [durable orchestrator functions](durable-functions-types-features-overview.md#orchestrator-functions). This trigger supports starting new orchestrator function instances and resuming existing orchestrator function instances that are "awaiting" a task.
-When you use the Visual Studio tools for Azure Functions, the orchestration trigger is configured using the [OrchestrationTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.orchestrationtriggerattribute?view=azure-dotnet) .NET attribute.
+When you use the Visual Studio tools for Azure Functions, the orchestration trigger is configured using the [OrchestrationTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.orchestrationtriggerattribute) .NET attribute.
When you write orchestrator functions in scripting languages (for example, JavaScript or C# scripting), the orchestration trigger is defined by the following JSON object in the `bindings` array of the *function.json* file:
azure-functions Durable Functions Instance Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-instance-management.md
func durable get-history --id 0ab8c55a66644d68a3a8b220b12d209c
Rather than query one instance in your orchestration at a time, you might find it more efficient to query all of them at once.
-You can use the [ListInstancesAsync](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableorchestrationclient.listinstancesasync?view=azure-dotnet#Microsoft_Azure_WebJobs_Extensions_DurableTask_IDurableOrchestrationClient_ListInstancesAsync_Microsoft_Azure_WebJobs_Extensions_DurableTask_OrchestrationStatusQueryCondition_System_Threading_CancellationToken_) (.NET), [getStatusAll](/javascript/api/durable-functions/durableorchestrationclient?view=azure-node-latest#getstatusall--) (JavaScript), or `get_status_all` (Python) method to query the statuses of all orchestration instances. In .NET, you can pass a `CancellationToken` object in case you want to cancel it. The method returns a list of objects that represent the orchestration instances matching the query parameters.
+You can use the [ListInstancesAsync](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableorchestrationclient.listinstancesasync#Microsoft_Azure_WebJobs_Extensions_DurableTask_IDurableOrchestrationClient_ListInstancesAsync_Microsoft_Azure_WebJobs_Extensions_DurableTask_OrchestrationStatusQueryCondition_System_Threading_CancellationToken_) (.NET), [getStatusAll](/javascript/api/durable-functions/durableorchestrationclient#getstatusall--) (JavaScript), or `get_status_all` (Python) method to query the statuses of all orchestration instances. In .NET, you can pass a `CancellationToken` object in case you want to cancel it. The method returns a list of objects that represent the orchestration instances matching the query parameters.
# [C#](#tab/csharp)
func durable get-instances
What if you don't really need all the information that a standard instance query can provide? For example, what if you're just looking for the orchestration creation time, or the orchestration runtime status? You can narrow your query by applying filters.
-Use the [ListInstancesAsync](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableorchestrationclient.listinstancesasync?view=azure-dotnet#Microsoft_Azure_WebJobs_Extensions_DurableTask_IDurableOrchestrationClient_ListInstancesAsync_Microsoft_Azure_WebJobs_Extensions_DurableTask_OrchestrationStatusQueryCondition_System_Threading_CancellationToken_) (.NET) or [getStatusBy](/javascript/api/durable-functions/durableorchestrationclient?view=azure-node-latest#getstatusby-dateundefined--dateundefined--orchestrationruntimestatus) (JavaScript) method to get a list of orchestration instances that match a set of predefined filters.
+Use the [ListInstancesAsync](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableorchestrationclient.listinstancesasync#Microsoft_Azure_WebJobs_Extensions_DurableTask_IDurableOrchestrationClient_ListInstancesAsync_Microsoft_Azure_WebJobs_Extensions_DurableTask_OrchestrationStatusQueryCondition_System_Threading_CancellationToken_) (.NET) or [getStatusBy](/javascript/api/durable-functions/durableorchestrationclient#getstatusby-dateundefined--dateundefined--orchestrationruntimestatus) (JavaScript) method to get a list of orchestration instances that match a set of predefined filters.
# [C#](#tab/csharp)
azure-functions Durable Functions Monitor Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-monitor-python.md
The monitor pattern refers to a flexible recurring process in a workflow - for example, polling until certain conditions are met. This article explains a sample that uses Durable Functions to implement monitoring.
+## Prerequisites
+
+* [Complete the quickstart article](quickstart-python-vscode.md)
+* [Clone or download the samples project from GitHub](https://github.com/Azure/azure-functions-durable-python/tree/main/samples/)
+ ## Scenario overview
This article explains the following functions in the sample app:
### E3_Monitor orchestrator function
-# [Python](#tab/python)
The **E3_Monitor** function uses the standard *function.json* for orchestrator functions.
Here is the code that implements the function:
[!code-python[Main](~/samples-durable-functions-python/samples/monitor/E3_Monitor/\_\_init\_\_.py)]
-
This orchestrator function performs the following actions:
Multiple orchestrator instances can run simultaneously by calling the orchestrat
As with other samples, the helper activity functions are regular functions that use the `activityTrigger` trigger binding. The **E3_TooManyOpenIssues** function gets a list of currently open issues on the repo and determines if there are "too many" of them: more than 3 as per our sample.
-# [Python](#tab/python)
The *function.json* is defined as follows:
And here is the implementation.
[!code-python[Main](~/samples-durable-functions-python/samples/monitor/E3_TooManyOpenIssues/\_\_init\_\_.py)]
-
### E3_SendAlert activity function

The **E3_SendAlert** function uses the Twilio binding to send an SMS message notifying the end user that there are at least 3 open issues awaiting a resolution.
-# [Python](#tab/python)
Its *function.json* is simple:
And here is the code that sends the SMS message:
[!code-python[Main](~/samples-durable-functions-python/samples/monitor/E3_SendAlert/\_\_init\_\_.py)]
-
## Run the sample
azure-functions Durable Functions Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-monitor.md
The monitor pattern refers to a flexible *recurring* process in a workflow - for example, polling until certain conditions are met. This article explains a sample that uses [Durable Functions](durable-functions-overview.md) to implement monitoring.
+## Prerequisites
+
+# [C#](#tab/csharp)
+
+* [Complete the quickstart article](durable-functions-create-first-csharp.md)
+* [Clone or download the samples project from GitHub](https://github.com/Azure/azure-functions-durable-extension/tree/main/samples/precompiled)
+
+# [JavaScript](#tab/javascript)
+
+* [Complete the quickstart article](quickstart-js-vscode.md)
+* [Clone or download the samples project from GitHub](https://github.com/Azure/azure-functions-durable-extension/tree/main/samples/javascript)
+
+ ## Scenario overview
Here is the code that implements the function:
[!code-javascript[Main](~/samples-durable-functions/samples/javascript/E3_Monitor/index.js)]
-# [Python](#tab/python)
-We have a different tutorial for the monitoring pattern on Python, please see it [here](durable-functions-monitor-python.md).
- This orchestrator function performs the following actions:
And here is the implementation.
[!code-javascript[Main](~/samples-durable-functions/samples/javascript/E3_GetIsClear/index.js)]
-# [Python](#tab/python)
-We have a different tutorial for the monitoring pattern on Python, please see it [here](durable-functions-monitor-python.md).
- ### E3_SendGoodWeatherAlert activity function
And here is the code that sends the SMS message:
[!code-javascript[Main](~/samples-durable-functions/samples/javascript/E3_SendGoodWeatherAlert/index.js)]
-# [Python](#tab/python)
-We have a different tutorial for the monitoring pattern on Python, please see it [here](durable-functions-monitor-python.md).
- ## Run the sample
POST https://{host}/runtime/webhooks/durabletask/instances/f6893f25acf64df2ab53a
This sample has demonstrated how to use Durable Functions to monitor an external source's status using [durable timers](durable-functions-timers.md) and conditional logic. The next sample shows how to use external events and [durable timers](durable-functions-timers.md) to handle human interaction. > [!div class="nextstepaction"]
-> [Run the human interaction sample](durable-functions-phone-verification.md)
+> [Run the human interaction sample](durable-functions-phone-verification.md)
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-app-settings.md
Dictates whether editing in the Azure portal is enabled. Valid values are "readw
## FUNCTIONS\_EXTENSION\_VERSION
-The version of the Functions runtime to use in this function app. A tilde with major version means use the latest version of that major version (for example, "~2"). When new versions for the same major version are available, they are automatically installed in the function app. To pin the app to a specific version, use the full version number (for example, "2.0.12345"). Default is "~2". A value of `~1` pins your app to version 1.x of the runtime.
+The version of the Functions runtime that hosts your function app. A tilde (`~`) with major version means use the latest version of that major version (for example, "~3"). When new versions for the same major version are available, they are automatically installed in the function app. To pin the app to a specific version, use the full version number (for example, "3.0.12345"). Default is "~3". A value of `~1` pins your app to version 1.x of the runtime. For more information, see [Azure Functions runtime versions overview](functions-versions.md).
|Key|Sample value|
|||
-|FUNCTIONS\_EXTENSION\_VERSION|~2|
+|FUNCTIONS\_EXTENSION\_VERSION|~3|
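
For example, the following Azure CLI sketch pins an app to the ~3 runtime; the app and resource group names are placeholders:

```azurecli
az functionapp config appsettings set \
    --name <function-app-name> \
    --resource-group <resource-group-name> \
    --settings FUNCTIONS_EXTENSION_VERSION=~3
```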
## FUNCTIONS\_V2\_COMPATIBILITY\_MODE
azure-functions Functions Bindings Cosmosdb V2 Input https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md
namespace CosmosDBSamplesV2
The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves a list of documents. The function is triggered by an HTTP request. The code uses a `DocumentClient` instance provided by the Azure Cosmos DB binding to read a list of documents. The `DocumentClient` instance could also be used for write operations. > [!NOTE]
-> You can also use the [IDocumentClient](/dotnet/api/microsoft.azure.documents.idocumentclient?view=azure-dotnet&preserve-view=true) interface to make testing easier.
+> You can also use the [IDocumentClient](/dotnet/api/microsoft.azure.documents.idocumentclient) interface to make testing easier.
```cs
using Microsoft.AspNetCore.Http;
azure-functions Functions Bindings Http Webhook Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-http-webhook-trigger.md
Using this configuration, the function is now addressable with the following rou
http://<APP_NAME>.azurewebsites.net
```
-This configuration allows the function code to support two parameters in the address, _category_ and _id_.
+This configuration allows the function code to support two parameters in the address, _category_ and _id_. For more information on how route parameters are tokenized in a URL, see [Routing in ASP.NET Core](https://docs.microsoft.com/aspnet/core/fundamentals/routing#route-constraint-reference).
# [C#](#tab/csharp)
If a function that uses the HTTP trigger doesn't complete within 230 seconds, th
## Next steps

-- [Return an HTTP response from a function](./functions-bindings-http-webhook-output.md)
+- [Return an HTTP response from a function](./functions-bindings-http-webhook-output.md)
azure-functions Functions Bindings Service Bus Output https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-service-bus-output.md
If you have `isSessionsEnabled` set to `true`, the `sessionHandlerOptions` will
||||
|prefetchCount|0|Gets or sets the number of messages that the message receiver can simultaneously request.|
|maxAutoRenewDuration|00:05:00|The maximum duration within which the message lock will be renewed automatically.|
-|autoComplete|true|Whether the trigger should automatically call complete after processing, or if the function code will manually call complete.<br><br>Setting to `false` is only supported in C#.<br><br>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br><br>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver?view=azure-dotnet&preserve-view=true) methods to complete, abandon, or deadletter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br><br>In non-C# functions, exceptions in the function results in the runtime calls `abandonAsync` in the background. If no exception occurs, then `completeAsync` is called in the background. |
+|autoComplete|true|Whether the trigger should automatically call complete after processing, or if the function code will manually call complete.<br><br>Setting to `false` is only supported in C#.<br><br>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br><br>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br><br>In non-C# functions, an exception in the function causes the runtime to call `abandonAsync` in the background. If no exception occurs, then `completeAsync` is called in the background. |
|maxConcurrentCalls|16|The maximum number of concurrent calls to the callback that the message pump should initiate per scaled instance. By default, the Functions runtime processes multiple messages concurrently.|
|maxConcurrentSessions|2000|The maximum number of sessions that can be handled concurrently per scaled instance.|
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-service-bus-trigger.md
The following parameter types are available for the queue or topic message:
* `string` - If the message is text.
* `byte[]` - Useful for binary data.
* A custom type - If the message contains JSON, Azure Functions tries to deserialize the JSON data.
-* `BrokeredMessage` - Gives you the deserialized message with the [BrokeredMessage.GetBody\<T>()](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.getbody?view=azure-dotnet#Microsoft_ServiceBus_Messaging_BrokeredMessage_GetBody__1&preserve-view=true)
+* `BrokeredMessage` - Gives you the deserialized message with the [BrokeredMessage.GetBody\<T>()](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.getbody#Microsoft_ServiceBus_Messaging_BrokeredMessage_GetBody__1)
method.
-* [`MessageReceiver`](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver?view=azure-dotnet&preserve-view=true) - Used to receive and acknowledge messages from the message container (required when [`autoComplete`](functions-bindings-service-bus-output.md#hostjson-settings) is set to `false`)
+* [`MessageReceiver`](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) - Used to receive and acknowledge messages from the message container (required when [`autoComplete`](functions-bindings-service-bus-output.md#hostjson-settings) is set to `false`)
These parameter types are for Azure Functions version 1.x; for 2.x and higher, use [`Message`](/dotnet/api/microsoft.azure.servicebus.message) instead of `BrokeredMessage`.
The following parameter types are available for the queue or topic message:
* `string` - If the message is text.
* `byte[]` - Useful for binary data.
* A custom type - If the message contains JSON, Azure Functions tries to deserialize the JSON data.
-* `BrokeredMessage` - Gives you the deserialized message with the [BrokeredMessage.GetBody\<T>()](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.getbody?view=azure-dotnet#Microsoft_ServiceBus_Messaging_BrokeredMessage_GetBody__1&preserve-view=true)
+* `BrokeredMessage` - Gives you the deserialized message with the [BrokeredMessage.GetBody\<T>()](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.getbody#Microsoft_ServiceBus_Messaging_BrokeredMessage_GetBody__1)
method.

These parameters are for Azure Functions version 1.x; for 2.x and higher, use [`Message`](/dotnet/api/microsoft.azure.servicebus.message) instead of `BrokeredMessage`.
Poison message handling can't be controlled or configured in Azure Functions. Se
The Functions runtime receives a message in [PeekLock mode](../service-bus-messaging/service-bus-performance-improvements.md#receive-mode). It calls `Complete` on the message if the function finishes successfully, or calls `Abandon` if the function fails. If the function runs longer than the `PeekLock` timeout, the lock is automatically renewed as long as the function is running.
-The `maxAutoRenewDuration` is configurable in *host.json*, which maps to [OnMessageOptions.MaxAutoRenewDuration](/dotnet/api/microsoft.azure.servicebus.messagehandleroptions.maxautorenewduration?view=azure-dotnet&preserve-view=true). The maximum allowed for this setting is 5 minutes according to the Service Bus documentation, whereas you can increase the Functions time limit from the default of 5 minutes to 10 minutes. For Service Bus functions you wouldnΓÇÖt want to do that then, because youΓÇÖd exceed the Service Bus renewal limit.
+The `maxAutoRenewDuration` is configurable in *host.json*, which maps to [OnMessageOptions.MaxAutoRenewDuration](/dotnet/api/microsoft.azure.servicebus.messagehandleroptions.maxautorenewduration). The maximum allowed for this setting is 5 minutes according to the Service Bus documentation, whereas you can increase the Functions time limit from the default of 5 minutes to 10 minutes. For Service Bus functions you wouldn't want to do that, because you'd exceed the Service Bus renewal limit.
## Message metadata
-The Service Bus trigger provides several [metadata properties](./functions-bindings-expressions-patterns.md#trigger-metadata). These properties can be used as part of binding expressions in other bindings or as parameters in your code. These properties are members of the [Message](/dotnet/api/microsoft.azure.servicebus.message?view=azure-dotnet&preserve-view=true) class.
+The Service Bus trigger provides several [metadata properties](./functions-bindings-expressions-patterns.md#trigger-metadata). These properties can be used as part of binding expressions in other bindings or as parameters in your code. These properties are members of the [Message](/dotnet/api/microsoft.azure.servicebus.message) class.
|Property|Type|Description|
|--|-|--|
azure-functions Functions Bindings Storage Queue Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-queue-trigger.md
For examples using these types, see [the GitHub repository for the extension](ht
# [Java](#tab/java)
-The [QueueTrigger](/java/api/com.microsoft.azure.functions.annotation.queuetrigger?view=azure-java-stable&preserve-view=true) annotation gives you access to the queue message that triggered the function.
+The [QueueTrigger](/java/api/com.microsoft.azure.functions.annotation.queuetrigger) annotation gives you access to the queue message that triggered the function.
# [JavaScript](#tab/javascript)
azure-functions Functions Bindings Storage Table Output https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-table-output.md
Alternatively you can use a `CloudTable` method parameter to write to the table
# [Java](#tab/java)
-There are two options for outputting a Table storage row from a function by using the [TableStorageOutput](/java/api/com.microsoft.azure.functions.annotation.tableoutput?view=azure-java-stablet&preserve-view=true) annotation:
+There are two options for outputting a Table storage row from a function by using the [TableStorageOutput](/java/api/com.microsoft.azure.functions.annotation.tableoutput) annotation:
- **Return value**: By applying the annotation to the function itself, the return value of the function is persisted as a Table storage row.
azure-functions Functions Bindings Timer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-timer.md
Expressed as a string, the `TimeSpan` format is `hh:mm:ss` when `hh` is less tha
|--|-| | "01:00:00" | every hour | | "00:01:00" | every minute |
-| "24:00:00" | every 24 days |
+| "25:00:00" | every 25 days |
| "1.00:00:00" | every day | ## Scale-out
azure-functions Functions Create Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-vnet.md
Title: Integrate Azure Functions with an Azure virtual network
-description: A step-by-step tutorial that shows you how to connect a function to an Azure virtual network
+ Title: Use private endpoints to integrate Azure Functions with a virtual network
+description: A step-by-step tutorial that shows you how to connect a function to an Azure virtual network and lock it down with private endpoints
Previously updated : 4/23/2020 Last updated : 2/22/2021
-#Customer intent: As an enterprise developer, I want create a function that can connect to a virtual network so that I can manage a WordPress app running on a VM in the virtual network.
+#Customer intent: As an enterprise developer, I want to create a function that can connect to a virtual network with private endpoints to secure my function app.
-# Tutorial: integrate Functions with an Azure virtual network
+# Tutorial: Integrate Azure Functions with an Azure virtual network using private endpoints
-This tutorial shows you how to use Azure Functions to connect to resources in an Azure virtual network. you'll create a function that has access to both the internet and to a VM running WordPress in virtual network.
+This tutorial shows you how to use Azure Functions to connect to resources in an Azure virtual network with private endpoints. You'll create a function app that uses a Service Bus queue trigger and whose storage account is locked behind a virtual network.
> [!div class="checklist"]
> * Create a function app in the Premium plan
-> * Deploy a WordPress site to VM in a virtual network
-> * Connect the function app to the virtual network
-> * Create a function proxy to access WordPress resources
-> * Request a WordPress file from inside the virtual network
+> * Create Azure resources (Service Bus, Storage Account, Virtual Network)
+> * Lock down your Storage account behind a private endpoint
+> * Lock down your Service Bus behind a private endpoint
+> * Deploy a function app with both Service Bus and HTTP triggers.
+> * Lock down your function app behind a private endpoint
+> * Test to see that your function app is secure behind the virtual network
+> * Clean up resources
-## Topology
+## Create a function app in a Premium plan
-The following diagram shows the architecture of the solution that you create:
+First, you create a .NET function app in the [Premium plan], because this tutorial uses C#. Other languages are also supported on Windows. This plan provides serverless scale while supporting virtual network integration.
- ![UI for virtual network integration](./media/functions-create-vnet/topology.png)
+1. From the Azure portal menu or the **Home** page, select **Create a resource**.
-Functions running in the Premium plan have the same hosting capabilities as web apps in Azure App Service, which includes the VNet Integration feature. To learn more about VNet Integration, including troubleshooting and advanced configuration, see [Integrate your app with an Azure virtual network](../app-service/web-sites-integrate-with-vnet.md).
+1. In the **New** page, select **Compute** > **Function App**.
-## Prerequisites
+1. On the **Basics** page, use the function app settings as specified in the following table:
-For this tutorial, it's important that you understand IP addressing and subnetting. You can start with [this article that covers the basics of addressing and subnetting](https://support.microsoft.com/help/164015/understanding-tcp-ip-addressing-and-subnetting-basics). Many more articles and videos are available online.
+ | Setting | Suggested value | Description |
+ | | - | -- |
+ | **Subscription** | Your subscription | The subscription under which this new function app is created. |
+ | **[Resource Group](../azure-resource-manager/management/overview.md)** | *myResourceGroup* | Name for the new resource group in which to create your function app. |
+ | **Function App name** | Globally unique name | Name that identifies your new function app. Valid characters are `a-z` (case insensitive), `0-9`, and `-`. |
+ |**Publish**| Code | Option to publish code files or a Docker container. |
+ | **Runtime stack** | .NET | This tutorial uses .NET |
+ |**Region**| Preferred region | Choose a [region](https://azure.microsoft.com/regions/) near you or near other services your functions access. |
-If you donΓÇÖt have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+1. Select **Next: Hosting**. On the **Hosting** page, enter the following settings:
-## Create a function app in a Premium plan
+ | Setting | Suggested value | Description |
+ | | - | -- |
+ | **[Storage account](../storage/common/storage-account-create.md)** | Globally unique name | Create a storage account used by your function app. Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only. You can also use an existing account, which must meet the [storage account requirements](./storage-considerations.md#storage-account-requirements). |
+ |**Operating system**| Windows | This tutorial uses Windows |
+ | **[Plan](./functions-scale.md)** | Premium | Hosting plan that defines how resources are allocated to your function app. Select **Premium**. By default, a new App Service plan is created. The default **Sku and size** is **EP1**, where EP stands for _elastic premium_. To learn more, see the [list of Premium SKUs](./functions-premium-plan.md#available-instance-skus).<br/>When running JavaScript functions on a Premium plan, you should choose an instance that has fewer vCPUs. For more information, see [Choose single-core Premium plans](./functions-reference-node.md#considerations-for-javascript-functions). |
+
+1. Select **Next: Monitoring**. On the **Monitoring** page, enter the following settings:
+
+ | Setting | Suggested value | Description |
+ | | - | -- |
+ | **[Application Insights](./functions-monitoring.md)** | Default | Creates an Application Insights resource of the same *App name* in the nearest supported region. By expanding this setting, you can change the **New resource name** or choose a different **Location** in an [Azure geography](https://azure.microsoft.com/global-infrastructure/geographies/) to store your data. |
+
+1. Select **Review + create** to review the app configuration selections.
+
+1. On the **Review + create** page, review your settings, and then select **Create** to provision and deploy the function app.
+
+1. Select the **Notifications** icon in the upper-right corner of the portal and watch for the **Deployment succeeded** message.
+
+1. Select **Go to resource** to view your new function app. You can also select **Pin to dashboard**. Pinning makes it easier to return to this function app resource from your dashboard.
+
+1. Congratulations! You've successfully created your premium function app!
+
+## Create Azure resources
+
+### Create a storage account
+
+A second storage account, separate from the one created with your function app, is required for the virtual network scenario.
+
+1. From the Azure portal menu or the **Home** page, select **Create a resource**.
+
+1. In the New page, search for **Storage Account** and select **Create**.
-First, you create a function app in the [Premium plan]. This plan provides serverless scale while supporting virtual network integration.
+1. On the **Basics** tab, set the settings as specified in the table below. The rest can be left as default:
+ | Setting | Suggested value | Description |
+ | | - | - |
+ | **Subscription** | Your subscription | The subscription under which your resources are created. |
+ | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Choose the resource group you created with your function app. |
+ | **Name** | mysecurestorage | The name of the storage account to which the private endpoint will be applied. |
+ | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | Choose the region you created your function app in. |
+
+1. Select **Review + create**. After validation completes, select **Create**.
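If you prefer the command line, a roughly equivalent Azure CLI sketch follows; the names and SKU are the suggested values from the table and can be changed:

```azurecli
az storage account create \
    --name mysecurestorage \
    --resource-group myResourceGroup \
    --location <region> \
    --sku Standard_LRS
```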
+
+### Create a Service Bus
+
+1. From the Azure portal menu or the **Home** page, select **Create a resource**.
+
+1. In the New page, search for **Service Bus** and select **Create**.
-You can pin the function app to the dashboard by selecting the pin icon in the upper right-hand corner. Pinning makes it easier to return to this function app after you create your VM.
+1. On the **Basics** tab, set the settings as specified in the table below. The rest can be left as default:
-## Create a VM inside a virtual network
+ | Setting | Suggested value | Description |
+ | | - | - |
+ | **Subscription** | Your subscription | The subscription under which your resources are created. |
+ | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Choose the resource group you created with your function app. |
+ | **Name** | myServiceBus | The name of the service bus to which the private endpoint will be applied. |
+ | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | Choose the region you created your function app in. |
+ | **Pricing tier** | Premium | Choose this tier to use private endpoints with Service Bus. |
+
+1. Select **Review + create**. After validation completes, select **Create**.
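A roughly equivalent Azure CLI sketch follows; the Premium SKU is what enables private endpoints, and the names are the suggested placeholders:

```azurecli
az servicebus namespace create \
    --name myServiceBus \
    --resource-group myResourceGroup \
    --location <region> \
    --sku Premium
```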
+
+### Create a virtual network
-Next, create a preconfigured VM that runs WordPress inside a virtual network ([WordPress LEMP7 Max Performance](https://jetware.io/appliances/jetware/wordpress4_lemp7-170526/profile?us=azure) by Jetware). A WordPress VM is used because of its low cost and convenience. This same scenario works with any resource in a virtual network, such as REST APIs, App Service Environments, and other Azure services.
+Azure resources in this tutorial either integrate with or are placed within a virtual network. You'll use private endpoints to keep network traffic contained within the virtual network.
-1. In the portal, choose **+ Create a resource** on the left navigation pane, in the search field type `WordPress LEMP7 Max Performance`, and press Enter.
+The tutorial creates two subnets:
+- **default**: Subnet for private endpoints. Private IP addresses are given from this subnet.
+- **functions**: Subnet for Azure Functions virtual network integration. This subnet is delegated to the function app.
-1. Choose **Wordpress LEMP Max Performance** in the search results. Select a software plan of **Wordpress LEMP Max Performance for CentOS** as the **Software Plan** and select **Create**.
+Now, create the virtual network to which the function app integrates.
-1. In the **Basics** tab, use the VM settings as specified in the table below the image:
+1. From the Azure portal menu or the Home page, select **Create a resource**.
- ![Basics tab for creating a VM](./media/functions-create-vnet/create-vm-1.png)
+1. In the New page, search for **Virtual Network** and select **Create**.
+
+1. On the **Basics** tab, use the virtual network settings as specified below:
| Setting | Suggested value | Description |
| | - | - |
| **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Choose `myResourceGroup`, or the resource group you created with your function app. Using the same resource group for the function app, WordPress VM, and hosting plan makes it easier to clean up resources when you're done with this tutorial. |
- | **Virtual machine name** | VNET-Wordpress | The VM name needs to be unique in the resource group |
- | **[Region](https://azure.microsoft.com/regions/)** | (Europe) West Europe | Choose a region near you or near the functions that access the VM. |
- | **Size** | B1s | Choose **Change size** and then select the B1s standard image, which has 1 vCPU and 1 GB of memory. |
- | **Authentication type** | Password | To use password authentication, you must also specify a **Username**, a secure **Password**, and then **Confirm password**. For this tutorial, you won't need to sign in to the VM unless you need to troubleshoot. |
+ | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Choose the resource group you created with your function app. |
+ | **Name** | myVirtualNet| The name of your virtual network to which your function app will connect. |
+ | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | Choose the region you created your function app in. |
+
+1. On the **IP Addresses** tab, select **Add subnet**. Use the settings as specified below when adding a subnet:
+
+ :::image type="content" source="./media/functions-create-vnet/1-create-vnet-ip-address.png" alt-text="Screenshot of the create virtual network configuration view.":::
+
+ | Setting | Suggested value | Description |
+ | | - | - |
+ | **Subnet name** | functions | The name of the subnet your function app will connect to. |
+ | **Subnet address range** | 10.0.1.0/24 | Notice that the IPv4 address space in the image above is 10.0.0.0/16. If the address space were instead 10.1.0.0/16, the recommended *Subnet address range* would be 10.1.1.0/24. |
+
+1. Select **Review + create**. After validation completes, select **Create**.
+
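+As an alternative to the portal steps, here's a minimal Azure CLI sketch that creates the same virtual network and subnets. It assumes the names and address space used in this tutorial.
+
+```azurecli
+# Create the virtual network with the default subnet used for private endpoints.
+az network vnet create \
+  --name myVirtualNet \
+  --resource-group myResourceGroup \
+  --location westeurope \
+  --address-prefixes 10.0.0.0/16 \
+  --subnet-name default \
+  --subnet-prefixes 10.0.0.0/24
+
+# Add the functions subnet and delegate it to App Service,
+# which virtual network integration requires.
+az network vnet subnet create \
+  --name functions \
+  --vnet-name myVirtualNet \
+  --resource-group myResourceGroup \
+  --address-prefixes 10.0.1.0/24 \
+  --delegations Microsoft.Web/serverFarms
+```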
+## Lock down your storage account with private endpoints
+
+Azure Private Endpoints are used to connect to specific Azure resources using a private IP address. This connection ensures that network traffic remains within the chosen virtual network, and access is available only for specific resources. Now, create the private endpoints for Azure File storage and Azure Blob storage with your storage account.
+
+1. In your new storage account, select **Networking** in the left menu.
+
+1. Select the **Private endpoint connections** tab, and select **Private endpoint**.
+
+ :::image type="content" source="./media/functions-create-vnet/2-navigate-private-endpoint-store.png" alt-text="Screenshot of how to navigate to create private endpoints for the storage account.":::
+
+1. On the **Basics** tab, use the private endpoint settings as specified below:
+
+ | Setting | Suggested value | Description |
+ | | - | - |
+ | **Subscription** | Your subscription | The subscription under which your resources are created. |
+ | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Choose the resource group you created with your function app. |
+ | **Name** | file-endpoint | The name of the private endpoint for files from your storage account. |
+ | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | Choose the region you created your storage account in. |
+
+1. On the **Resource** tab, use the private endpoint settings as specified below:
+
+ | Setting | Suggested value | Description |
+ | | - | - |
+ | **Subscription** | Your subscription | The subscription under which your resources are created. |
+ | **Resource type** | Microsoft.Storage/storageAccounts | This is the resource type for storage accounts. |
+ | **Resource** | mysecurestorage | The storage account you just created. |
+ | **Target sub-resource** | file | This private endpoint will be used for files from the storage account. |
+
+1. On the **Configuration** tab, choose **default** for the Subnet setting.
+
+1. Select **Review + create**. After validation completes, select **Create**. Resources in the virtual network can now talk to storage files.
+
+1. Create another private endpoint for blobs. For the **Resources** tab, use the below settings. For all other settings, use the same settings from the file private endpoint creation steps you just followed.
+
+ | Setting | Suggested value | Description |
+ | | - | - |
+ | **Subscription** | Your subscription | The subscription under which your resources are created. |
+ | **Resource type** | Microsoft.Storage/storageAccounts | This is the resource type for storage accounts. |
+ | **Resource** | mysecurestorage | The storage account you just created. |
+ | **Target sub-resource** | blob | This private endpoint will be used for blobs from the storage account. |
+
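+If you're scripting this setup, both private endpoints can also be created with the Azure CLI. This sketch assumes the resource names used above; the connection names are illustrative, and on older CLI versions the `--group-id` flag is spelled `--group-ids`.
+
+```azurecli
+# Look up the resource ID of the storage account.
+storageId=$(az storage account show \
+  --name mysecurestorage \
+  --resource-group myResourceGroup \
+  --query id --output tsv)
+
+# Private endpoint for the file sub-resource.
+az network private-endpoint create \
+  --name file-endpoint \
+  --resource-group myResourceGroup \
+  --vnet-name myVirtualNet \
+  --subnet default \
+  --private-connection-resource-id $storageId \
+  --group-id file \
+  --connection-name file-endpoint-connection
+
+# Private endpoint for the blob sub-resource.
+az network private-endpoint create \
+  --name blob-endpoint \
+  --resource-group myResourceGroup \
+  --vnet-name myVirtualNet \
+  --subnet default \
+  --private-connection-resource-id $storageId \
+  --group-id blob \
+  --connection-name blob-endpoint-connection
+```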
+## Lock down your service bus with a private endpoint
-1. Choose the **Networking** tab and under Configure virtual networks select **Create new**.
+Now, create the private endpoint for your Azure Service Bus.
-1. In **Create virtual network**, use the settings in the table below the image:
+1. In your new service bus, select **Networking** in the left menu.
- ![Networking tab of create VM](./media/functions-create-vnet/create-vm-2.png)
+1. Select the **Private endpoint connections** tab, and select **Private endpoint**.
+
+ :::image type="content" source="./media/functions-create-vnet/3-navigate-private-endpoint-service-bus.png" alt-text="Screenshot of how to navigate to private endpoints for service bus.":::
+
+1. On the **Basics** tab, use the private endpoint settings as specified below:
+
+ | Setting | Suggested value | Description |
+ | | - | - |
+ | **Subscription** | Your subscription | The subscription under which your resources are created. |
+ | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Choose the resource group you created with your function app. |
+ | **Name** | sb-endpoint | The name of the private endpoint for your service bus. |
+ | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | Choose the region you created your service bus in. |
+
+1. On the **Resource** tab, use the private endpoint settings as specified below:
| Setting | Suggested value | Description | | | - | - |
- | **Name** | myResourceGroup-vnet | You can use the default name generated for your virtual network. |
- | **Address range** | 10.10.0.0/16 | Use a single address range for the virtual network. |
- | **Subnet name** | Tutorial-Net | Name of the subnet. |
- | **Address range** (subnet) | 10.10.1.0/24 | The subnet size defines how many interfaces can be added to the subnet. This subnet is used by the WordPress site. A `/24` subnet provides 254 host addresses. |
+ | **Subscription** | Your subscription | The subscription under which your resources are created. |
+ | **Resource type** | Microsoft.ServiceBus/namespaces | This is the resource type for Service Bus. |
+ | **Resource** | myServiceBus | The Service Bus you created earlier in the tutorial. |
+ | **Target sub-resource** | namespace | This private endpoint will be used for the service bus namespace. |
+
+1. On the **Configuration** tab, choose **default** for the Subnet setting.
+
+1. Select **Review + create**. After validation completes, select **Create**. Resources in the virtual network can now talk to the service bus.
+
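+A CLI sketch of the same step, under the same assumptions as the storage endpoints above:
+
+```azurecli
+# Look up the resource ID of the Service Bus namespace.
+sbId=$(az servicebus namespace show \
+  --name myServiceBus \
+  --resource-group myResourceGroup \
+  --query id --output tsv)
+
+# Private endpoint for the namespace sub-resource.
+az network private-endpoint create \
+  --name sb-endpoint \
+  --resource-group myResourceGroup \
+  --vnet-name myVirtualNet \
+  --subnet default \
+  --private-connection-resource-id $sbId \
+  --group-id namespace \
+  --connection-name sb-endpoint-connection
+```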
+## Create a file share
+
+1. In the storage account you created, select **File shares** in the left menu.
+
+1. Select **+ File share**. For the purposes of this tutorial, provide **files** as the name for the file share.
+
+ :::image type="content" source="./media/functions-create-vnet/4-create-file-share.png" alt-text="Screenshot of how to create a file share in the storage account.":::
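+A CLI alternative, assuming the names used above. The `az storage share-rm` command works through the management plane, so it succeeds even when the storage account's public endpoint is restricted.
+
+```azurecli
+# Create the file share used by the function app.
+az storage share-rm create \
+  --storage-account mysecurestorage \
+  --resource-group myResourceGroup \
+  --name files
+```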
-1. Select **OK** to create the virtual network.
+## Get storage account connection string
-1. Back in the **Networking** tab, choose **None** for **Public IP**.
+1. In the storage account you created, select **Access keys** in the left menu.
-1. Choose the **Management** tab, then in **Diagnostics storage account**, choose the Storage account you created with your function app.
+1. Select **Show keys**. Copy the connection string for **key1** and save it. You'll need this connection string later when you configure the app settings.
-1. Select **Review + create**. After validation completes, select **Create**. The VM create process takes a few minutes. The created VM can only access the virtual network.
+ :::image type="content" source="./media/functions-create-vnet/5-get-store-connection-string.png" alt-text="Screenshot of how to get a storage account connection string.":::
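+You can also fetch the connection string with the Azure CLI, assuming the storage account name used above:
+
+```azurecli
+# Print the connection string for the storage account.
+az storage account show-connection-string \
+  --name mysecurestorage \
+  --resource-group myResourceGroup \
+  --output tsv
+```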
-1. After the VM is created, choose **Go to resource** to view the page for your new VM, then choose **Networking** under **Settings**.
+## Create a queue
-1. Verify that there's no **Public IP**. Make a note the **Private IP**, which you use to connect to the VM from your function app.
+This is the queue your Azure Functions Service Bus trigger will get events from.
- ![Networking settings in the VM](./media/functions-create-vnet/vm-networking.png)
+1. In your service bus, select **Queues** in the left menu.
-You now have a WordPress site deployed entirely within your virtual network. This site isn't accessible from the public internet.
+1. Select **+ Queue**. For the purposes of this tutorial, provide **queue** as the name for the queue.
-## Connect your function app to the virtual network
+ :::image type="content" source="./media/functions-create-vnet/6-create-queue.png" alt-text="Screenshot of how to create a service bus queue.":::
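+A CLI equivalent, assuming the service bus name used above:
+
+```azurecli
+# Create the queue that the Service Bus trigger reads from.
+az servicebus queue create \
+  --name queue \
+  --namespace-name myServiceBus \
+  --resource-group myResourceGroup
+```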
-With a WordPress site running in a VM in a virtual network, you can now connect your function app to that virtual network.
+## Get service bus connection string
-1. In your new function app, select **Networking** in the left menu.
+1. In your service bus, select **Shared access policies** in the left menu.
-1. Under **VNet Integration**, select **Click here to configure**.
+1. Select **RootManageSharedAccessKey**. Copy the **Primary Connection String** and save it. You'll need this connection string later when you configure the app settings.
- :::image type="content" source="./media/functions-create-vnet/networking-0.png" alt-text="Choose networking in the function app":::
+ :::image type="content" source="./media/functions-create-vnet/7-get-service-bus-connection-string.png" alt-text="Screenshot of how to get a service bus connection string.":::
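+The same connection string can be read with the Azure CLI, assuming the names used above:
+
+```azurecli
+# Print the primary connection string for the default management rule.
+az servicebus namespace authorization-rule keys list \
+  --name RootManageSharedAccessKey \
+  --namespace-name myServiceBus \
+  --resource-group myResourceGroup \
+  --query primaryConnectionString --output tsv
+```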
-1. On the **VNET Integration** page, select **Add VNet**.
+## Integrate function app with your virtual network
- :::image type="content" source="./media/functions-create-vnet/networking-2.png" alt-text="Add the VNet Integration preview":::
+To use your function app with virtual networks, you'll need to join it to a subnet. This tutorial uses a dedicated subnet (**functions**) for the Azure Functions virtual network integration and the **default** subnet for all other private endpoints.
-1. In **Network Feature Status**, use the settings in the table below the image:
+1. In your function app, select **Networking** in the left menu.
- ![Define the function app virtual network](./media/functions-create-vnet/networking-3.png)
+1. Select **Click here to configure** under VNet Integration.
+
+ :::image type="content" source="./media/functions-create-vnet/8-connect-app-vnet.png" alt-text="Screenshot of how to navigate to virtual network integration.":::
+
+1. Select **Add VNet**.
+
+1. In the pane that opens, under **Virtual Network**, select the virtual network you created earlier.
+
+1. For **Subnet**, select the **functions** subnet you created earlier. Your function app is now integrated with your virtual network!
+
+ :::image type="content" source="./media/functions-create-vnet/9-connect-app-subnet.png" alt-text="Screenshot of how to connect a function app to a subnet.":::
+
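+If you script this step, the following sketch joins the app to the subnet. It assumes a recent Azure CLI and the names used above; `<function_app>` is a placeholder for your function app name.
+
+```azurecli
+# Integrate the function app with the delegated functions subnet.
+az functionapp vnet-integration add \
+  --name <function_app> \
+  --resource-group myResourceGroup \
+  --vnet myVirtualNet \
+  --subnet functions
+```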
+## Configure your function app settings for private endpoints
+
+1. In your function app, select **Configuration** from the left menu.
+
+1. To use your function app with virtual networks, you need to update the following app settings. To add a setting, select **+ New application setting**; to edit an existing one, select the pencil icon in the rightmost **Edit** column of the app settings table. When you're done, select **Save**. (A CLI alternative for these settings appears at the end of this section.)
+
+ :::image type="content" source="./media/functions-create-vnet/10-configure-app-settings.png" alt-text="Screenshot of how to configure function app settings for private endpoints.":::
| Setting | Suggested value | Description |
| | - | - |
- | **Virtual Network** | MyResourceGroup-vnet | This virtual network is the one you created earlier. |
- | **Subnet** | Create New Subnet | Create a subnet in the virtual network for your function app to use. VNet Integration must be configured to use an empty subnet. It doesn't matter that your functions use a different subnet than your VM. The virtual network automatically routes traffic between the two subnets. |
- | **Subnet name** | Function-Net | Name of the new subnet. |
- | **Virtual network address block** | 10.10.0.0/16 | Choose the same address block used by the WordPress site. You should only have one address block defined. |
- | **Address range** | 10.10.2.0/24 | The subnet size restricts the total number of instances that your Premium plan function app can scale out to. This example uses a `/24` subnet with 254 available host addresses. This subnet is over-provisioned, but easy to calculate. |
+ | **AzureWebJobsStorage** | mysecurestorageConnectionString | The connection string of the storage account you created. This is the storage connection string from [Get storage account connection string](#get-storage-account-connection-string). By changing this setting, your function app will now use the secure storage account for normal operations at runtime. |
+ | **WEBSITE_CONTENTAZUREFILECONNECTIONSTRING** | mysecurestorageConnectionString | The connection string of the storage account you created. By changing this setting, your function app will now use the secure storage account for Azure Files, which are used when deploying. |
+ | **WEBSITE_CONTENTSHARE** | files | The name of the file share you created in the storage account. This app setting is used in conjunction with WEBSITE_CONTENTAZUREFILECONNECTIONSTRING. |
+ | **SERVICEBUS_CONNECTION** | myServiceBusConnectionString | Create an app setting for the connection string of your service bus. This is the connection string from [Get service bus connection string](#get-service-bus-connection-string). |
+ | **WEBSITE_CONTENTOVERVNET** | 1 | Create this app setting. A value of 1 enables your function app to scale when you have your storage account restricted to a virtual network. You should enable this setting when restricting your storage account to a virtual network. |
+ | **WEBSITE_DNS_SERVER** | 168.63.129.16 | Create this app setting. After your app integrates with a virtual network, it uses the same DNS server as the virtual network. This is one of two settings needed to have your function app work with Azure DNS private zones, and both are required when using private endpoints. These settings send all outbound calls from your app into your virtual network. |
+ | **WEBSITE_VNET_ROUTE_ALL** | 1 | Create this app setting. This is the second of the two settings needed to have your function app work with Azure DNS private zones, and both are required when using private endpoints. Together with WEBSITE_DNS_SERVER, this setting sends all outbound calls from your app into your virtual network. |
+
+1. Staying on the **Configuration** view, select the **Function runtime settings** tab.
+
+1. Set **Runtime Scale Monitoring** to **On**, and select **Save**. Runtime driven scaling allows you to connect non-HTTP trigger functions to services running inside your virtual network.
-1. Select **OK** to add the subnet. Close the **VNet Integration** and **Network Feature Status** pages to return to your function app page.
+ :::image type="content" source="./media/functions-create-vnet/11-enable-runtime-scaling.png" alt-text="Screenshot of how to enable Runtime Driven Scaling for Azure Functions.":::
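+If you prefer the CLI, the following sketch creates the three new app settings and turns on Runtime Scale Monitoring. It assumes the names used above; `<function_app>` is a placeholder, and you would set AzureWebJobsStorage, WEBSITE_CONTENTAZUREFILECONNECTIONSTRING, WEBSITE_CONTENTSHARE, and SERVICEBUS_CONNECTION the same way, using the connection strings you saved earlier.
+
+```azurecli
+# Create the settings that let the app run and scale behind a virtual network.
+az functionapp config appsettings set \
+  --name <function_app> \
+  --resource-group myResourceGroup \
+  --settings "WEBSITE_CONTENTOVERVNET=1" \
+             "WEBSITE_DNS_SERVER=168.63.129.16" \
+             "WEBSITE_VNET_ROUTE_ALL=1"
+
+# Runtime Scale Monitoring is a site config property, not an app setting.
+az resource update \
+  --resource-group myResourceGroup \
+  --name <function_app>/config/web \
+  --resource-type Microsoft.Web/sites \
+  --set properties.functionsRuntimeScaleMonitoringEnabled=1
+```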
-The function app can now access the virtual network where the WordPress site is running. Next, you use [Azure Functions Proxies](functions-proxies.md) to return a file from the WordPress site.
+## Deploy a service bus trigger and HTTP trigger to your function app
-## Create a proxy to access VM resources
+1. In GitHub, browse to the following sample repository, which contains a function app with two functions, an HTTP Trigger and a Service Bus Queue Trigger.
-With VNet Integration enabled, you can create a proxy in your function app to forward requests to the VM running in the virtual network.
+ <https://github.com/Azure-Samples/functions-vnet-tutorial>
-1. In your function app, select **Proxies** from the left menu, and then select **Add**. Use the proxy settings in the table below the image:
+1. At the top of the page, select the **Fork** button to create a fork of this repository in your own GitHub account or organization.
- :::image type="content" source="./media/functions-create-vnet/create-proxy.png" alt-text="Define the proxy settings":::
+1. In your function app, select **Deployment Center** from the left menu. Then, select **Settings**.
- | Setting | Suggested value | Description |
- | -- | - | - |
- | **Name** | Plant | The name can be any value. It's used to identify the proxy. |
- | **Route Template** | /plant | Route that maps to a VM resource. |
- | **Backend URL** | http://<YOUR_VM_IP>/wp-content/themes/twentyseventeen/assets/images/header.jpg | Replace `<YOUR_VM_IP>` with the IP address of your WordPress VM that you created earlier. This mapping returns a single file from the site. |
+1. On the **Settings** tab, use the deployment settings as specified below:
+
+ | Setting | Suggested value | Description |
+ | | - | - |
+ | **Source** | GitHub | You should have created a GitHub repo with the sample code in step 2. |
+ | **Organization** | myOrganization | This is the organization your repo is checked into, usually your account. |
+ | **Repository** | myRepo | The repo you created with the sample code. |
+ | **Branch** | main | This is the repo you just created, so use the main branch. |
+ | **Runtime stack** | .NET | The sample code is in C#. |
-1. Select **Create** to add the proxy to your function app.
+1. Select **Save**.
-## Try it out
+ :::image type="content" source="./media/functions-create-vnet/12-deploy-portal.png" alt-text="Screenshot of how to deploy Azure Functions code through the portal.":::
-1. In your browser, try to access the URL you used as the **Backend URL**. As expected, the request times out. A timeout occurs because your WordPress site is connected only to your virtual network and not the internet.
+1. Your initial deployment may take a few minutes. When your app is successfully deployed, the **Logs** tab shows a **Success (Active)** status message. If needed, refresh the page.
-1. Copy the **Proxy URL** value from your new proxy and paste it into the address bar of your browser. The returned image is from the WordPress site running inside your virtual network.
+1. Congratulations! You have successfully deployed your sample function app.
- ![Plant image file returned from the WordPress site](./media/functions-create-vnet/plant.png)
+## Lock down your function app with a private endpoint
-Your function app is connected to both the internet and your virtual network. The proxy is receiving a request over the public internet, and then acting as a simple HTTP proxy to forward that request to the connected virtual network. The proxy then relays the response back to you publicly over the internet.
+Now, create the private endpoint for your function app. This private endpoint will connect your function app privately and securely to your virtual network using a private IP address. For more information on private endpoints, go to the [private endpoints documentation](https://docs.microsoft.com/azure/private-link/private-endpoint-overview).
+
+1. In your function app, select **Networking** in the left menu.
+
+1. Select **Click here to configure** under Private Endpoint Connections.
+
+ :::image type="content" source="./media/functions-create-vnet/14-navigate-app-private-endpoint.png" alt-text="Screenshot of how to navigate to a Function App Private Endpoint.":::
+
+1. Select **Add**.
+
+1. In the pane that opens, use the private endpoint settings shown in the following image:
+
+ :::image type="content" source="./media/functions-create-vnet/15-create-app-private-endpoint.png" alt-text="Screenshot of how to create a Function App private endpoint.":::
+
+1. Select **OK** to add the private endpoint. Congratulations! You've successfully secured your function app, service bus, and storage account with private endpoints!
+
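+A CLI sketch of this step, assuming the names used in this tutorial; the endpoint and connection names are illustrative, and `<function_app>` is a placeholder.
+
+```azurecli
+# Look up the resource ID of the function app.
+appId=$(az functionapp show \
+  --name <function_app> \
+  --resource-group myResourceGroup \
+  --query id --output tsv)
+
+# Function apps use the sites sub-resource, like web apps.
+az network private-endpoint create \
+  --name functionapp-endpoint \
+  --resource-group myResourceGroup \
+  --vnet-name myVirtualNet \
+  --subnet default \
+  --private-connection-resource-id $appId \
+  --group-id sites \
+  --connection-name functionapp-endpoint-connection
+```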
+### Test your locked down function app
+
+1. In your function app, select **Functions** from the left menu.
+
+1. Select the **ServiceBusQueueTrigger**.
+
+1. From the left menu, select **Monitor**. You'll see that you're unable to monitor your app. This is because your browser doesn't have access to the virtual network, so it can't directly reach resources within it. Next, you'll use another method that can still monitor your function: Application Insights.
+
+1. In your function app, select **Application Insights** from the left menu and select **View Application Insights data**.
+
+ :::image type="content" source="./media/functions-create-vnet/16-app-insights.png" alt-text="Screenshot of how to view application insights for a Function App.":::
+
+1. Select **Live metrics** from the left menu.
+
+1. Open a new tab. In your Service Bus, select **Queues** from the left menu.
+
+1. Select your queue.
+
+1. Select **Service Bus Explorer** from the left menu. Under **Send**, choose **Text/Plain** as the **Content Type** and enter a message.
+
+1. Select **Send** to send the message.
+
+ :::image type="content" source="./media/functions-create-vnet/17-send-service-bus-message.png" alt-text="Screenshot of how to send Service Bus messages using portal.":::
+
+1. On the tab with **Live metrics** open, you should see that your Service Bus queue trigger has fired. If it hasn't, resend the message from **Service Bus Explorer**.
+
+ :::image type="content" source="./media/functions-create-vnet/18-hello-world.png" alt-text="Screenshot of how to view messages using live metrics for function apps.":::
+
+1. Congratulations! You've successfully tested your function app set up with private endpoints!
+
+### Private DNS Zones
+Using a private endpoint to connect to Azure resources means connecting to a private IP address instead of the public endpoint. By default, Azure services resolve names through public DNS to the public endpoint, so the DNS configuration must be overridden for names to resolve to the private endpoint instead.
+
+A private DNS zone was created for each Azure resource configured with a private endpoint. A DNS A record is created for each private IP address associated with the private endpoint.
+
+The following DNS zones were created in this tutorial:
+
+- privatelink.file.core.windows.net
+- privatelink.blob.core.windows.net
+- privatelink.servicebus.windows.net
+- privatelink.azurewebsites.net
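+When you create private endpoints in the portal, as in this tutorial, these zones and their records are created and linked for you. If you script the endpoints with the CLI instead, you create and link the zones yourself. The following sketch shows the blob zone; the link and zone-group names are illustrative, and the same pattern applies to each zone in the list.
+
+```azurecli
+# Create the private DNS zone for blob storage.
+az network private-dns zone create \
+  --resource-group myResourceGroup \
+  --name privatelink.blob.core.windows.net
+
+# Link the zone to the virtual network so resources in it resolve private IPs.
+az network private-dns link vnet create \
+  --resource-group myResourceGroup \
+  --zone-name privatelink.blob.core.windows.net \
+  --name blob-link \
+  --virtual-network myVirtualNet \
+  --registration-enabled false
+
+# Register the private endpoint's IP address in the zone.
+az network private-endpoint dns-zone-group create \
+  --resource-group myResourceGroup \
+  --endpoint-name blob-endpoint \
+  --name default \
+  --private-dns-zone privatelink.blob.core.windows.net \
+  --zone-name blob
+```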
[!INCLUDE [clean-up-section-portal](../../includes/clean-up-section-portal.md)]

## Next steps
-In this tutorial, the WordPress site serves as an API that is called by using a proxy in the function app. This scenario makes a good tutorial because it's easy to set up and visualize. You could use any other API deployed within a virtual network. You could also have created a function with code that calls APIs deployed within the virtual network. A more realistic scenario is a function that uses data client APIs to call a SQL Server instance deployed in the virtual network.
-
-Functions running in a Premium plan share the same underlying App Service infrastructure as web apps on PremiumV2 plans. All the documentation for [web apps in Azure App Service](../app-service/overview.md) applies to your Premium plan functions.
+In this tutorial, you created a Premium function app, a storage account, and a service bus, and you secured them all behind private endpoints! Use the following link to learn more about the networking features available in Functions:
> [!div class="nextstepaction"]
> [Learn more about the networking options in Functions](./functions-networking-options.md)
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-dotnet-class-library.md
This article is an introduction to developing Azure Functions by using C# in .NE
As a C# developer, you may also be interested in one of the following articles:

| Getting started | Concepts| Guided learning/samples |
-| -- | -- | -- |
+|--| -- |--|
| <ul><li>[Using Visual Studio](functions-create-your-first-function-visual-studio.md)</li><li>[Using Visual Studio Code](create-first-function-vs-code-csharp.md)</li><li>[Using command line tools](create-first-function-cli-csharp.md)</li></ul> | <ul><li>[Hosting options](functions-scale.md)</li><li>[Performance&nbsp;considerations](functions-best-practices.md)</li><li>[Visual Studio development](functions-develop-vs.md)</li><li>[Dependency injection](functions-dotnet-dependency-injection.md)</li></ul> | <ul><li>[Create serverless applications](/learn/paths/create-serverless-applications/)</li><li>[C# samples](/samples/browse/?products=azure-functions&languages=csharp)</li></ul> |

Azure Functions supports C# and C# script programming languages. If you're looking for guidance on [using C# in the Azure portal](functions-create-function-app-portal.md), see [C# script (.csx) developer reference](functions-reference-csharp.md).

## Supported versions
-Versions of the Functions runtime work with specific versions of .NET. The following table shows the highest level of .NET Core and .NET Framework and .NET Core that can be used with a specific version of Functions in your project.
+Versions of the Functions runtime work with specific versions of .NET. To learn more about Functions versions, see [Azure Functions runtime versions overview](functions-versions.md).
+
+The following table shows the highest level of .NET Core or .NET Framework that can be used with a specific version of Functions.
| Functions runtime version | Max .NET version |
| - | - |
-| Functions 3.x | .NET Core 3.1<br/>.NET 5.0<sup>*</sup> |
-| Functions 2.x | .NET Core 2.2 |
+| Functions 3.x | .NET Core 3.1<br/>.NET 5.0<sup>1</sup> |
+| Functions 2.x | .NET Core 2.2<sup>2</sup> |
| Functions 1.x | .NET Framework 4.7 |
-<sup>*</sup> Must run [out-of-process](dotnet-isolated-process-guide.md).
+<sup>1</sup> Must run [out-of-process](dotnet-isolated-process-guide.md).
+<sup>2</sup> For details, see [Functions v2.x considerations](#functions-v2x-considerations).
+
+For the latest news about Azure Functions releases, including the removal of specific older minor versions, monitor [Azure App Service announcements](https://github.com/Azure/app-service-announcements/issues).
+
+### Functions v2.x considerations
+
+Function apps that target the latest 2.x version (`~2`) are automatically upgraded to run on .NET Core 3.1. Because of breaking changes between .NET Core versions, not all apps developed and compiled against .NET Core 2.2 can be safely upgraded to .NET Core 3.1. You can opt out of this upgrade by pinning your function app to `~2.0`. Functions also detects incompatible APIs and may pin your app to `~2.0` to prevent incorrect execution on .NET Core 3.1.
+
+>[!NOTE]
+>If your function app is pinned to `~2.0` and you change this version target to `~2`, your function app may break. If you deploy using ARM templates, check the version in your templates. If this occurs, change your version back to target `~2.0` and fix compatibility issues.
+
+Function apps that target `~2.0` continue to run on .NET Core 2.2. This version of .NET Core no longer receives security and other maintenance updates. To learn more, see [this announcement page](https://github.com/Azure/app-service-announcements/issues/266).
+
+You should work to make your functions compatible with .NET Core 3.1 as soon as possible. After you've resolved these issues, change your version back to `~2` or upgrade to `~3`. To learn more about targeting versions of the Functions runtime, see [How to target Azure Functions runtime versions](set-runtime-version.md).
-To learn more, see [Azure Functions runtime versions overview](functions-versions.md)
+When running on Linux in a Premium or dedicated (App Service) plan, you instead pin your version by targeting a specific image: set the `linuxFxVersion` site config setting to `DOCKER|mcr.microsoft.com/azure-functions/dotnet:2.0.14786-appservice`. To learn how to set `linuxFxVersion`, see [Manual version updates on Linux](set-runtime-version.md#manual-version-updates-on-linux).
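+As an example, this CLI sketch sets `linuxFxVersion` to that image; `<function_app>` and `<resource_group>` are placeholders, and the flag name may vary by CLI version.
+
+```azurecli
+# Pin a Linux Premium or dedicated plan app to a specific runtime image.
+az functionapp config set \
+  --name <function_app> \
+  --resource-group <resource_group> \
+  --linux-fx-version "DOCKER|mcr.microsoft.com/azure-functions/dotnet:2.0.14786-appservice"
+```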
## Functions class library project
The trigger attribute specifies the trigger type and binds input data to a metho
## Method signature parameters
-The method signature may contain parameters other than the one used with the trigger attribute. Here are some of the additional parameters that you can include:
+The method signature may contain parameters other than the one used with the trigger attribute. Here are some of the other parameters that you can include:
* [Input and output bindings](functions-triggers-bindings.md) marked as such by decorating them with attributes.
* An `ILogger` or `TraceWriter` ([version 1.x-only](functions-versions.md#creating-1x-apps)) parameter for [logging](#logging).
* A `CancellationToken` parameter for [graceful shutdown](#cancellation-tokens).
* [Binding expressions](./functions-bindings-expressions-patterns.md) parameters to get trigger metadata.
-The order of parameters in the function signature does not matter. For example, you can put trigger parameters before or after other bindings, and you can put the logger parameter before or after trigger or binding parameters.
+The order of parameters in the function signature doesn't matter. For example, you can put trigger parameters before or after other bindings, and you can put the logger parameter before or after trigger or binding parameters.
### Output bindings
public static class BindingExpressionsExample
The build process creates a *function.json* file in a function folder in the build folder. As noted earlier, this file is not meant to be edited directly. You can't change binding configuration or disable the function by editing this file.
-The purpose of this file is to provide information to the scale controller to use for [scaling decisions on the Consumption plan](event-driven-scaling.md). For this reason, the file only has trigger info, not input or output bindings.
+The purpose of this file is to provide information to the scale controller to use for [scaling decisions on the Consumption plan](event-driven-scaling.md). For this reason, the file only has trigger info, not input/output bindings.
The generated *function.json* file includes a `configurationSource` property that tells the runtime to use .NET attributes for bindings, rather than *function.json* configuration. Here's an example:
The generated *function.json* file includes a `configurationSource` property tha
The *function.json* file generation is performed by the NuGet package [Microsoft\.NET\.Sdk\.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions).
-The same package is used for both version 1.x and 2.x of the Functions runtime. The target framework is what differentiates a 1.x project from a 2.x project. Here are the relevant parts of *.csproj* files, showing different target frameworks and the same `Sdk` package:
+The same package is used for both version 1.x and 2.x of the Functions runtime. The target framework is what differentiates a 1.x project from a 2.x project. Here are the relevant parts of *.csproj* files, showing different target frameworks with the same `Sdk` package:
# [v2.x+](#tab/v2)
public static class IBinderExample
defines the [Storage blob](functions-bindings-storage-blob.md) input or output binding, and [TextWriter](/dotnet/api/system.io.textwriter) is a supported output binding type.
-### Multiple attribute example
+### Multiple attributes example
The preceding example gets the app setting for the function app's main Storage account connection string (which is `AzureWebJobsStorage`). You can specify a custom app setting to use for the Storage account by adding the [StorageAccountAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/StorageAccountAttribute.cs)
azure-functions Functions Networking Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-networking-options.md
To learn more, see [Virtual network service endpoints](../virtual-network/virtua
## Restrict your storage account to a virtual network
-When you create a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. You can replace this storage account with one that is secured with service endpoints or private endpoint. This feature currently only works for all Vnet supported skus which includes Standard and Premium, except for on flex stamps where Vnet is available only for Premium sku. To set up a function with a storage account restricted to a private network:
+When you create a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. You can replace this storage account with one that is secured with service endpoints or a private endpoint. This feature currently works for all virtual-network-supported SKUs, which include Standard and Premium, except on flex stamps, where virtual networks are available only for the Premium SKU. To set up a function with a storage account restricted to a private network:
1. Create a function with a storage account that does not have service endpoints enabled.
1. Configure the function to connect to your virtual network.
When you create a function app, you must create or link to a general-purpose Azu
1. Enable service endpoints or private endpoint for the storage account.
   * If using private endpoint connections, the storage account will need a private endpoint for the `file` and `blob` subresources. If using certain capabilities like Durable Functions, you will also need `queue` and `table` accessible through a private endpoint connection.
   * If using service endpoints, enable the subnet dedicated to your function apps for storage accounts.
-1. (Optional) Copy the file and blob content from the function app storage account to the secured storage account and file share.
+1. Copy the file and blob content from the function app storage account to the secured storage account and file share.
1. Copy the connection string for this storage account.
1. Update the **Application Settings** under **Configuration** for the function app to the following:
   - `AzureWebJobsStorage` to the connection string for the secured storage account.
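The service endpoint variant of these steps can be scripted. The following sketch is illustrative, with placeholder names throughout:

```azurecli
# Enable the Microsoft.Storage service endpoint on the function app's subnet.
az network vnet subnet update \
  --name <functions_subnet> \
  --vnet-name <vnet_name> \
  --resource-group <resource_group> \
  --service-endpoints Microsoft.Storage

# Allow that subnet through the storage account firewall.
az storage account network-rule add \
  --account-name <storage_account> \
  --resource-group <resource_group> \
  --vnet-name <vnet_name> \
  --subnet <functions_subnet>

# Deny all other public traffic to the storage account.
az storage account update \
  --name <storage_account> \
  --resource-group <resource_group> \
  --default-action Deny
```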
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-versions.md
Last updated 12/09/2019
# Azure Functions runtime versions overview
-Azure Functions currently supports three versions of the runtime host: 1.x, 2.x, and 3.x. All three versions are supported for production scenarios.
+Azure Functions currently supports three versions of the runtime host: 3.x, 2.x, and 1.x. All three versions are supported for production scenarios.
> [!IMPORTANT]
> Version 1.x is in maintenance mode and only supports development in the Azure portal, Azure Stack Hub portal, or locally on Windows computers. Enhancements are provided only in later versions.
The following table indicates which programming languages are currently supporte
## <a name="creating-1x-apps"></a>Run on a specific version
-By default, function apps created in the Azure portal and by the Azure CLI are set to version 3.x. You can modify this version as needed. You can only change the runtime version to 1.x after you create your function app but before you add any functions. Moving between 2.x and 3.x is allowed even with apps that have functions, but it is still recommended to test in a new app first.
+By default, function apps created in the Azure portal and by the Azure CLI are set to version 3.x. You can modify this version as needed. You can only downgrade the runtime version to 1.x after you create your function app but before you add any functions. Moving between 2.x and 3.x is allowed even with apps that have existing functions. Before moving an app with existing functions from 2.x to 3.x, be aware of any [breaking changes between 2.x and 3.x](#breaking-changes-between-2x-and-3x).
-## Migrating from 1.x to later versions
-
-You may choose to migrate an existing app written to use the version 1.x runtime to instead use a newer version. Most of the changes you need to make are related to changes in the language runtime, such as C# API changes between .NET Framework 4.7 and .NET Core. You'll also need to make sure your code and libraries are compatible with the language runtime you choose. Finally, be sure to note any changes in trigger, bindings, and features highlighted below. For the best migration results, you should create a new function app in a new version and port your existing version 1.x function code to the new app.
-
-While it's possible to do an "in-place" upgrade by manually updating the app configuration, going from 1.x to a higher version includes some breaking changes. For example, in C#, the debugging object is changed from `TraceWriter` to `ILogger`. By creating a new version 3.x project, you start off with updated functions based on the latest version 3.x templates.
-
-### Changes in triggers and bindings after version 1.x
-
-Starting with version 2.x, you must install the extensions for specific triggers and bindings used by the functions in your app. The only exception for this HTTP and timer triggers, which don't require an extension. For more information, see [Register and install binding extensions](./functions-bindings-register.md).
+Before making a change to the major version of the runtime, you should first test your existing code by deploying to another function app running on the latest major version. This testing helps to make sure it runs correctly after the upgrade.
-There are also a few changes in the *function.json* or attributes of the function between versions. For example, the Event Hub `path` property is now `eventHubName`. See the [existing binding table](#bindings) for links to documentation for each binding.
-
-### Changes in features and functionality after version 1.x
+Downgrades from v3.x to v2.x aren't supported. When possible, you should always run your apps on the latest supported version of the Functions runtime.
-A few features were removed, updated, or replaced after version 1.x. This section details the changes you see in later versions after having used version 1.x.
+### Changing version of apps in Azure
-In version 2.x, the following changes were made:
+The version of the Functions runtime used by published apps in Azure is dictated by the [`FUNCTIONS_EXTENSION_VERSION`](functions-app-settings.md#functions_extension_version) application setting. The following major runtime version values are supported:
-* Keys for calling HTTP endpoints are always stored encrypted in Azure Blob storage. In version 1.x, keys were stored in Azure File storage by default. When upgrading an app from version 1.x to version 2.x, existing secrets that are in file storage are reset.
+| Value | Runtime target |
+| | -- |
+| `~3` | 3.x |
+| `~2` | 2.x |
+| `~1` | 1.x |
-* The version 2.x runtime doesn't include built-in support for webhook providers. This change was made to improve performance. You can still use HTTP triggers as endpoints for webhooks.
+>[!IMPORTANT]
+> Don't arbitrarily change this setting, because other app setting changes and changes to your function code may be required.
-* The host configuration file (host.json) should be empty or have the string `"version": "2.0"`.
+To learn more, see [How to target Azure Functions runtime versions](set-runtime-version.md).
-* To improve monitoring, the WebJobs dashboard in the portal, which used the [`AzureWebJobsDashboard`](functions-app-settings.md#azurewebjobsdashboard) setting is replaced with Azure Application Insights, which uses the [`APPINSIGHTS_INSTRUMENTATIONKEY`](functions-app-settings.md#appinsights_instrumentationkey) setting. For more information, see [Monitor Azure Functions](functions-monitoring.md).
+### Pinning to a specific minor version
-* All functions in a function app must share the same language. When you create a function app, you must choose a runtime stack for the app. The runtime stack is specified by the [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime) value in application settings. This requirement was added to improve footprint and startup time. When developing locally, you must also include this setting in the [local.settings.json file](functions-run-local.md#local-settings-file).
+If your function app has issues running on the latest major version, you can pin your app to a specific minor version. Pinning gives you time to get your app running correctly on the latest major version. The way that you pin to a minor version differs between Windows and Linux. To learn more, see [How to target Azure Functions runtime versions](set-runtime-version.md).
-* The default timeout for functions in an App Service plan is changed to 30 minutes. You can manually change the timeout back to unlimited by using the [functionTimeout](functions-host-json.md#functiontimeout) setting in host.json.
+Older minor versions are periodically removed from Functions. For the latest news about Azure Functions releases, including the removal of specific older minor versions, monitor [Azure App Service announcements](https://github.com/Azure/app-service-announcements/issues).
-* HTTP concurrency throttles are implemented by default for Consumption plan functions, with a default of 100 concurrent requests per instance. You can change this in the [`maxConcurrentRequests`](functions-host-json.md#http) setting in the host.json file.
+### Pinning to version ~2.0
-* Because of [.NET Core limitations](https://github.com/Azure/azure-functions-host/issues/3414), support for F# script (.fsx) functions has been removed. Compiled F# functions (.fs) are still supported.
+.NET function apps running on version 2.x (`~2`) are automatically upgraded to run on .NET Core 3.1, which is a long-term support version of .NET Core 3. Running your .NET functions on .NET Core 3.1 allows you to take advantage of the latest security updates and product enhancements.
-* The URL format of Event Grid trigger webhooks has been changed to `https://{app}/runtime/webhooks/{triggerName}`.
+Any function app pinned to `~2.0` continues to run on .NET Core 2.2, which no longer receives security and other updates. To learn more, see [Functions v2.x considerations](functions-dotnet-class-library.md#functions-v2x-considerations).
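+For example, you can pin an app to `~2.0` with the Azure CLI; `<function_app>` and `<resource_group>` are placeholders:
+
+```azurecli
+# Pin the function app so it keeps running on .NET Core 2.2.
+az functionapp config appsettings set \
+  --name <function_app> \
+  --resource-group <resource_group> \
+  --settings FUNCTIONS_EXTENSION_VERSION=~2.0
+```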
## Migrating from 2.x to 3.x
The following are the changes to be aware of before upgrading a 2.x app to 3.x.
* Node.js 8 is no longer supported and will not execute in 3.x functions.
-#### .NET
+#### .NET Core
+
+The main difference between versions when running .NET class library functions is the .NET Core runtime version. Functions version 2.x is designed to run on .NET Core 2.2, and version 3.x is designed to run on .NET Core 3.1.
* [Synchronous server operations are disabled by default](/dotnet/core/compatibility/2.2-3.0#http-synchronous-io-disabled-in-all-servers).
-### Changing version of apps in Azure
+* Breaking changes introduced by .NET Core in [version 3.1](/dotnet/core/compatibility/3.1) and [version 3.0](/dotnet/core/compatibility/3.0), which aren't specific to Functions but might still affect your app.
-The version of the Functions runtime used by published apps in Azure is dictated by the [`FUNCTIONS_EXTENSION_VERSION`](functions-app-settings.md#functions_extension_version) application setting. The following major runtime version values are supported:
+>[!NOTE]
+>Due to support issues with .NET Core 2.2, function apps pinned to version 2 (`~2`) are essentially running on .NET Core 3.1. To learn more, see [Functions v2.x compatibility mode](functions-dotnet-class-library.md#functions-v2x-considerations).
-| Value | Runtime target |
-| | -- |
-| `~3` | 3.x |
-| `~2` | 2.x |
-| `~1` | 1.x |
+## Migrating from 1.x to later versions
->[!IMPORTANT]
-> Don't arbitrarily change this setting, because other app setting changes and changes to your function code may be required.
+You may choose to migrate an existing app written to use the version 1.x runtime to instead use a newer version. Most of the changes you need to make are related to changes in the language runtime, such as C# API changes between .NET Framework 4.7 and .NET Core. You'll also need to make sure your code and libraries are compatible with the language runtime you choose. Finally, be sure to note any changes in triggers, bindings, and features highlighted below. For the best migration results, you should create a new function app in a new version and port your existing version 1.x function code to the new app.
+
+While it's possible to do an "in-place" upgrade by manually updating the app configuration, going from 1.x to a higher version includes some breaking changes. For example, in C#, the debugging object is changed from `TraceWriter` to `ILogger`. By creating a new version 3.x project, you start off with updated functions based on the latest version 3.x templates.
+
+### Changes in triggers and bindings after version 1.x
+
+Starting with version 2.x, you must install the extensions for specific triggers and bindings used by the functions in your app. The only exceptions are HTTP and timer triggers, which don't require an extension. For more information, see [Register and install binding extensions](./functions-bindings-register.md).
+
+There are also a few changes in the *function.json* or attributes of the function between versions. For example, the Event Hub `path` property is now `eventHubName`. See the [existing binding table](#bindings) for links to documentation for each binding.
+
+### Changes in features and functionality after version 1.x
+
+A few features were removed, updated, or replaced after version 1.x. This section details the changes you see in later versions after having used version 1.x.
+
+In version 2.x, the following changes were made:
+
+* Keys for calling HTTP endpoints are always stored encrypted in Azure Blob storage. In version 1.x, keys were stored in Azure File storage by default. When upgrading an app from version 1.x to version 2.x, existing secrets that are in file storage are reset.
+
+* The version 2.x runtime doesn't include built-in support for webhook providers. This change was made to improve performance. You can still use HTTP triggers as endpoints for webhooks.
+
+* The host configuration file (host.json) should be empty or have the string `"version": "2.0"`.
+
+* To improve monitoring, the WebJobs dashboard in the portal, which used the [`AzureWebJobsDashboard`](functions-app-settings.md#azurewebjobsdashboard) setting is replaced with Azure Application Insights, which uses the [`APPINSIGHTS_INSTRUMENTATIONKEY`](functions-app-settings.md#appinsights_instrumentationkey) setting. For more information, see [Monitor Azure Functions](functions-monitoring.md).
+
+* All functions in a function app must share the same language. When you create a function app, you must choose a runtime stack for the app. The runtime stack is specified by the [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime) value in application settings. This requirement was added to improve footprint and startup time. When developing locally, you must also include this setting in the [local.settings.json file](functions-run-local.md#local-settings-file).
+
+* The default timeout for functions in an App Service plan is changed to 30 minutes. You can manually change the timeout back to unlimited by using the [functionTimeout](functions-host-json.md#functiontimeout) setting in host.json.
+
+* HTTP concurrency throttles are implemented by default for Consumption plan functions, with a default of 100 concurrent requests per instance. You can change this in the [`maxConcurrentRequests`](functions-host-json.md#http) setting in the host.json file.
+
+* Because of [.NET Core limitations](https://github.com/Azure/azure-functions-host/issues/3414), support for F# script (.fsx) functions has been removed. Compiled F# functions (.fs) are still supported.
+
+* The URL format of Event Grid trigger webhooks has been changed to `https://{app}/runtime/webhooks/{triggerName}`.
### Locally developed application versions
You can make the following updates to function apps to locally change the target
In Visual Studio, you select the runtime version when you create a project. Azure Functions tools for Visual Studio supports the three major runtime versions. The correct version is used when debugging and publishing based on project settings. The version settings are defined in the `.csproj` file in the following properties:
-##### Version 1.x
+##### Version 3.x
```xml
-<TargetFramework>net472</TargetFramework>
-<AzureFunctionsVersion>v1</AzureFunctionsVersion>
+<TargetFramework>netcoreapp3.1</TargetFramework>
+<AzureFunctionsVersion>v3</AzureFunctionsVersion>
```
+> [!NOTE]
+> Azure Functions 3.x and .NET require the `Microsoft.NET.Sdk.Functions` extension to be at least version `3.0.0`.
+
##### Version 2.x

```xml
In Visual Studio, you select the runtime version when you create a project. Azur
<AzureFunctionsVersion>v2</AzureFunctionsVersion> ```
-##### Version 3.x
+##### Version 1.x
```xml
-<TargetFramework>netcoreapp3.1</TargetFramework>
-<AzureFunctionsVersion>v3</AzureFunctionsVersion>
+<TargetFramework>net472</TargetFramework>
+<AzureFunctionsVersion>v1</AzureFunctionsVersion>
```
-> [!NOTE]
-> Azure Functions 3.x and .NET requires the `Microsoft.NET.Sdk.Functions` extension be at least `3.0.0`.
- ###### Updating 2.x apps to 3.x in Visual Studio
-You can open an existing function targeting 2.x and move to 3.x by editing the `.csproj` file and updating the values above. Visual Studio manages runtime versions automatically for you based on project metadata. However, it's possible if you have never created a 3.x app before that Visual Studio doesn't yet have the templates and runtime for 3.x on your machine. This may present itself with an error like "no Functions runtime available that matches the version specified in the project." To fetch the latest templates and runtime, go through the experience to create a new function project. When you get to the version and template select screen, wait for Visual Studio to complete fetching the latest templates. Once the latest .NET Core 3 templates are available and displayed you should be able to run and debug any project configured for version 3.x.
+You can open an existing function project targeting 2.x and move it to 3.x by editing the `.csproj` file and updating the values above. Visual Studio manages runtime versions automatically for you based on project metadata. However, if you've never created a 3.x app before, Visual Studio may not yet have the templates and runtime for 3.x on your machine. This may present itself as an error like "no Functions runtime available that matches the version specified in the project." To fetch the latest templates and runtime, go through the experience to create a new function project. When you get to the version and template selection screen, wait for Visual Studio to complete fetching the latest templates. After the latest .NET Core 3 templates are available and displayed, you can run and debug any project configured for version 3.x.
> [!IMPORTANT]
> Version 3.x functions can only be developed in Visual Studio if using Visual Studio version 16.4 or newer.

#### VS Code and Azure Functions Core Tools
-[Azure Functions Core Tools](functions-run-local.md) is used for command line development and also by the [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code. To develop against version 3.x, install version 3.x of the Core Tools. Version 2.x development requires version 2.x of the Core Tools, and so on. For more information, see [Install the Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools).
+[Azure Functions Core Tools](functions-run-local.md) is used for command-line development and also by the [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code. To develop against version 3.x, install version 3.x of the Core Tools. Version 2.x development requires version 2.x of the Core Tools, and so on. For more information, see [Install the Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools).
For Visual Studio Code development, you may also need to update the user setting for the `azureFunctions.projectRuntime` to match the version of the tools installed. This setting also updates the templates and languages used during function app creation. To create apps in `~3` you would update the `azureFunctions.projectRuntime` user setting to `~3`.
azure-functions Set Runtime Version https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/set-runtime-version.md
Last updated 07/22/2020
# How to target Azure Functions runtime versions
-A function app runs on a specific version of the Azure Functions runtime. There are three major versions: [1.x, 2.x, and 3.x](functions-versions.md). By default, function apps are created in version 3.x of the runtime. This article explains how to configure a function app in Azure to run on the version you choose. For information about how to configure a local development environment for a specific version, see [Code and test Azure Functions locally](functions-run-local.md).
+A function app runs on a specific version of the Azure Functions runtime. There are three major versions: [3.x, 2.x, and 1.x](functions-versions.md). By default, function apps are created in version 3.x of the runtime. This article explains how to configure a function app in Azure to run on the version you choose. For information about how to configure a local development environment for a specific version, see [Code and test Azure Functions locally](functions-run-local.md).
-The way that you manually target a specific version depends on whether you are running Windows or Linux.
+The way that you manually target a specific version depends on whether you're running Windows or Linux.
## Automatic and manual version updates
_This section doesn't apply when running your function app [on Linux](#manual-ve
Azure Functions lets you target a specific version of the runtime on Windows by using the `FUNCTIONS_EXTENSION_VERSION` application setting in a function app. The function app is kept on the specified major version until you explicitly choose to move to a new version. If you specify only the major version, the function app is automatically updated to new minor versions of the runtime when they become available. New minor versions shouldn't introduce breaking changes.
-If you specify a minor version (for example, "2.0.12345"), the function app is pinned to that specific version until you explicitly change it. Older minor versions are regularly removed from the production environment. After this occurs, your function app runs on the latest version instead of the version set in `FUNCTIONS_EXTENSION_VERSION`. Because of this, you should quickly resolve any issues with your function app that require a specific minor version, so that you can instead target the major version. Minor version removals are announced in [App Service announcements](https://github.com/Azure/app-service-announcements/issues).
+If you specify a minor version (for example, "2.0.12345"), the function app is pinned to that specific version until you explicitly change it. Older minor versions are regularly removed from the production environment. If your minor version gets removed, your function app goes back to running on the latest version instead of the version set in `FUNCTIONS_EXTENSION_VERSION`. As such, you should quickly resolve any issues with your function app that require a specific minor version. Then, you can return to targeting the major version. Minor version removals are announced in [App Service announcements](https://github.com/Azure/app-service-announcements/issues).
> [!NOTE]
> If you pin to a specific major version of Azure Functions, and then try to publish to Azure using Visual Studio, a dialog window will pop up prompting you to update to the latest version or cancel the publish. To avoid this, add the `<DisableFunctionExtensionVersionUpdate>true</DisableFunctionExtensionVersionUpdate>` property in your `.csproj` file.
The following table shows the `FUNCTIONS_EXTENSION_VERSION` values for each majo
A change to the runtime version causes a function app to restart.
+>[!NOTE]
+>.NET Function apps pinned to `~2.0` opt out of the automatic upgrade to .NET Core 3.1. To learn more, see [Functions v2.x considerations](functions-dotnet-class-library.md#functions-v2x-considerations).
+ ## View and update the current runtime version

_This section doesn't apply when running your function app [on Linux](#manual-version-updates-on-linux)._
-You can change the runtime version used by your function app. Because of the potential of breaking changes, you can only change the runtime version before you have created any functions in your function app.
+You can change the runtime version used by your function app. Because of the potential for breaking changes, you can only change the runtime version before you've created any functions in your function app.
> [!IMPORTANT]
-> Although the runtime version is determined by the `FUNCTIONS_EXTENSION_VERSION` setting, you should make this change in the Azure portal and not by changing the setting directly. This is because the portal validates your changes and makes other related changes as needed.
+> Although the runtime version is determined by the `FUNCTIONS_EXTENSION_VERSION` setting, you should only make this change in the Azure portal and not by changing the setting directly. This is because the portal validates your changes and makes other related changes as needed.
# [Portal](#tab/portal)
az functionapp config appsettings set --name <FUNCTION_APP> \
Replace `<FUNCTION_APP>` with the name of your function app, `<RESOURCE_GROUP>` with the name of the resource group for your function app, and `<VERSION>` with either a specific version or `~3`, `~2`, or `~1`.
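For reference, a complete form of the command might look like the following sketch. The placeholders are yours to substitute, and the `--settings` flag uses the standard Azure CLI `KEY=VALUE` syntax:

```azurecli-interactive
# Pin the Functions runtime major version (for example, FUNCTIONS_EXTENSION_VERSION=~3)
az functionapp config appsettings set --name <FUNCTION_APP> \
--resource-group <RESOURCE_GROUP> \
--settings FUNCTIONS_EXTENSION_VERSION=<VERSION>
```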
-You can run this command from the [Azure Cloud Shell](../cloud-shell/overview.md) by choosing **Try it** in the preceding code sample. You can also use the [Azure CLI locally](/cli/azure/install-azure-cli) to execute this command after executing [az login](/cli/azure/reference-index#az-login) to sign in.
+Choose **Try it** in the previous code example to run the command in [Azure Cloud Shell](../cloud-shell/overview.md). You can also run the [Azure CLI locally](/cli/azure/install-azure-cli) to execute this command. When running locally, you must first run [az login](/cli/azure/reference-index#az-login) to sign in.
# [PowerShell](#tab/powershell)
Set `LinuxFxVersion` to `DOCKER|mcr.microsoft.com/azure-functions/node:3.0.13142
For **Linux Consumption apps**, set `LinuxFxVersion` to `DOCKER|mcr.microsoft.com/azure-functions/mesh:3.0.13142-node10`.
+# [Portal](#tab/portal)
-# [Azure CLI](#tab/azurecli-linux)
+Viewing and modifying site config settings for function apps isn't supported in the Azure portal. Use the Azure CLI instead.
+
+# [Azure CLI](#tab/azurecli)
-You can view and set the `LinuxFxVersion` from the Azure CLI.
+You can view and set the `LinuxFxVersion` by using the Azure CLI.
-Using the Azure CLI, view the current runtime version with the [az functionapp config show](/cli/azure/functionapp/config) command.
+To view the current runtime version, use the [az functionapp config show](/cli/azure/functionapp/config) command.
```azurecli-interactive
az functionapp config show --name <function_app> \
--resource-group <my_resource_group> --query 'linuxFxVersion' -o tsv
```
-In this code, replace `<function_app>` with the name of your function app. Also replace `<my_resource_group>` with the name of the resource group for your function app.
-
-You see the `linuxFxVersion` in the following output, which has been truncated for clarity:
+In this code, replace `<function_app>` with the name of your function app. Also replace `<my_resource_group>` with the name of the resource group for your function app. The current value of `linuxFxVersion` is returned.
-```output
-{
- ...
-
- "kind": null,
- "limits": null,
- "linuxFxVersion": <LINUX_FX_VERSION>,
- "loadBalancing": "LeastRequests",
- "localMySqlEnabled": false,
- "location": "West US",
- "logsDirectorySizeLimit": 35,
- ...
-}
-```
-
-You can update the `linuxFxVersion` setting in the function app with the [az functionapp config set](/cli/azure/functionapp/config) command.
+To update the `linuxFxVersion` setting in the function app, use the [az functionapp config set](/cli/azure/functionapp/config) command.
```azurecli-interactive
az functionapp config set --name <FUNCTION_APP> \
--resource-group <RESOURCE_GROUP> \
--linux-fx-version <LINUX_FX_VERSION>
```
-Replace `<FUNCTION_APP>` with the name of your function app. Also replace `<RESOURCE_GROUP>` with the name of the resource group for your function app. Also, replace `<LINUX_FX_VERSION>` with the values explained above.
+Replace `<FUNCTION_APP>` with the name of your function app. Also replace `<RESOURCE_GROUP>` with the name of the resource group for your function app. Also, replace `<LINUX_FX_VERSION>` with the value of a specific image as described above.
You can run this command from the [Azure Cloud Shell](../cloud-shell/overview.md) by choosing **Try it** in the preceding code sample. You can also use the [Azure CLI locally](/cli/azure/install-azure-cli) to execute this command after executing [az login](/cli/azure/reference-index#az-login) to sign in.
+# [PowerShell](#tab/powershell)
-Similarly, the function app restarts after the change is made to the site config.
-
-> [!NOTE]
-> Note that setting `LinuxFxVersion` to image url directly for consumption apps will opt them out of placeholders and other cold start optimizations.
+Azure PowerShell can't be used to set the `linuxFxVersion` at this time. Use the Azure CLI instead.
+The function app restarts after the change is made to the site config.
+
+> [!NOTE]
+> For apps running in a Consumption plan, setting `LinuxFxVersion` to a specific image may increase cold start times. This is because pinning to a specific image prevents Functions from using some cold start optimizations.
+ ## Next steps > [!div class="nextstepaction"]
azure-government Azure Secure Isolation Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/azure-secure-isolation-guidance.md
Previously updated : 10/21/2020 Last updated : 03/04/2021
# Azure guidance for secure isolation

Microsoft Azure is a hyperscale public multi-tenant cloud services platform that provides customers with access to a feature-rich environment incorporating the latest cloud innovations such as artificial intelligence, machine learning, IoT services, big-data analytics, intelligent edge, and many more to help customers increase efficiency and unlock insights into their operations and performance.
-A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent customers from accessing one another's data or applications.
+A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent customers from accessing one another's data or applications.
Azure addresses the perceived risk of resource sharing by providing a trustworthy foundation for assuring multi-tenant, cryptographically certain, logically isolated cloud services using a common set of principles: (1) user access controls with authentication and identity separation, (2) compute isolation for processing, (3) networking isolation including data encryption in transit, (4) storage isolation with data encryption at rest, and (5) security assurance processes embedded in service design to correctly develop logically isolated services.
-This article provides technical guidance to address common security and isolation concerns pertinent to cloud adoption. It also explores design principles and technologies available in Azure to help customers achieve their secure isolation objectives.
+This article provides technical guidance to address common security and isolation concerns pertinent to cloud adoption. It also explores design principles and technologies available in Azure to help customers achieve their secure isolation objectives.
## Executive summary

Microsoft Azure is a hyperscale public multi-tenant cloud services platform that provides customers with access to a feature-rich environment incorporating the latest cloud innovations such as artificial intelligence, machine learning, IoT services, big-data analytics, intelligent edge, and many more to help customers increase efficiency and unlock insights into their operations and performance.
-A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent customers from accessing one another's data or applications.
+A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent customers from accessing one another's data or applications.
-Multi-tenancy in the public cloud improves efficiency by multiplexing resources among disparate customers at low costs; however, this approach introduces the perceived risk associated with resource sharing. Azure addresses this risk by providing a trustworthy foundation for assuring multi-tenant, cryptographically certain, logically isolated cloud services using a multi-layered approach depicted in Figure 1.
+Multi-tenancy in the public cloud improves efficiency by multiplexing resources among disparate customers at low costs; however, this approach introduces the perceived risk associated with resource sharing. Azure addresses this risk by providing a trustworthy foundation for assuring multi-tenant, cryptographically certain, logically isolated cloud services using a multi-layered approach depicted in Figure 1.
:::image type="content" source="./media/secure-isolation-fig1.png" alt-text="Azure isolation approaches" border="false":::
**Figure 1.** Azure isolation approaches

A brief summary of isolation approaches is provided below.
-- **User access controls with authentication and identity separation** – All data in Azure irrespective of the type or storage location is associated with a subscription. A cloud tenant can be viewed as a dedicated instance of Azure Active Directory (Azure AD) that customer organization receives and owns when they sign up for a Microsoft cloud service. The identity and access stack helps enforce isolation among subscriptions, including limiting access to resources within a subscription only to authorized users.
-- **Compute isolation** – Azure provides customers with both logical and physical compute isolation for processing. Logical isolation is implemented via:
+- **User access controls with authentication and identity separation** – All data in Azure irrespective of the type or storage location is associated with a subscription. A cloud tenant can be viewed as a dedicated instance of Azure Active Directory (Azure AD) that a customer organization receives and owns when they sign up for a Microsoft cloud service. The identity and access stack helps enforce isolation among subscriptions, including limiting access to resources within a subscription only to authorized users.
+- **Compute isolation** – Azure provides customers with both logical and physical compute isolation for processing. Logical isolation is implemented via:
- *Hypervisor isolation* for services that provide cryptographically certain isolation by using separate virtual machines and leveraging Azure Hypervisor isolation.
- - *Drawbridge isolation* inside a Virtual Machine (VM) for services that provide cryptographically certain isolation for workloads running on the same virtual machine by leveraging isolation provided by [Drawbridge](https://www.microsoft.com/research/project/drawbridge/). These services provide small units of processing using customer code.
+ - *Drawbridge isolation* inside a virtual machine (VM) for services that provide cryptographically certain isolation for workloads running on the same virtual machine by leveraging isolation provided by [Drawbridge](https://www.microsoft.com/research/project/drawbridge/). These services provide small units of processing using customer code.
- *User context-based isolation* for services that are comprised solely of Microsoft-controlled code and customer code is not allowed to run. </br> In addition to robust logical compute isolation available by design to all Azure tenants, customers who desire physical compute isolation can utilize Azure Dedicated Host or Isolated Virtual Machines, which are deployed on server hardware dedicated to a single customer.
-- **Networking isolation** – Azure Virtual Network (VNet) helps ensure that each customer's private network traffic is logically isolated from traffic belonging to other customers. Services can communicate using public IPs or private (VNet) IPs. Communication between customer VMs remains private within a VNet. Customers can connect their VNets via [VNet peering](../virtual-network/virtual-network-peering-overview.md) or [VPN gateways](../vpn-gateway/vpn-gateway-about-vpngateways.md), depending on their connectivity options, including bandwidth, latency, and encryption requirements. Customers can use [Network Security Groups](../virtual-network/network-security-groups-overview.md) (NSGs) to achieve network isolation and protect their Azure resources from the Internet while accessing Azure services that have public endpoints. Customers can use Virtual network [service tags](../virtual-network/service-tags-overview.md) to define network access controls on [network security groups](../virtual-network/network-security-groups-overview.md#security-rules) or [Azure Firewall](../firewall/service-tags.md). A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, thereby reducing the complexity of frequent updates to network security rules. Moreover, customers can use [Azure Private Link](../private-link/private-link-overview.md) to access Azure PaaS services over a private endpoint in their VNet, ensuring that traffic between their VNet and the service travels across the Microsoft global backbone network, which eliminates the need to expose the service to the public Internet. Finally, Azure provides customers with options to encrypt data in transit, including [Transport Layer Security (TLS) end-to-end encryption](../application-gateway/ssl-overview.md) of network traffic with [TLS termination using Azure Key Vault certificates](../application-gateway/key-vault-certs.md), [VPN encryption](../vpn-gateway/vpn-gateway-about-compliance-crypto.md) using IPsec, and ExpressRoute encryption using [MACsec with customer-managed keys (CMK) support](../expressroute/expressroute-about-encryption.md#point-to-point-encryption-by-macsec-faq).
-- **Storage isolation** – To ensure cryptographic certainty of logical data isolation, Azure Storage relies on data encryption at rest using advanced algorithms with multiple ciphers. This process relies on multiple encryption keys, as well as services such as Azure Key Vault and Azure AD to ensure secure key access and centralized key management. Azure Storage Service Encryption (SSE) ensures that data is automatically encrypted before persisting it to Azure Storage and decrypted before retrieval. All data written to Azure Storage is [encrypted through FIPS 140-2 validated 256-bit AES encryption](../storage/common/storage-service-encryption.md#about-azure-storage-encryption) and customers have the option to use Azure Key Vault for customer-managed keys (CMK).
Azure SSE encrypts the page blobs that store Azure Virtual Machine disks. Additionally, Azure Disk Encryption (ADE) may optionally be used to encrypt Azure Windows and Linux IaaS Virtual Machine disks to increase storage isolation and assure cryptographic certainty of customer data stored in Azure. This encryption includes managed disks.
-- **Security assurance processes and practices** – Azure isolation assurance is further enforced by Microsoft's internal use of the [Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) (SDL) and other strong security assurance processes to protect attack surfaces and mitigate threats. Microsoft has established industry-leading processes and tooling that provides high confidence in the Azure isolation guarantee.
+- **Networking isolation** – Azure Virtual Network (VNet) helps ensure that each customer's private network traffic is logically isolated from traffic belonging to other customers. Services can communicate using public IPs or private (VNet) IPs. Communication between customer VMs remains private within a VNet. Customers can connect their VNets via [VNet peering](../virtual-network/virtual-network-peering-overview.md) or [VPN gateways](../vpn-gateway/vpn-gateway-about-vpngateways.md), depending on their connectivity options, including bandwidth, latency, and encryption requirements. Customers can use [network security groups](../virtual-network/network-security-groups-overview.md) (NSGs) to achieve network isolation and protect their Azure resources from the Internet while accessing Azure services that have public endpoints. Customers can use Virtual Network [service tags](../virtual-network/service-tags-overview.md) to define network access controls on [network security groups](../virtual-network/network-security-groups-overview.md#security-rules) or [Azure Firewall](../firewall/service-tags.md). A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, thereby reducing the complexity of frequent updates to network security rules. Moreover, customers can use [Azure Private Link](../private-link/private-link-overview.md) to access Azure PaaS services over a private endpoint in their VNet, ensuring that traffic between their VNet and the service travels across the Microsoft global backbone network, which eliminates the need to expose the service to the public Internet. Finally, Azure provides customers with options to encrypt data in transit, including [Transport Layer Security (TLS) end-to-end encryption](../application-gateway/ssl-overview.md) of network traffic with [TLS termination using Azure Key Vault certificates](../application-gateway/key-vault-certs.md), [VPN encryption](../vpn-gateway/vpn-gateway-about-compliance-crypto.md) using IPsec, and ExpressRoute encryption using [MACsec with customer-managed keys (CMK) support](../expressroute/expressroute-about-encryption.md#point-to-point-encryption-by-macsec-faq).
+- **Storage isolation** – To ensure cryptographic certainty of logical data isolation, Azure Storage relies on data encryption at rest using advanced algorithms with multiple ciphers. This process relies on multiple encryption keys, as well as services such as Azure Key Vault and Azure AD to ensure secure key access and centralized key management. Azure Storage service encryption ensures that data is automatically encrypted before persisting it to Azure Storage and decrypted before retrieval. All data written to Azure Storage is [encrypted through FIPS 140-2 validated 256-bit AES encryption](../storage/common/storage-service-encryption.md#about-azure-storage-encryption) and customers have the option to use Azure Key Vault for customer-managed keys (CMK). Azure Storage service encryption encrypts the page blobs that store Azure Virtual Machine disks. Additionally, Azure Disk encryption may optionally be used to encrypt Azure Windows and Linux IaaS Virtual Machine disks to increase storage isolation and assure cryptographic certainty of customer data stored in Azure. This encryption includes managed disks.
+- **Security assurance processes and practices** – Azure isolation assurance is further enforced by Microsoft's internal use of the [Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) (SDL) and other strong security assurance processes to protect attack surfaces and mitigate threats. Microsoft has established industry-leading processes and tooling that provides high confidence in the Azure isolation guarantee.
-In line with the [shared responsibility](../security/fundamentals/shared-responsibility.md) model in cloud computing, as customer workloads get migrated from an on-premises datacenter to the cloud, the delineation of responsibility between the customer and cloud service provider varies depending on the cloud service model. For example, with the Infrastructure as a Service (IaaS) model, Microsoft's responsibility ends at the Hypervisor layer, and customers are responsible for all layers above the virtualization layer, including maintaining the base operating system in guest VMs. Customers can leverage Azure isolation technologies to achieve the desired level of isolation for their applications and data deployed in the cloud.
+In line with the [shared responsibility](../security/fundamentals/shared-responsibility.md) model in cloud computing, as customer workloads get migrated from an on-premises datacenter to the cloud, the delineation of responsibility between the customer and cloud service provider varies depending on the cloud service model. For example, with the Infrastructure as a Service (IaaS) model, Microsoft's responsibility ends at the Hypervisor layer, and customers are responsible for all layers above the virtualization layer, including maintaining the base operating system in guest VMs. Customers can leverage Azure isolation technologies to achieve the desired level of isolation for their applications and data deployed in the cloud.
-Throughout this article, call-out boxes outline important considerations or actions considered to be part of customer's responsibility. For example, customers can use Azure Key Vault to store their secrets, including encryption keys that remain under customer control.
+Throughout this article, call-out boxes outline important considerations or actions considered to be part of customer's responsibility. For example, customers can use Azure Key Vault to store their secrets, including encryption keys that remain under customer control.
> [!NOTE]
> Use of Azure Key Vault for customer-managed keys (CMK) is optional and represents the customer's responsibility.
Throughout this article, call-out boxes outline important considerations or acti
> *Additional resources:*
> - How to **[get started with Key Vault certificates](../key-vault/certificates/certificate-scenarios.md)**
-This article provides technical guidance to address common security and isolation concerns pertinent to cloud adoption. It also explores design principles and technologies available in Azure to help customers achieve their secure isolation objectives.
+This article provides technical guidance to address common security and isolation concerns pertinent to cloud adoption. It also explores design principles and technologies available in Azure to help customers achieve their secure isolation objectives.
> [!TIP]
-> For recommendations on how to improve the security of applications and data deployed in Azure, customers should review the **[Azure Security Benchmark](../security/benchmarks/overview.md)**.
+> For recommendations on how to improve the security of applications and data deployed in Azure, customers should review the **[Azure Security Benchmark](../security/benchmarks/index.yml)**.
## Identity-based isolation

[Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) is an identity repository and cloud service that provides authentication, authorization, and access control for an organization's users, groups, and objects. Azure AD can be used as a standalone cloud directory or as an integrated solution with existing on-premises Active Directory to enable key enterprise features such as directory synchronization and single sign-on.
-Each Azure [subscription](/azure/cloud-adoption-framework/decision-guides/subscriptions/) is associated with an Azure AD tenant. Using [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md), users, groups, and applications from that directory can be granted access to resources in the Azure subscription. For example, a storage account can be placed in a resource group to control access to that specific storage account using Azure AD. Azure Storage defines a set of Azure built-in roles that encompass common permissions used to access blob or queue data. A request to Azure Storage can be authorized using either customer's Azure AD account or the Storage Account Key. In this manner, only specific users can be given the ability to access data in Azure Storage.
+Each Azure [subscription](/azure/cloud-adoption-framework/decision-guides/subscriptions/) is associated with an Azure AD tenant. Using [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md), users, groups, and applications from that directory can be granted access to resources in the Azure subscription. For example, a storage account can be placed in a resource group to control access to that specific storage account using Azure AD. Azure Storage defines a set of Azure built-in roles that encompass common permissions used to access blob or queue data. A request to Azure Storage can be authorized using either the customer's Azure AD account or the Storage Account Key. In this manner, only specific users can be given the ability to access data in Azure Storage.
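As an illustrative sketch, assuming the built-in `Storage Blob Data Reader` role and placeholder identities and scopes, such an assignment can be created with the Azure CLI:

```azurecli-interactive
# Grant a user read access to blob data in a single storage account
az role assignment create \
  --assignee user@contoso.com \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT>"
```

Scoping the assignment to a single storage account, rather than the subscription or resource group, keeps the granted permissions as narrow as possible.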
-All data in Azure irrespective of the type or storage location is associated with a subscription. A cloud tenant can be viewed as a dedicated instance of Azure AD that customer organization receives and owns when they sign up for a Microsoft cloud service. Authentication to the Azure portal is performed through Azure AD using an identity created either in Azure AD or federated with an on-premises Active Directory. The identity and access stack helps enforce isolation among subscriptions, including limiting access to resources within a subscription only to authorized users. This access restriction is an overarching goal of the [Zero Trust model](https://aka.ms/Zero-Trust), which assumes that the network is compromised and requires a fundamental shift from the perimeter security model. When evaluating access requests, all requesting users, devices, and applications should be considered untrusted until their integrity can be validated in line with the Zero Trust [design principles](https://www.microsoft.com/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/). Azure AD provides the strong, adaptive, standards-based identity verification required in a Zero Trust framework.
+### Zero Trust architecture
-> [!TIP]
-> To learn more about how to implement Zero Trust architecture on Azure, read the **[6-part blog series](https://devblogs.microsoft.com/azuregov/implementing-zero-trust-with-microsoft-azure-identity-and-access-management-1-of-6/)**.
+All data in Azure irrespective of the type or storage location is associated with a subscription. A cloud tenant can be viewed as a dedicated instance of Azure AD that a customer organization receives and owns when they sign up for a Microsoft cloud service. Authentication to the Azure portal is performed through Azure AD using an identity created either in Azure AD or federated with an on-premises Active Directory. The identity and access stack helps enforce isolation among subscriptions, including limiting access to resources within a subscription only to authorized users. This access restriction is an overarching goal of the [Zero Trust model](https://aka.ms/Zero-Trust), which assumes that the network is compromised and requires a fundamental shift from the perimeter security model. When evaluating access requests, all requesting users, devices, and applications should be considered untrusted until their integrity can be validated in line with the Zero Trust [design principles](https://www.microsoft.com/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/). Azure AD provides the strong, adaptive, standards-based identity verification required in a Zero Trust framework.
+
+> [!NOTE]
+> Additional resources:
+>
+> - To learn more about how to implement Zero Trust architecture on Azure, read the **[6-part blog series](https://devblogs.microsoft.com/azuregov/implementing-zero-trust-with-microsoft-azure-identity-and-access-management-1-of-6/)**.
+> - For definitions and general deployment models, see **[NIST SP 800-207](https://csrc.nist.gov/publications/detail/sp/800-207/final)** *Zero Trust Architecture*.
### Azure Active Directory
-The separation of the accounts used to administer cloud applications is critical to achieving logical isolation. Account isolation in Azure is achieved using Azure Active Directory (Azure AD) and its capabilities to support granular Azure role-based access control (Azure RBAC). Each Azure account is associated with one Azure AD tenant. Users, groups, and applications from that directory can manage resources in Azure. Customers can assign appropriate access rights using the Azure portal, Azure command-line tools, and Azure Management APIs. Each Azure AD tenant is distinct and separate from other Azure ADs. An Azure AD instance is logically isolated using security boundaries to prevent customer data and identity information from comingling, thereby ensuring that users and administrators of one Azure AD cannot access or compromise data in another Azure AD instance, either maliciously or accidentally. Azure AD runs physically isolated on dedicated servers that are logically isolated to a dedicated network segment and where host-level packet filtering and Windows Firewall services provide additional protections from untrusted traffic.
+The separation of the accounts used to administer cloud applications is critical to achieving logical isolation. Account isolation in Azure is achieved using [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) and its capabilities to support granular [Azure role-based access control](../role-based-access-control/overview.md) (Azure RBAC). Each Azure account is associated with one Azure AD tenant. Users, groups, and applications from that directory can manage resources in Azure. Customers can assign appropriate access rights using the Azure portal, Azure command-line tools, and Azure Management APIs. Each Azure AD tenant is distinct and separate from other Azure ADs. An Azure AD instance is logically isolated using security boundaries to prevent customer data and identity information from co-mingling, thereby ensuring that users and administrators of one Azure AD cannot access or compromise data in another Azure AD instance, either maliciously or accidentally. Azure AD runs physically isolated on dedicated servers that are logically isolated to a dedicated network segment and where host-level packet filtering and Windows Firewall services provide additional protections from untrusted traffic.
Azure AD implements extensive **data protection features**, including tenant isolation and access control, data encryption in transit, secrets encryption and management, disk level encryption, advanced cryptographic algorithms used by various Azure AD components, data operational considerations for insider access, and more. Detailed information is available from a whitepaper [Active Directory Data Security Considerations](https://aka.ms/AADDataWhitePaper).
Tenant isolation in Azure AD involves two primary elements:
- Preventing data leakage and access across tenants, which means that data belonging to Tenant A cannot in any way be obtained by users in Tenant B without explicit authorization by Tenant A.
- Resource access isolation across tenants, which means that operations performed by Tenant A cannot in any way impact access to resources for Tenant B.
-As shown in Figure 2, access via Azure AD requires user authentication through a Security Token Service (STS). The authorization system uses information on the user's existence and enabled state (through the Directory Services API) and Azure RBAC to determine whether the requested access to the target Azure AD instance is authorized for the user in the session. Aside from token-based authentication that is tied directly to the user, Azure AD further supports logical isolation in Azure through:
+As shown in Figure 2, access via Azure AD requires user authentication through a Security Token Service (STS). The authorization system uses information on the user's existence and enabled state (through the Directory Services API) and Azure RBAC to determine whether the requested access to the target Azure AD instance is authorized for the user in the session. Aside from token-based authentication that is tied directly to the user, Azure AD further supports logical isolation in Azure through:
- Azure AD instances are discrete containers and there is no relationship between them.-- Azure AD data is stored in partitions and each partition has a pre-determined set of replicas that are considered the preferred primary replicas. Use of replicas provides high availability of Azure AD services to support identity separation and logical isolation.
+- Azure AD data is stored in partitions and each partition has a pre-determined set of replicas that are considered the preferred primary replicas. Use of replicas provides high availability of Azure AD services to support identity separation and logical isolation.
- Access is not permitted across Azure AD instances unless the Azure AD instance administrator grants it through federation or provisioning of user accounts from other Azure AD instances.
- Physical access to servers that comprise the Azure AD service and direct access to Azure AD's back-end systems is restricted to properly authorized Microsoft operational roles using a Just-In-Time (JIT) privileged access management system.
- Azure AD users have no access to physical assets or locations, and therefore it is not possible for them to bypass the logical Azure RBAC policy checks.
Azure has extensive support to safeguard customer data using [data encryption](.
- Server-side encryption that uses service-managed keys, customer-managed keys in Azure, or customer-managed keys on customer-controlled hardware.
- Client-side encryption that enables customers to manage and store keys on-premises or in another secure location.
-Data encryption provides isolation assurances that are tied directly to encryption key access. Since Azure uses strong ciphers for data encryption, only entities with access to encryption keys can have access to data. Deleting or revoking encryption keys renders the corresponding data inaccessible. More information about **data encryption in transit** is provided in *[Networking isolation](#networking-isolation)* section, whereas **data encryption at rest** is covered in *[Storage isolation](#storage-isolation)* section.
-
-Proper protection and management of encryption keys is essential for data security. Azure Key Vault is a multi-tenant key management service that Microsoft recommends for managing and controlling access to encryption keys when seamless integration with Azure services is required. Azure Key Vault enables customers to store their encryption keys in a Hardware Security Module (HSM). For customers who require single-tenant key management service, Microsoft provides Azure Dedicated HSM.
+Data encryption provides isolation assurances that are tied directly to encryption (cryptographic) key access. Since Azure uses strong ciphers for data encryption, only entities with access to cryptographic keys can have access to data. Deleting or revoking cryptographic keys renders the corresponding data inaccessible. More information about **data encryption in transit** is provided in the *[Networking isolation](#networking-isolation)* section, whereas **data encryption at rest** is covered in the *[Storage isolation](#storage-isolation)* section.
### Azure Key Vault
-[Azure Key Vault](../key-vault/general/overview.md) is a multi-tenant secrets management service that uses Hardware Security Modules (HSMs) to store and safeguard [secrets, encryption keys, and certificates](../key-vault/general/about-keys-secrets-certificates.md). Key Vault uses Federal Information Processing Standard (FIPS) 140-2 Level 2 validated HSMs, which meet security requirements covering 11 areas related to the design and implementation of a cryptographic module. For each area, the cryptographic module receives a security level rating 1 to 4 (from lowest to highest) depending on the requirements met. The Key Vault uses nCipher nShield family of HSMs that have an overall Security Level 2 rating (certificate [#2643](https://csrc.nist.gov/Projects/cryptographic-module-validation-program/Certificate/2643)), which includes requirements for physical tamper evidence and role-based authentication. However, it meets Security Level 3 rating for several areas, including physical security, electromagnetic interference / electromagnetic compatibility (EMI/EMC), design assurance, and roles, services, and authentication.
-The Azure Key Vault service provides an abstraction over the underlying HSMs. It provides a REST API to enable use from cloud applications and authentication through Azure AD to allow an organization to centralize and customize authentication, disaster recovery, high availability, and elasticity. Azure Key Vault supports RSA keys of sizes 2048-bit, 3072-bit and 4096-bit, as well as Elliptic Curve key types P-256, P-384, P-521, and P-256K (SECP256K1).
+Proper protection and management of cryptographic keys is essential for data security. **[Azure Key Vault](../key-vault/index.yml) is a cloud service for securely storing and managing secrets.** The Key Vault service supports two resource types that are described in the rest of this section:
-Azure Key Vault can handle requesting and renewing certificates, including Transport Layer Security (TLS) certificates, enabling customers to enroll and automatically renew certificates from supported public Certificate Authorities. Azure Key Vault certificates support provides for the management of customer's X.509 certificates, which are built on top of keys and provide an automated renewal feature. Certificate owner can [create a certificate](../key-vault/certificates/create-certificate.md) through Key Vault or by importing an existing certificate. Both self-signed and Certificate Authority generated certificates are supported. Moreover, the Key Vault certificate owner can implement secure storage and management of X.509 certificates without interaction with private keys.
+- **Vault** supports software-protected and hardware security module (HSM)-protected secrets, keys, and certificates.
+- **Managed HSM** supports only HSM-protected cryptographic keys.
-With Azure Key Vault, customers can [import or generate encryption keys](../key-vault/keys/hsm-protected-keys.md) in HSMs that never leave the HSM protection boundary to support Bring Your Own Key (BYOK) scenarios, as shown in Figure 3. **Keys generated inside the Azure Key Vault HSMs are not exportable – there can be no cleartext version of the key outside the HSMs.** This binding is enforced by the underlying HSM.
+**Customers who require additional security for their most sensitive customer data stored in Azure services can encrypt it using their own encryption keys they control in Azure Key Vault.**
-**Figure 3.** Azure Key Vault support for Bring Your Own Key (BYOK)
+The Azure Key Vault service provides an abstraction over the underlying HSMs. It provides a REST API to enable service use from cloud applications and authentication through [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) to allow an organization to centralize and customize authentication, disaster recovery, high availability, and elasticity. Azure Key Vault supports [cryptographic keys](../key-vault/keys/about-keys.md) of various types, sizes, and curves, including RSA and Elliptic Curve keys. With managed HSMs, support is also available for AES symmetric keys.
-**Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract customer cryptographic keys.**
+With Azure Key Vault, customers can import or generate encryption keys in HSMs, ensuring that keys never leave the HSM protection boundary to support *bring your own key* (BYOK) scenarios, as shown in Figure 3. **Keys generated inside the Azure Key Vault HSMs are not exportable – there can be no clear-text version of the key outside the HSMs.** This binding is enforced by the underlying HSM. BYOK functionality is available with both [key vaults](../key-vault/keys/hsm-protected-keys.md) and [managed HSMs](../key-vault/managed-hsm/hsm-protected-keys-byok.md). Methods for transferring HSM-protected keys to Azure Key Vault vary depending on the underlying HSM, as explained in online documentation.
-Azure Key Vault provides features for a robust solution for encryption key and certificate lifecycle management. Upon creation, every key vault is automatically associated with the Azure Active Directory (Azure AD) tenant that owns the subscription. Anyone trying to manage or retrieve content from a key vault must be authenticated by Azure AD, as described in Azure Key Vault [security overview](../key-vault/general/security-overview.md):
+**Figure 3.** Azure Key Vault support for bring your own key (BYOK)
-- Authentication establishes the identity of the caller (user or application).
-- Authorization determines which operations the caller can perform, based on a combination of Azure role-based access control (Azure RBAC) and Azure Key Vault policies.
+**Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract customer cryptographic keys.**
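As an illustrative sketch of the BYOK import step for a key vault, assuming a key transfer package has already been exported from an on-premises HSM (the file name is a placeholder):

```azurecli-interactive
# Import an HSM-protected key from a BYOK transfer file into a key vault
az keyvault key import --vault-name <VAULT_NAME> \
  --name <KEY_NAME> \
  --byok-file ./KeyTransferPackage.byok
```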
-Azure AD enforces tenant isolation and implements robust measures to prevent access by unauthorized parties, as described previously in *[Azure Active Directory](#azure-active-directory)* section. [Access to a key vault](../key-vault/general/secure-your-key-vault.md) is controlled through two interfaces or planes: management plane and data plane.
+Azure Key Vault provides a robust solution for encryption key lifecycle management. Upon creation, every key vault or managed HSM is automatically associated with the Azure AD tenant that owns the subscription. Anyone trying to manage or retrieve content from a key vault or managed HSM must be properly authenticated and authorized:
-- **Management plane** enables customers to manage Key Vault itself, e.g., create and delete key vaults, retrieve key vault properties, and update access policies.
-- **Data plane** enables customers to work with the data stored in their key vaults, including adding, deleting, and modifying their keys, secrets, and certificates.
+- Authentication establishes the identity of the caller (user or application).
+- Authorization determines which operations the caller can perform, based on a combination of [Azure role-based access control](../role-based-access-control/overview.md) (Azure RBAC) and key vault access policy or managed HSM local RBAC.
-To access a key vault in either plane, all callers (users or applications) must have proper authentication and authorization. Both planes use Azure AD for authentication. For authorization, the management plane uses Azure RBAC and the data plane uses Key Vault access policy.
+Azure AD enforces tenant isolation and implements robust measures to prevent access by unauthorized parties, as described previously in the *[Azure Active Directory](#azure-active-directory)* section. Access to a key vault or managed HSM is controlled through two interfaces or planes - management plane and data plane - with both planes using Azure AD for authentication.
-When customers create a key vault in a resource group, they can manage access by using Azure AD, which enables customers to grant access at a specific scope level by assigning the appropriate Azure roles. For example, to grant access to a user to manage key vaults, customers can assign a predefined key vault Contributor role to the user at a specific scope, including subscription, resource group, or specific resource.
+- **Management plane** enables customers to manage the key vault or managed HSM itself, for example, create and delete key vaults or managed HSMs, retrieve key vault or managed HSM properties, and update access policies. For authorization, the management plane uses Azure RBAC with both key vaults and managed HSMs.
+- **Data plane** enables customers to work with the data stored in their key vaults and managed HSMs, including adding, deleting, and modifying their data. For vaults, stored data can include keys, secrets, and certificates. For managed HSMs, stored data is limited to cryptographic keys only. For authorization, the data plane uses [Key Vault access policy](../key-vault/general/assign-access-policy-portal.md) and [Azure RBAC for data plane operations](../key-vault/general/rbac-guide.md) with key vaults, or [managed HSM local RBAC](../key-vault/managed-hsm/access-control.md) with managed HSMs.
-> [!IMPORTANT]
-> Customers should control tightly who has Contributor role access to their key vaults. If a user has Contributor permissions to a key vault management plane, the user can gain access to the data plane by setting a key vault access policy.
->
-> *Additional resources:*
-> - How to **[secure access to a key vault](../key-vault/general/secure-your-key-vault.md)**
+When you create a key vault or managed HSM in an Azure subscription, it's automatically associated with the Azure AD tenant of the subscription. All callers in both planes must register in this tenant and authenticate to access the [key vault](../key-vault/general/secure-your-key-vault.md) or [managed HSM](../key-vault/managed-hsm/access-control.md).
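As a minimal sketch, assuming the vault access policy model and placeholder names, data plane permissions might be granted like this:

```azurecli-interactive
# Allow a user to read secrets and use keys for signing in a vault
az keyvault set-policy --name <VAULT_NAME> \
  --upn user@contoso.com \
  --secret-permissions get list \
  --key-permissions get sign
```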
-Azure customers control access permissions and can extract detailed activity logs from the Azure Key Vault service. Azure Key Vault logs the following information:
+Azure customers control access permissions and can extract detailed activity logs from the Azure Key Vault service. Azure Key Vault logs the following information:
- All authenticated REST API requests, including failed requests
- Operations on the key vault such as creation, deletion, setting access policies, etc.
- - Operations on keys and secrets in the key vault, including a) creating, modifying, or deleting keys or secretes, and b) signing, verifying, encrypting keys, etc.
+ - Operations on keys and secrets in the key vault, including a) creating, modifying, or deleting keys or secrets, and b) signing, verifying, encrypting keys, etc.
- Unauthenticated requests such as requests that do not have a bearer token, are malformed or expired, or have an invalid token.

> [!NOTE]
-> After creating one or more key vaults, customers can monitor how and when their key vaults are accessed and by whom.
+> With Azure Key Vault, customers can monitor how and when their key vaults and managed HSMs are accessed and by whom.
>
> *Additional resources:*
> - **[Configure monitoring and alerting for Azure Key Vault](../key-vault/general/alert.md)**
> - **[Enable logging for Azure Key Vault](../key-vault/general/logging.md)**
-> - **[How to secure storage account for Azure Key Vault logs](../storage/blobs/security-recommendations.md)**
+> - **[How to secure storage account for Azure Key Vault logs](../storage/blobs/security-recommendations.md)**
-Customers can also use the [Azure Key Vault solution in Azure Monitor](../azure-monitor/insights/key-vault-insights-overview.md) to review Azure Key Vault logs. To use this solution, customers need to enable logging of Azure Key Vault diagnostics and direct the diagnostics to a Log Analytics workspace. With this solution, it is not necessary to write logs to Azure Blob storage.
+Customers can also use the [Azure Key Vault solution in Azure Monitor](../azure-monitor/insights/key-vault-insights-overview.md) to review Azure Key Vault logs. To use this solution, customers need to enable logging of Azure Key Vault diagnostics and direct the diagnostics to a Log Analytics workspace. With this solution, it is not necessary to write logs to Azure Blob storage.
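As a hedged sketch of wiring up that logging, assuming placeholder resource IDs and the `AuditEvent` log category that Key Vault emits:

```azurecli-interactive
# Send Key Vault audit logs to a Log Analytics workspace
az monitor diagnostic-settings create --name kv-audit \
  --resource "/subscriptions/<SUB_ID>/resourceGroups/<RG>/providers/Microsoft.KeyVault/vaults/<VAULT_NAME>" \
  --workspace "/subscriptions/<SUB_ID>/resourceGroups/<RG>/providers/Microsoft.OperationalInsights/workspaces/<WORKSPACE>" \
  --logs '[{"category": "AuditEvent", "enabled": true}]'
```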
-### Azure Dedicated HSM
-The HSMs behind Azure Key Vault are by default multi-tenant. For customers who require single-tenant HSMs, Microsoft provides [Azure Dedicated HSM](../dedicated-hsm/overview.md) that has FIPS 140-2 Level 3 validation (certificate [#3205](https://csrc.nist.gov/projects/cryptographic-module-validation-program/Certificate/3205)), as well as [Common Criteria](http://www.commoncriteriaportal.org/) EAL4+ certification and conformance with the [Electronic Identification Authentication and Trust Services](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv:OJ.L_.2014.257.01.0073.01.ENG) (eIDAS) requirements. The underlying HSM devices support up to 10,000 transactions per second with RSA-2048 keys. Operating in FIPS mode imposes a minimum RSA key length of 2048 bits; however, the maximum supported RSA key length is 8192 bits.
+> [!NOTE]
+> For a comprehensive list of Azure Key Vault security recommendations, see the **[Security baseline for Azure Key Vault](../key-vault/general/security-baseline.md)**.
-Azure Dedicated HSM is most suitable for scenarios where customers require full administrative control and sole access to their HSM device for administrative purposes. Dedicated HSMs are provisioned directly on customer's Virtual Network (VNet) and can be used by applications running inside that VNet. After a device is provisioned, only the customer has administrative or application-level access to the device. Customers are responsible for the management of the device and they can get full activity logs directly from their devices.
+#### Vault
-Dedicated HSM can also connect to on-premises infrastructure via point-to-site or site-to-site Virtual Private Network (VPN). Listed below are the most common [customer requirements](../dedicated-hsm/faq.md#using-your-hsm) for Dedicated HSM:
+**[Vaults](../key-vault/general/overview.md)** provide a multi-tenant, low-cost, easy to deploy, zone-resilient (where available), and highly available key management solution suitable for most common cloud application scenarios. Vaults can store and safeguard [secrets, keys, and certificates](../key-vault/general/about-keys-secrets-certificates.md). They can be either software-protected (standard tier) or HSM-protected (premium tier). To see a comparison between the standard and premium tiers, see the [Azure Key Vault pricing page](https://azure.microsoft.com/pricing/details/key-vault/). Software-protected secrets, keys, and certificates are safeguarded by Azure, using industry-standard algorithms and key lengths. Customers who require additional assurances can choose to safeguard their secrets, keys, and certificates in vaults protected by multi-tenant HSMs. The corresponding HSMs are validated according to the [FIPS 140-2 standard](/azure/compliance/offerings/offering-fips-140-2), and have an overall Security Level 2 rating (certificate [#2643](https://csrc.nist.gov/Projects/cryptographic-module-validation-program/Certificate/2643)), which includes requirements for physical tamper evidence and role-based authentication. These HSMs meet Security Level 3 rating for several areas, including physical security, electromagnetic interference / electromagnetic compatibility (EMI/EMC), design assurance, and roles, services, and authentication.
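For example, the tier is selected at creation time with the `--sku` flag; a minimal sketch with placeholder names (use `--sku standard` for software-protected keys):

```azurecli-interactive
# Create a premium tier (HSM-protected) key vault
az keyvault create --name <VAULT_NAME> \
  --resource-group <RESOURCE_GROUP> \
  --location <LOCATION> \
  --sku premium
```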
-- Migrating applications from on-premises to Azure Virtual Machines
-- Customer security posture requires they manage all aspects of the HSM
-- Need for HSM validated to FIPS 140-2 Level 3
-- Proprietary HSM features that cannot be abstracted in the Azure Key Vault service
+Vaults enable support for [customer-managed keys](../security/fundamentals/encryption-models.md) (CMK) where customers can control their own keys in HSMs and use them to encrypt data at rest for a wide range of Azure services. As mentioned previously, customers can [import or generate encryption keys](../key-vault/keys/hsm-protected-keys.md) in HSMs ensuring that keys never leave the HSM boundary to support bring your own key (BYOK) scenarios.
-Microsoft has no administrative control after the customer accesses the device for the first time, at which point the customer changes the password. **Microsoft does not have any access to the keys stored in customer allocated Dedicated HSM.** Microsoft maintains monitor-level access (which is not an admin role and can be disabled by the customer) for telemetry collection. This access covers hardware monitoring such as temperature, power supply health, and fan health. Customers who disable this monitoring service will no longer receive proactive health alerts from Microsoft.
+Azure Key Vault can handle requesting and renewing certificates in vaults, including Transport Layer Security (TLS) certificates, enabling customers to enroll and automatically renew certificates from supported public Certificate Authorities. Azure Key Vault certificates support provides for the management of customer's X.509 certificates, which are built on top of keys and provide an automated renewal feature. The certificate owner can [create a certificate](../key-vault/certificates/create-certificate.md) through Azure Key Vault or by importing an existing certificate. Both self-signed and Certificate Authority generated certificates are supported. Moreover, the Key Vault certificate owner can implement secure storage and management of X.509 certificates without interaction with private keys.
-> [!NOTE]
-> Microsoft provides detailed instructions for deploying Azure Dedicated HSM into an existing Virtual Network (VNet) using the **[Command Line Interface](../dedicated-hsm/tutorial-deploy-hsm-cli.md)** (CLI) and **[PowerShell](../dedicated-hsm/tutorial-deploy-hsm-powershell.md)**.
+When customers create a key vault in a resource group, they can [manage access](../key-vault/general/secure-your-key-vault.md) by using Azure AD, which enables customers to grant access at a specific scope level by assigning the appropriate Azure roles. For example, to grant access to a user to manage key vaults, customers can assign a predefined key vault Contributor role to the user at a specific scope, including subscription, resource group, or specific resource.
+
+> [!IMPORTANT]
+> Customers should control tightly who has Contributor role access to their key vaults. If a user has Contributor permissions to a key vault management plane, the user can gain access to the data plane by setting a key vault access policy.
+>
+> *Additional resources:*
+> - How to **[secure access to a key vault](../key-vault/general/secure-your-key-vault.md)**
+
+#### Managed HSM
+
+**[Managed HSM](../key-vault/managed-hsm/overview.md)** provides a single-tenant, fully managed, highly available, zone-resilient (where available) HSM as a service to store and manage your cryptographic keys. It is most suitable for applications and usage scenarios that handle high value keys. It also helps customers meet the most stringent security, compliance, and regulatory requirements. Managed HSM uses FIPS 140-2 Level 3 validated HSMs (certificate [#3718](https://csrc.nist.gov/projects/cryptographic-module-validation-program/Certificate/3718)) to protect your cryptographic keys. Each managed HSM pool is an isolated single-tenant instance with its own [security domain](../key-vault/managed-hsm/security-domain.md) controlled by the customer and isolated cryptographically from instances belonging to other customers. Cryptographic isolation relies on [Intel Software Guard Extensions](https://software.intel.com/sgx) (SGX) technology that provides encrypted code and data to help ensure customer control.
+
+When a managed HSM is created, the requestor also provides a list of data plane administrators. Only these administrators are able to [access the managed HSM data plane](../key-vault/managed-hsm/access-control.md) to perform key operations and manage data plane role assignments (managed HSM local RBAC). The permission model for both the management and data planes uses the same syntax, but permissions are enforced at different levels, and role assignments use different scopes. Management plane Azure RBAC is enforced by Azure Resource Manager while data plane managed HSM local RBAC is enforced by the managed HSM itself.
+
+> [!IMPORTANT]
+> Unlike with key vaults, granting users management plane access to managed HSMs does not grant them any data plane access to keys or to data plane role assignments (managed HSM local RBAC). This isolation is by design to prevent inadvertent expansion of privileges affecting access to keys stored in managed HSMs.
+
+As mentioned previously, managed HSM supports [importing keys generated](../key-vault/managed-hsm/hsm-protected-keys-byok.md) in a customer’s on-premises HSM, ensuring the keys never leave the HSM protection boundary, a scenario also known as bring your own key (BYOK). Managed HSM supports integration with Azure services such as [Azure Storage](../storage/common/customer-managed-keys-overview.md), [Azure SQL Database](../azure-sql/database/transparent-data-encryption-byok-overview.md), [Azure Information Protection](/azure/information-protection/byok-price-restrictions), and others.
+
+Managed HSM enables customers to use the established Azure Key Vault API and management interfaces. Customers can use the same application development and deployment patterns for all their applications irrespective of the key management solution: multi-tenant vault or single-tenant managed HSM.
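
To illustrate, the following sketch creates an RSA key through the same azure-keyvault-keys Python client against both endpoints; the vault and managed HSM names are hypothetical placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

credential = DefaultAzureCredential()

# Only the endpoint URL distinguishes a multi-tenant vault from a
# single-tenant managed HSM; the client and the calls are identical.
vault_client = KeyClient("https://contoso-vault.vault.azure.net", credential)
hsm_client = KeyClient("https://contoso-hsm.managedhsm.azure.net", credential)

for client in (vault_client, hsm_client):
    key = client.create_rsa_key("app-encryption-key", size=3072)
    print(key.id)
```
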
## Compute isolation
-Microsoft Azure compute platform is based on [machine virtualization](../security/fundamentals/isolation-choices.md). This approach means that customer code – whether it’s deployed in a PaaS Worker Role or an IaaS Virtual Machine – executes in a virtual machine hosted by a Windows Server Hyper-V hypervisor. On each Azure physical server, also known as a node, there is a [Type 1 Hypervisor](https://en.wikipedia.org/wiki/Hypervisor) that runs directly over the hardware and divides the node into a variable number of Guest Virtual Machines (VMs), as shown in Figure 4. Each node also has one special Host VM, also known as Root VM, which runs the Host OS – a customized and hardened version of the latest Windows Server, which is stripped down to reduce the attack surface and include only those components necessary to manage the node. Isolation of the Root VM from the Guest VMs and the Guest VMs from one another is a key concept in Azure security architecture that forms the basis of Azure [compute isolation](../security/fundamentals/isolation-choices.md#compute-isolation), as described in Microsoft online documentation.
+The Microsoft Azure compute platform is based on [machine virtualization](../security/fundamentals/isolation-choices.md). This approach means that customer code – whether it’s deployed in a PaaS worker role or an IaaS virtual machine – executes in a virtual machine hosted by a Windows Server Hyper-V hypervisor. On each Azure physical server, also known as a node, there is a [Type 1 Hypervisor](https://en.wikipedia.org/wiki/Hypervisor) that runs directly over the hardware and divides the node into a variable number of Guest virtual machines (VMs), as shown in Figure 4. Each node has one special Host VM, also known as the Root VM, which runs the Host OS – a customized and hardened version of the latest Windows Server, stripped down to reduce the attack surface and include only those components necessary to manage the node. Isolation of the Root VM from the Guest VMs and the Guest VMs from one another is a key concept in Azure security architecture that forms the basis of Azure [compute isolation](../security/fundamentals/isolation-choices.md#compute-isolation), as described in Microsoft online documentation.
:::image type="content" source="./media/secure-isolation-fig4.png" alt-text="Isolation of Hypervisor, Root VM, and Guest VMs"::: **Figure 4.** Isolation of Hypervisor, Root VM, and Guest VMs
-Physical servers hosting VMs are grouped into clusters and they are independently managed by a scaled-out and redundant platform software component called the **[Fabric Controller](../security/fundamentals/isolation-choices.md#the-azure-fabric-controller)** (FC). Each FC manages the lifecycle of VMs running in its cluster, including provisioning and monitoring the health of the hardware under its control. For example, the FC is responsible for recreating VM instances on healthy servers when it determines that a server has failed. It also allocates infrastructure resources to tenant workloads and it manages unidirectional communication from the Host to Virtual Machines. Dividing the compute infrastructure into clusters isolates faults at the FC level and prevents certain classes of errors from affecting servers beyond the cluster in which they occur.
+Physical servers hosting VMs are grouped into clusters and they are independently managed by a scaled-out and redundant platform software component called the **[Fabric Controller](../security/fundamentals/isolation-choices.md#the-azure-fabric-controller)** (FC). Each FC manages the lifecycle of VMs running in its cluster, including provisioning and monitoring the health of the hardware under its control. For example, the FC is responsible for recreating VM instances on healthy servers when it determines that a server has failed. It also allocates infrastructure resources to tenant workloads and it manages unidirectional communication from the Host to virtual machines. Dividing the compute infrastructure into clusters isolates faults at the FC level and prevents certain classes of errors from affecting servers beyond the cluster in which they occur.
-The FC is the brain of the Azure compute platform and the Host Agent is its proxy, integrating servers into the platform so that the FC can deploy, monitor, and manage the virtual machines for customers and Azure cloud services. The Hypervisor/Host OS pairing leverages decades of Microsoft’s experience in operating system security, including security focused investments in [Microsoft Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) to provide strong isolation of Guest VMs. Hypervisor isolation is discussed later in this section, including assurances for strongly defined security boundaries enforced by the Hypervisor, defense-in-depth exploit mitigation, and strong security assurance processes.
+The FC is the brain of the Azure compute platform and the Host Agent is its proxy, integrating servers into the platform so that the FC can deploy, monitor, and manage the virtual machines for customers and Azure cloud services. The Hypervisor/Host OS pairing leverages decades of Microsoft’s experience in operating system security, including security-focused investments in [Microsoft Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) to provide strong isolation of Guest VMs. Hypervisor isolation is discussed later in this section, including assurances for strongly defined security boundaries enforced by the Hypervisor, defense-in-depth exploit mitigation, and strong security assurance processes.
### Management network isolation
There are three Virtual Local Area Networks (VLANs) in each compute hardware cluster, as shown in Figure 5:
- Main VLAN that interconnects untrusted customer nodes,
- Fabric Controller (FC) VLAN that contains trusted FCs and supporting systems, and
- Device VLAN that contains trusted network and other infrastructure devices.
-Communication is permitted from the FC VLAN to the main VLAN but cannot be initiated from the main VLAN to the FC VLAN. This bridge from the FC VLAN to the Main VLAN is used to reduce the overall complexity and improve reliability/resiliency of the network. The connection is secured in several ways to ensure that commands are trusted and successfully routed:
+Communication is permitted from the FC VLAN to the main VLAN but cannot be initiated from the main VLAN to the FC VLAN. This bridge from the FC VLAN to the main VLAN is used to reduce the overall complexity and improve reliability/resiliency of the network. The connection is secured in several ways to ensure that commands are trusted and successfully routed:
-- Communication from an FC to a Fabric Agent (FA) is unidirectional and requires mutual authentication via certificates. The FA implements a TLS-protected service that only responds to requests from the FC. It cannot initiate connections to the FC or other privileged internal nodes.
+- Communication from an FC to a Fabric Agent (FA) is unidirectional and requires mutual authentication via certificates. The FA implements a TLS-protected service that only responds to requests from the FC. It cannot initiate connections to the FC or other privileged internal nodes.
- The FC treats responses from the agent service as if they were untrusted. Communication with the agent is further restricted to a set of authorized IP addresses using firewall rules on each physical node, and routing rules at the border gateways.
- Throttling is used to ensure that customer VMs cannot saturate the network or prevent management commands from being routed.
-Communication is also blocked from the main VLAN to the device VLAN. This way, even if a node running customer code is compromised, it cannot attack nodes on either the FC or device VLANs.
+Communication is also blocked from the main VLAN to the device VLAN. This way, even if a node running customer code is compromised, it cannot attack nodes on either the FC or device VLANs.
These controls ensure that the management console’s access to the Hypervisor is always valid and available.
:::image type="content" source="./media/secure-isolation-fig5.png" alt-text="VLAN isolation":::
**Figure 5.** VLAN isolation
-The Hypervisor and the Host OS provide network packet filters so untrusted VMs cannot generate spoofed traffic or receive traffic not addressed to them, direct traffic to protected infrastructure endpoints, or send/receive inappropriate broadcast traffic. By default, traffic is blocked when a VM is created, and then the FC agent configures the packet filter to add rules and exceptions to allow authorized traffic. More detailed information about network traffic isolation and separation of tenant traffic is provided in *[Networking isolation](#networking-isolation)* section.
+The Hypervisor and the Host OS provide network packet filters so untrusted VMs cannot generate spoofed traffic or receive traffic not addressed to them, direct traffic to protected infrastructure endpoints, or send/receive inappropriate broadcast traffic. By default, traffic is blocked when a VM is created, and then the FC agent configures the packet filter to add rules and exceptions to allow authorized traffic. More detailed information about network traffic isolation and separation of tenant traffic is provided in the *[Networking isolation](#networking-isolation)* section.
### Management console and management plane
The Azure Management Console and Management Plane follow strict security architecture principles of least privilege to secure and isolate tenant processing:
-- **Management Console (MC)** – The MC in Azure Cloud is comprised of the Azure portal GUI and the Azure Resource Manager API layers. They both utilize user credentials to authenticate and authorized all operations.
-- **Management Plane (MP)** – This layer performs the actual management actions and is comprised of the Compute Resource Provider (CRP), Fabric Controller (FC), Fabric Agent (FA), and the underlying Hypervisor (which has its own Hypervisor Agent to service communication). These layers all utilize system contexts that are granted the least permissions needed to perform their operations.
+- **Management Console (MC)** – The MC in Azure Cloud consists of the Azure portal GUI and the Azure Resource Manager API layers. Both use user credentials to authenticate and authorize all operations.
+- **Management Plane (MP)** – This layer performs the actual management actions and consists of the Compute Resource Provider (CRP), Fabric Controller (FC), Fabric Agent (FA), and the underlying Hypervisor (which has its own Hypervisor Agent to service communication). These layers all utilize system contexts that are granted the minimum permissions needed to perform their operations.
-The Azure FC allocates infrastructure resources to tenants and manages unidirectional communications from the Host OS to Guest VMs. The VM placement algorithm of the Azure FC is highly sophisticated and nearly impossible to predict. The FA resides in the Host OS and it manages tenant VMs. The collection of the Azure Hypervisor, Host OS and FA, and customer VMs comprise a compute node, as shown in Figure 4. FCs manage FAs although FCs exist outside of compute nodes (separate FCs manage compute and storage clusters). If a customer updates their application’s configuration file while running in the MC, the MC communicates through CRP with the FC and the FC communicates with the FA.
+The Azure FC allocates infrastructure resources to tenants and manages unidirectional communications from the Host OS to Guest VMs. The VM placement algorithm of the Azure FC is highly sophisticated and nearly impossible to predict. The FA resides in the Host OS and it manages tenant VMs. The collection of the Azure Hypervisor, Host OS and FA, and customer VMs comprise a compute node, as shown in Figure 4. FCs manage FAs although FCs exist outside of compute nodes (separate FCs manage compute and storage clusters). If a customer updates their application’s configuration file while running in the MC, the MC communicates through CRP with the FC and the FC communicates with the FA.
CRP is the front-end service for Azure Compute, exposing consistent compute APIs through Azure Resource Manager, thereby enabling customers to create and manage virtual machine resources and extensions via simple templates.
-Communications among various components (e.g., Azure Resource Manager to and from CRP, CRP to and from FC, FC to and from Hypervisor Agent) all operate on different communication channels with different identities and different permissions sets. This design follows common least-privilege models to ensure that a compromise of any single layer will prevent additional actions. Separate communications channels ensure that communications cannot bypass any layer in the chain. Figure 6 illustrates how the MC and MP securely communicate within the Azure Cloud for Hypervisor interaction initiated by a user’s [OAuth 2.0 authentication to Azure Active Directory](../active-directory/azuread-dev/v1-protocols-oauth-code.md).
+Communications among various components (e.g., Azure Resource Manager to and from CRP, CRP to and from FC, FC to and from Hypervisor Agent) all operate on different communication channels with different identities and different permission sets. This design follows common least-privilege models to ensure that a compromise of any single layer will prevent additional actions. Separate communication channels ensure that communications cannot bypass any layer in the chain. Figure 6 illustrates how the MC and MP securely communicate within the Azure cloud for Hypervisor interaction initiated by a user’s [OAuth 2.0 authentication to Azure Active Directory](../active-directory/azuread-dev/v1-protocols-oauth-code.md).
:::image type="content" source="./media/secure-isolation-fig6.png" alt-text="Management Console and Management Plane interaction for secure management flow" border="false"::: **Figure 6.** Management Console and Management Plane interaction for secure management flow
-All management commands are authenticated via RSA signed certificate or JSON Web Token (JWT). Authentication and command channels are encrypted via Transport Layer Security (TLS) 1.2 as described in *[Data encryption in transit](#data-encryption-in-transit)* section. Server certificates are used to provide TLS connectivity to the authentication providers where a separate authorization mechanism is used, e.g., Azure Active Directory or datacenter Security Token Service (dSTS). dSTS is a token provider like Azure Active Directory that is isolated to the Microsoft datacenter and utilized for service level communications.
+All management commands are authenticated via RSA-signed certificate or JSON Web Token (JWT). Authentication and command channels are encrypted via Transport Layer Security (TLS) 1.2 as described in the *[Data encryption in transit](#data-encryption-in-transit)* section. Server certificates are used to provide TLS connectivity to the authentication providers where a separate authorization mechanism is used, e.g., Azure Active Directory or datacenter Security Token Service (dSTS). dSTS is a token provider like Azure Active Directory that is isolated to the Microsoft datacenter and utilized for service-level communications.
-Figure 6 illustrates the management flow corresponding to a user command to stop a virtual machine. The steps enumerated in Table 1 apply to other management commands in the same way and utilize the same encryption and authentication flow.
+Figure 6 illustrates the management flow corresponding to a user command to stop a virtual machine. The steps enumerated in Table 1 apply to other management commands in the same way and utilize the same encryption and authentication flow.
**Table 1.** Management flow involving various MC and MP components
|**5.**|Azure Resource Manager sends the request to CRP. The call is authenticated via OAuth using a JSON Web Token representing the Azure Resource Manager system identity from dSTS, thus transitioning from user to system context.|JSON Web Token (dSTS)|TLS 1.2|
|**6.**|CRP validates the request and determines which fabric controller can complete the request. CRP requests a certificate from dSTS based on its client certificate so that it can connect to the specific Fabric Controller (FC) that is the target of the command. The token will grant permissions only to that specific FC if CRP is allowed to communicate with that FC.|Client Certificate|TLS 1.2|
|**7.**|CRP then sends the request to the correct FC with the JSON Web Token that was created by dSTS.|JSON Web Token (dSTS)|TLS 1.2|
-|**8.**|FC then validates the command is allowed and comes from a trusted source. Then it establishes a secure TLS connection to the correct Fabric Agent (FA) in the cluster that can execute the command by using a certificate that is unique to the target FA and the FC. Once the secure connection is established the command is transmitted.|Mutual Certificate|TLS 1.2|
-|**9.**|The FA again validates the command is allowed and comes from a trusted source. Once validated, the FA will establish a secure connection using mutual certificate authentication and issue the command to the Hypervisor Agent that is only accessible by the FA.|Mutual Certificate|TLS 1.2|
+|**8.**|FC then validates the command is allowed and comes from a trusted source. Then it establishes a secure TLS connection to the correct Fabric Agent (FA) in the cluster that can execute the command by using a certificate that is unique to the target FA and the FC. Once the secure connection is established the command is transmitted.|Mutual Certificate|TLS 1.2|
+|**9.**|The FA again validates the command is allowed and comes from a trusted source. Once validated, the FA will establish a secure connection using mutual certificate authentication and issue the command to the Hypervisor Agent that is only accessible by the FA.|Mutual Certificate|TLS 1.2|
|**10.**|Hypervisor Agent on the host executes an internal call to stop the VM.|System Context|N.A.|
-Commands generated through all steps of the process identified in this section and sent to the FC and FA on each node, are written to a local audit log and distributed to multiple analytics systems for stream processing in order to monitor system health and track security events and patterns. Tracking includes events that were processed successfully, as well as events that were invalid. Invalid requests are processed by the intrusion detection systems to detect anomalies.
+Commands generated through all steps of the process identified in this section, and sent to the FC and FA on each node, are written to a local audit log and distributed to multiple analytics systems for stream processing in order to monitor system health and track security events and patterns. Tracking includes events that were processed successfully, as well as events that were invalid. Invalid requests are processed by the intrusion detection systems to detect anomalies.
### Logical isolation implementation options
Azure provides isolation of compute processing through a multi-layered approach, including:
-- **Hypervisor isolation** for services that provide cryptographically certain isolation by using separate virtual machines and leveraging Azure Hypervisor isolation. Examples: *App Service, Azure Container Instances, Azure Databricks, Azure Functions, Azure Kubernetes Service, Azure Machine Learning, Cloud Services, Data Factory, Service Fabric, Virtual Machines, Virtual Machine Scale Sets.*
-- **Drawbridge isolation** inside a VM for services that provide cryptographically certain isolation to workloads running on the same virtual machine by leveraging isolation provided by [Drawbridge](https://www.microsoft.com/research/project/drawbridge/). These services provide small units of processing using customer code. To provide security isolation, Drawbridge runs a user process together with a light-weight version of the Windows kernel (library OS) inside a pico-process. A pico-process is a secured process with no direct access to services or resources of the Host system. Examples: *Automation, Azure Database for MySQL, Azure Database for PostgreSQL, Azure SQL Database, Azure Stream Analytics.*
-- **User context-based isolation** for services that are comprised solely of Microsoft-controlled code and customer code is not allowed to run. Examples: *API Management, Application Gateway, Azure Active Directory, Azure Backup, Azure Cache for Redis, Azure DNS, Azure Information Protection, Azure IoT Hub, Azure Key Vault, Azure portal, Azure Monitor (including Log Analytics), Azure Security Center, Azure Site Recovery, Container Registry, Content Delivery Network, Event Grid, Event Hubs, Load Balancer, Service Bus, Storage, Virtual Network, VPN Gateway, Traffic Manager.*
+- **Hypervisor isolation** for services that provide cryptographically certain isolation by using separate virtual machines and leveraging Azure Hypervisor isolation. Examples: *App Service, Azure Container Instances, Azure Databricks, Azure Functions, Azure Kubernetes Service, Azure Machine Learning, Cloud Services, Data Factory, Service Fabric, Virtual Machines, Virtual Machine Scale Sets.*
+- **Drawbridge isolation** inside a VM for services that provide cryptographically certain isolation to workloads running on the same virtual machine by leveraging isolation provided by [Drawbridge](https://www.microsoft.com/research/project/drawbridge/). These services provide small units of processing using customer code. To provide security isolation, Drawbridge runs a user process together with a light-weight version of the Windows kernel (library OS) inside a *pico-process*. A pico-process is a secured process with no direct access to services or resources of the Host system. Examples: *Automation, Azure Database for MySQL, Azure Database for PostgreSQL, Azure SQL Database, Azure Stream Analytics.*
+- **User context-based isolation** for services that consist solely of Microsoft-controlled code, where customer code is not allowed to run. Examples: *API Management, Application Gateway, Azure Active Directory, Azure Backup, Azure Cache for Redis, Azure DNS, Azure Information Protection, Azure IoT Hub, Azure Key Vault, Azure portal, Azure Monitor (including Log Analytics), Azure Security Center, Azure Site Recovery, Container Registry, Content Delivery Network, Event Grid, Event Hubs, Load Balancer, Service Bus, Storage, Virtual Network, VPN Gateway, Traffic Manager.*
These logical isolation options are discussed in the rest of this section.

#### Hypervisor isolation
-Hypervisor isolation in Azure is based on [Microsoft Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) technology, which enables Azure Hypervisor-based isolation to benefit from decades of Microsoft experience in operating system security and investments in Hyper-V technology for virtual machine isolation. Customers can review independent third-party assessment reports about Hyper-V security functions, including the [National Information Assurance Partnership (NIAP) Common Criteria Evaluation and Validation Scheme (CCEVS) reports](https://www.niap-ccevs.org/Product/PCL.cfm?par303=Microsoft%20Corporation) such as the [report published in Aug-2019](https://www.commoncriteriaportal.org/files/epfiles/2019-22-INF-2839.pdf) that is discussed herein.
+Hypervisor isolation in Azure is based on [Microsoft Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) technology, which enables Azure Hypervisor-based isolation to benefit from decades of Microsoft experience in operating system security and investments in Hyper-V technology for virtual machine isolation. Customers can review independent third-party assessment reports about Hyper-V security functions, including the [National Information Assurance Partnership (NIAP) Common Criteria Evaluation and Validation Scheme (CCEVS) reports](https://www.niap-ccevs.org/Product/PCL.cfm?par303=Microsoft%20Corporation) such as the [report published in Aug-2019](https://www.commoncriteriaportal.org/files/epfiles/2019-22-INF-2839.pdf) that is discussed herein.
-The Target of Evaluation (TOE) was composed of Windows 10 and Windows Server Standard and Datacenter Editions (version 1903, May 2019 update), including Windows Server 2016 and 2019 Hyper-V evaluation platforms (&#8220;Windows&#8221;). TOE enforces the following security policies as described in the report:
+The Target of Evaluation (TOE) was composed of Windows 10 and Windows Server Standard and Datacenter Editions (version 1903, May 2019 update), including Windows Server 2016 and 2019 Hyper-V evaluation platforms (“Windows”). The TOE enforces the following security policies as described in the report:
-- **Security Audit** – Windows has the ability to collect audit data, review audit logs, protect audit logs from overflow, and restrict access to audit logs. Audit information generated by the system includes the date and time of the event, the user identity that caused the event to be generated, and other event-specific data. Authorized administrators can review, search, and sort audit records. Authorized administrators can also configure the audit system to include or exclude potentially auditable events to be audited based on a wide range of characteristics. In the context of this evaluation, the protection profile requirements cover generating audit events, selecting which events should be audited, and providing secure storage for audit event entries.
+- **Security Audit** – Windows has the ability to collect audit data, review audit logs, protect audit logs from overflow, and restrict access to audit logs. Audit information generated by the system includes the date and time of the event, the user identity that caused the event to be generated, and other event-specific data. Authorized administrators can review, search, and sort audit records. Authorized administrators can also configure the audit system to include or exclude potentially auditable events to be audited based on a wide range of characteristics. In the context of this evaluation, the protection profile requirements cover generating audit events, selecting which events should be audited, and providing secure storage for audit event entries.
- **Cryptographic Support** – Windows provides FIPS 140-2 Cryptographic Algorithm Validation Program (CAVP) validated cryptographic functions that support encryption/decryption, cryptographic signatures, cryptographic hashing, cryptographic key agreement (which is not studied in this evaluation), and random number generation. The TOE additionally provides support for public keys, credential management, and certificate validation functions and provides support for the National Security Agency’s Suite B cryptographic algorithms. Windows also provides extensive auditing support of cryptographic operations, the ability to replace cryptographic functions and random number generators with alternative implementations, and a key isolation service designed to limit the potential exposure of secret and private keys. In addition to using cryptography for its own security functions, Windows offers access to the cryptographic support functions for user-mode and kernel-mode programs. Public key certificates generated and used by Windows authenticate users and machines as well as protect both user and system data in transit.
- **User Data Protection** – In the context of this evaluation, Windows protects user data and provides virtual private networking capabilities.
- **Identification and Authentication** – Each Windows user must be identified and authenticated based on administrator-defined policy prior to performing any TSF-mediated functions. Windows maintains databases of accounts including their identities, authentication information, group associations, and privilege and logon rights associations. Windows account policy functions include the ability to define the minimum password length, the number of failed logon attempts, the duration of lockout, and password age.
The critical Hypervisor isolation is provided through:
- Strongly defined security boundaries
- Defense-in-depth exploit mitigations
- Strong security assurance processes
-These technologies are described in the rest of this section. **They enable Azure Hypervisor to offer strong security assurances for tenant separation in a multi-tenant cloud.**
+These technologies are described in the rest of this section. **They enable Azure Hypervisor to offer strong security assurances for tenant separation in a multi-tenant cloud.**
##### *Strongly defined security boundaries*
-Customer code executes in a Hypervisor VM and benefits from Hypervisor enforced security boundaries, as shown in Figure 7. Azure Hypervisor is based on [Microsoft Hyper-V](/virtualization/hyper-v-on-windows/reference/hyper-v-architecture) technology. It divides an Azure node into a variable number of Guest VMs that have separate address spaces where they can load an operating system (OS) and applications operating in parallel to the Host OS that executes in the Root partition of the node.
+Customer code executes in a Hypervisor VM and benefits from Hypervisor-enforced security boundaries, as shown in Figure 7. Azure Hypervisor is based on [Microsoft Hyper-V](/virtualization/hyper-v-on-windows/reference/hyper-v-architecture) technology. It divides an Azure node into a variable number of Guest VMs that have separate address spaces where they can load an operating system (OS) and applications operating in parallel to the Host OS that executes in the Root partition of the node.
:::image type="content" source="./media/secure-isolation-fig7.png" alt-text="Compute isolation with Azure Hypervisor"::: **Figure 7.** Compute isolation with Azure Hypervisor (see online [glossary of terms](/virtualization/hyper-v-on-windows/reference/hyper-v-architecture#glossary))
-The Azure Hypervisor acts like a micro-kernel, passing all hardware access requests from Guest VMs using a Virtualization Service Client (VSC) to the Host OS for processing by using a shared-memory interface called VMBus. The Host OS proxies the hardware requests using a Virtualization Service Provider (VSP) that prevents users from obtaining raw read/write/execute access to the system and mitigates the risk of sharing system resources. The privileged Root partition (also known as Host OS) has direct access to the physical devices/peripherals on the system (e.g., storage controllers, GPUs, networking adapters, etc.). The Host OS allows Guest partitions to share the use of these physical devices by exposing virtual devices to each Guest partition. Consequently, an operating system executing in a Guest partition has access to virtualized peripheral devices that are provided by VSPs executing in the Root partition. These virtual device representations can take one of three forms:
+The Azure Hypervisor acts like a micro-kernel, passing all hardware access requests from Guest VMs using a Virtualization Service Client (VSC) to the Host OS for processing by using a shared-memory interface called VMBus. The Host OS proxies the hardware requests using a Virtualization Service Provider (VSP) that prevents users from obtaining raw read/write/execute access to the system and mitigates the risk of sharing system resources. The privileged Root partition (also known as Host OS) has direct access to the physical devices/peripherals on the system (e.g., storage controllers, GPUs, networking adapters, etc.). The Host OS allows Guest partitions to share the use of these physical devices by exposing virtual devices to each Guest partition. Consequently, an operating system executing in a Guest partition has access to virtualized peripheral devices that are provided by VSPs executing in the Root partition. These virtual device representations can take one of three forms:
-- **Emulated devices** – The Host OS may expose a virtual device with an interface identical to what would be provided by a corresponding physical device. In this case, an operating system in a Guest partition would use the same device drivers as it does when running on a physical system. The Host OS would emulate the behavior of a physical device to the Guest partition.
-- **Para-virtualized devices** – The Host OS may expose virtual devices with a virtualization-specific interface using the VMBus shared memory interface between the Host OS and the Guest. In this model, the Guest partition uses device drivers specifically designed to implement a virtualized interface. These para-virtualized devices are sometimes referred to as “synthetic” devices.
-- **Hardware-accelerated devices** – The Host OS may expose actual hardware peripherals directly to the Guest partition. This model allows for high I/O performance in a Guest partition, as the Guest partition can directly access hardware device resources without going through the Host OS. [Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is an example of a hardware accelerated device. Isolation in this model is achieved using input-output memory management units (I/O MMUs) to provide address space and interrupt isolation between each partition.
+- **Emulated devices** – The Host OS may expose a virtual device with an interface identical to what would be provided by a corresponding physical device. In this case, an operating system in a Guest partition would use the same device drivers as it does when running on a physical system. The Host OS would emulate the behavior of a physical device to the Guest partition.
+- **Para-virtualized devices** – The Host OS may expose virtual devices with a virtualization-specific interface using the VMBus shared memory interface between the Host OS and the Guest. In this model, the Guest partition uses device drivers specifically designed to implement a virtualized interface. These para-virtualized devices are sometimes referred to as “synthetic” devices.
+- **Hardware-accelerated devices** – The Host OS may expose actual hardware peripherals directly to the Guest partition. This model allows for high I/O performance in a Guest partition, as the Guest partition can directly access hardware device resources without going through the Host OS. [Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is an example of a hardware accelerated device. Isolation in this model is achieved using input-output memory management units (I/O MMUs) to provide address space and interrupt isolation between each partition.
-Virtualization extensions in the Host CPU enable the Azure Hypervisor to enforce isolation between partitions. The following fundamental CPU capabilities provide the hardware building blocks for Hypervisor isolation:
+Virtualization extensions in the Host CPU enable the Azure Hypervisor to enforce isolation between partitions. The following fundamental CPU capabilities provide the hardware building blocks for Hypervisor isolation:
-- **Second-level address translation** – the Hypervisor controls what memory resources a partition is allowed to access through the use of second-level page tables provided by the CPU’s memory management unit (MMU). The CPU’s MMU uses second-level address translation under Hypervisor control to enforce protection on memory accesses performed by:
+- **Second-level address translation** – the Hypervisor controls what memory resources a partition is allowed to access through the use of second-level page tables provided by the CPU’s memory management unit (MMU). The CPU’s MMU uses second-level address translation under Hypervisor control to enforce protection on memory accesses performed by:
- CPU when running under the context of a partition.
- - I/O devices that are being accessed directly by Guest partitions.
-- **CPU context** – the Hypervisor leverages virtualization extensions in the CPU to restrict privileges and CPU context that can be accessed while a Guest partition is running. The Hypervisor also uses these facilities to save and restore state when sharing CPUs between multiple partitions to ensure isolation of CPU state between the partitions.
+ - I/O devices that are being accessed directly by Guest partitions.
+- **CPU context** – the Hypervisor leverages virtualization extensions in the CPU to restrict privileges and CPU context that can be accessed while a Guest partition is running. The Hypervisor also uses these facilities to save and restore state when sharing CPUs between multiple partitions to ensure isolation of CPU state between the partitions.
-The Azure Hypervisor makes extensive use of these processor facilities to provide isolation between partitions. The emergence of speculative side channel attacks has identified potential weaknesses in some of these processor isolation capabilities. In a multi-tenant architecture, any cross-VM attack across different tenants involves two steps: placing an adversary-controlled VM on the same Host as one of the victim VMs, and then breaching the logical isolation boundary to perform a side-channel attack. Azure provides protection from both threat vectors by using an advanced VM placement algorithm enforcing memory and process separation for logical isolation, as well as secure network traffic routing with cryptographic certainty at the Hypervisor. As discussed in section titled *[Exploitation of vulnerabilities in virtualization technologies](#exploitation-of-vulnerabilities-in-virtualization-technologies)* later in the article, the Azure Hypervisor has been architected to provide robust isolation within the hypervisor itself that helps mitigate a wide range of sophisticated side channel attacks.
+The Azure Hypervisor makes extensive use of these processor facilities to provide isolation between partitions. The emergence of speculative side channel attacks has identified potential weaknesses in some of these processor isolation capabilities. In a multi-tenant architecture, any cross-VM attack across different tenants involves two steps: placing an adversary-controlled VM on the same Host as one of the victim VMs, and then breaching the logical isolation boundary to perform a side-channel attack. Azure provides protection from both threat vectors by using an advanced VM placement algorithm enforcing memory and process separation for logical isolation, as well as secure network traffic routing with cryptographic certainty at the Hypervisor. As discussed in the section titled *[Exploitation of vulnerabilities in virtualization technologies](#exploitation-of-vulnerabilities-in-virtualization-technologies)* later in the article, the Azure Hypervisor has been architected to provide robust isolation within the hypervisor itself that helps mitigate a wide range of sophisticated side channel attacks.
-The Azure Hypervisor defined security boundaries provide the base level isolation primitives for strong segmentation of code, data, and resource between potentially hostile multi-tenants on shared hardware. These isolation primitives are used to create multi-tenant resource isolation scenarios including:
+The security boundaries defined by the Azure Hypervisor provide the base-level isolation primitives for strong segmentation of code, data, and resources between potentially hostile tenants on shared hardware. These isolation primitives are used to create multi-tenant resource isolation scenarios including:
-- **Isolation of network traffic between potentially hostile guests** – Virtual Networks (VNets) provide isolation of network traffic between tenants as part of their fundamental design, as described later in *[Separation of tenant network traffic](#separation-of-tenant-network-traffic)* section. VNet forms an isolation boundary where the VMs within a VNet can only communicate with each other. Any traffic destined to a VM from within the VNet or external senders without the proper policy configured will be dropped by the Host and not delivered to the VM.
-- **Isolation for encryption keys and cryptographic material** – Customers can further augment the isolation capabilities with the use of [hardware security managers or specialized key storage](../security/fundamentals/encryption-overview.md), e.g., storing encryption keys in FIPS 140-2 validated Hardware Security Modules via [Azure Key Vault](../key-vault/general/overview.md).
+- **Isolation of network traffic between potentially hostile guests** – Virtual Network (VNet) provides isolation of network traffic between tenants as part of its fundamental design, as described later in the *[Separation of tenant network traffic](#separation-of-tenant-network-traffic)* section. A VNet forms an isolation boundary where the VMs within a VNet can only communicate with each other. Any traffic destined to a VM from within the VNet, or from external senders without the proper policy configured, will be dropped by the Host and not delivered to the VM.
+- **Isolation for encryption keys and cryptographic material** – Customers can further augment the isolation capabilities with the use of [hardware security managers or specialized key storage](../security/fundamentals/encryption-overview.md), e.g., storing encryption keys in FIPS 140-2 validated hardware security modules via [Azure Key Vault](../key-vault/general/overview.md).
- **Scheduling of system resources** – Azure design includes guaranteed availability and segmentation of compute, memory, storage, and both direct and para-virtualized device access.

The Azure Hypervisor meets the security objectives shown in Table 2.
|Objective|Source|
|--|--|
-|**Isolation**|The Azure Hypervisor security policy mandates no information transfer between VMs. This policy requires capabilities in the Virtual Machine Manager (VMM) and hardware for the isolation of memory, devices, networking, and managed resources such as persisted data.|
-|**VMM integrity**|Integrity is a core security objective for virtualization systems. To achieve system integrity, the integrity of each Hypervisor component is established and maintained. This objective concerns only the integrity of the Hypervisor itself, not the integrity of the physical platform or software running inside VMs.|
-|**Platform integrity**|The integrity of the Hypervisor depends on the integrity of the hardware and software on which it relies. Although the Hypervisor does not have direct control over the integrity of the platform, Azure relies on hardware and firmware mechanisms such as the [Cerberus](https://azure.microsoft.com/blog/microsoft-creates-industry-standards-for-datacenter-hardware-storage-and-security/) security microcontroller to [protect the underlying platform integrity](https://www.youtube.com/watch?v=oUvKEw8OchI), thereby preventing the VMM and Guests from running should platform integrity be compromised.|
+|**Isolation**|The Azure Hypervisor security policy mandates no information transfer between VMs. This policy requires capabilities in the Virtual Machine Manager (VMM) and hardware for the isolation of memory, devices, networking, and managed resources such as persisted data.|
+|**VMM integrity**|Integrity is a core security objective for virtualization systems. To achieve system integrity, the integrity of each Hypervisor component is established and maintained. This objective concerns only the integrity of the Hypervisor itself, not the integrity of the physical platform or software running inside VMs.|
+|**Platform integrity**|The integrity of the Hypervisor depends on the integrity of the hardware and software on which it relies. Although the Hypervisor does not have direct control over the integrity of the platform, Azure relies on hardware and firmware mechanisms such as the [Cerberus](https://azure.microsoft.com/blog/microsoft-creates-industry-standards-for-datacenter-hardware-storage-and-security/) security microcontroller to [protect the underlying platform integrity](https://www.youtube.com/watch?v=oUvKEw8OchI), thereby preventing the VMM and Guests from running should platform integrity be compromised.|
|**Management access**|Management functions are exercised only by authorized administrators, connected over secure connections with a principle of least privilege enforced by a fine-grained role access control mechanism.|
|**Audit**|Azure provides audit capability to capture and protect system data so that it can later be inspected.|

##### *Defense-in-depth exploit mitigations*
-To further mitigate the risk of a security compromise, Microsoft has invested in numerous defense-in-depth mitigations in Azure systems software, hardware, and firmware to provide strong real-world isolation guarantees to Azure customers. As mentioned previously, Azure Hypervisor isolation is based on [Microsoft Hyper-V](/virtualization/hyper-v-on-windows/reference/hyper-v-architecture) technology, which enables Azure Hypervisor to benefit from decades of Microsoft experience in operating system security and investments in Hyper-V technology for virtual machine isolation.
+To further mitigate the risk of a security compromise, Microsoft has invested in numerous defense-in-depth mitigations in Azure systems software, hardware, and firmware to provide strong real-world isolation guarantees to Azure customers. As mentioned previously, Azure Hypervisor isolation is based on [Microsoft Hyper-V](/virtualization/hyper-v-on-windows/reference/hyper-v-architecture) technology, which enables Azure Hypervisor to benefit from decades of Microsoft experience in operating system security and investments in Hyper-V technology for virtual machine isolation.
Listed below are some key design principles adopted by Microsoft to secure Hyper-V:
- Many components use [smart pointers](/cpp/cpp/smart-pointers-modern-cpp) to eliminate the risk of [use-after-free](https://owasp.org/www-community/vulnerabilities/Using_freed_memory) bugs.
- Most Hyper-V kernel-mode code uses a heap allocator that zeros on allocation to eliminate uninitialized memory bugs.
- Eliminate common vulnerability classes with compiler mitigations
- - All Hyper-V code is compiled with InitAll which [eliminates uninitialized stack variables](https://msrc-blog.microsoft.com/2020/05/13/solving-uninitialized-stack-memory-on-windows/). This approach was implemented because many historical vulnerabilities in Hyper-V were caused by uninitialized stack variables.
+ - All Hyper-V code is compiled with InitAll which [eliminates uninitialized stack variables](https://msrc-blog.microsoft.com/2020/05/13/solving-uninitialized-stack-memory-on-windows/). This approach was implemented because many historical vulnerabilities in Hyper-V were caused by uninitialized stack variables.
  - All Hyper-V code is compiled with [stack canaries](https://en.wikipedia.org/wiki/Stack_buffer_overflow#Stack_canaries) to dramatically reduce the risk of stack overflow vulnerabilities.
- Find issues that make their way into the product
  - All Windows code has a set of static analysis rules run across it.
- - All Hyper-V code is code reviewed and fuzzed. For more information on fuzzing, see *[Security assurance processes and practices](#security-assurance-processes-and-practices)* section later in this article.
+   - All Hyper-V code is code reviewed and fuzzed. For more information on fuzzing, see the *[Security assurance processes and practices](#security-assurance-processes-and-practices)* section later in this article.
- Make exploitation of remaining vulnerabilities more difficult
- - The VM Worker Process has the following mitigations applied:
+ - The VM worker process has the following mitigations applied:
    - [Arbitrary Code Guard](https://blogs.windows.com/msedgedev/2017/02/23/mitigating-arbitrary-native-code-execution/) – Dynamically generated code cannot be loaded in the VM worker process.
    - [Code Integrity Guard](https://blogs.windows.com/msedgedev/2017/02/23/mitigating-arbitrary-native-code-execution/) – Only Microsoft-signed code can be loaded in the VM worker process.
    - [Control Flow Guard (CFG)](/windows/win32/secbp/control-flow-guard) – Provides coarse-grained control flow protection to indirect calls and jumps.
    - [Address Space Layout Randomization (ASLR)](https://en.wikipedia.org/wiki/Address_space_layout_randomization) – Randomizes the layout of heaps, stacks, binaries, and other data structures in the address space to make exploitation less reliable.
    - [Data Execution Prevention (DEP/NX)](/windows/win32/win7appqual/dep-nx-protection) – Only pages of memory intended to contain code are executable.
-Microsoft investments in Hyper-V security benefit Azure Hypervisor directly. The goal of defense-in-depth mitigations is to make weaponized exploitation of a vulnerability as expensive as possible for an attacker, limiting their impact and maximizing the window for detection. All exploit mitigations are evaluated for effectiveness by a thorough security review of the Azure Hypervisor attack surface using methods that adversaries may employ. Table 3 outlines some of the mitigations intended to protect the Hypervisor isolation boundaries and hardware host integrity.
+Microsoft investments in Hyper-V security benefit Azure Hypervisor directly. The goal of defense-in-depth mitigations is to make weaponized exploitation of a vulnerability as expensive as possible for an attacker, limiting their impact and maximizing the window for detection. All exploit mitigations are evaluated for effectiveness by a thorough security review of the Azure Hypervisor attack surface using methods that adversaries may employ. Table 3 outlines some of the mitigations intended to protect the Hypervisor isolation boundaries and hardware host integrity.
**Table 3.** Azure Hypervisor defense-in-depth
|**Hardware root-of-trust with platform secure boot**|Ensures host only boots exact firmware and OS image required|Windows [secure boot](/windows-hardware/design/device-experiences/oem-secure-boot) validates that Azure Hypervisor infrastructure is only bootable in a known good configuration, aligned to Azure firmware, hardware, and kernel production versions.|
|**Reduced attack surface VMM**|Protects against escalation of privileges in VMM user functions|The Azure Hypervisor Virtual Machine Manager (VMM) contains both user and kernel mode components. User mode components are isolated to prevent break-out into kernel mode functions in addition to numerous layered mitigations.|
-Moreover, Azure has adopted an assume-breach security strategy implemented via [Red Teaming](https://gallery.technet.microsoft.com/Cloud-Red-Teaming-b837392e). This approach relies on a dedicated team of security researchers and engineers who conduct continuous ongoing testing of Azure systems and operations using the same tactics, techniques, and procedures as real adversaries against live production infrastructure, without the foreknowledge of the Azure infrastructure and platform engineering or operations teams. This approach tests security detection and response capabilities and helps identify production vulnerabilities in Azure Hypervisor and other systems, including configuration errors, invalid assumptions, or other security issues in a controlled manner. Microsoft invests heavily in these innovative security measures for continuous Azure threat mitigation.
+Moreover, Azure has adopted an assume-breach security strategy implemented via [Red Teaming](https://download.microsoft.com/download/C/1/9/C1990DBA-502F-4C2A-848D-392B93D9B9C3/Microsoft_Enterprise_Cloud_Red_Teaming.pdf). This approach relies on a dedicated team of security researchers and engineers who conduct continuous ongoing testing of Azure systems and operations using the same tactics, techniques, and procedures as real adversaries against live production infrastructure, without the foreknowledge of the Azure infrastructure and platform engineering or operations teams. This approach tests security detection and response capabilities and helps identify production vulnerabilities in Azure Hypervisor and other systems, including configuration errors, invalid assumptions, or other security issues in a controlled manner. Microsoft invests heavily in these innovative security measures for continuous Azure threat mitigation.
##### *Strong security assurance processes*
-The attack surface in Hyper-V is [well understood](https://msrc-blog.microsoft.com/2018/12/10/first-steps-in-hyper-v-research/). It has been the subject of [ongoing research](https://msrc-blog.microsoft.com/2019/09/11/attacking-the-vm-worker-process/) and thorough security reviews. Microsoft has been transparent about the Hyper-V attack surface and underlying security architecture as demonstrated during a public [presentation at a Black Hat conference](https://github.com/Microsoft/MSRC-Security-Research/blob/master/presentations/2018_08_BlackHatUSA/A%20Dive%20in%20to%20Hyper-V%20Architecture%20and%20Vulnerabilities.pdf) in 2018. Microsoft stands behind the robustness and quality of Hyper-V isolation with a [$250,000 bug bounty program](https://www.microsoft.com/msrc/bounty-hyper-v) for critical Remote Code Execution (RCE), information disclosure, and Denial of Service (DOS) vulnerabilities reported in Hyper-V. By leveraging the same Hyper-V technology in Windows Server and Azure cloud platform, the publicly available documentation and bug bounty program ensure that security improvements will accrue to all users of Microsoft products and services. Table 4 summarizes the key attack surface points from the Black Hat presentation.
+The attack surface in Hyper-V is [well understood](https://msrc-blog.microsoft.com/2018/12/10/first-steps-in-hyper-v-research/). It has been the subject of [ongoing research](https://msrc-blog.microsoft.com/2019/09/11/attacking-the-vm-worker-process/) and thorough security reviews. Microsoft has been transparent about the Hyper-V attack surface and underlying security architecture as demonstrated during a public [presentation at a Black Hat conference](https://github.com/Microsoft/MSRC-Security-Research/blob/master/presentations/2018_08_BlackHatUSA/A%20Dive%20in%20to%20Hyper-V%20Architecture%20and%20Vulnerabilities.pdf) in 2018. Microsoft stands behind the robustness and quality of Hyper-V isolation with a [$250,000 bug bounty program](https://www.microsoft.com/msrc/bounty-hyper-v) for critical Remote Code Execution (RCE), information disclosure, and Denial of Service (DOS) vulnerabilities reported in Hyper-V. By leveraging the same Hyper-V technology in Windows Server and Azure cloud platform, the publicly available documentation and bug bounty program ensure that security improvements will accrue to all users of Microsoft products and services. Table 4 summarizes the key attack surface points from the Black Hat presentation.
**Table 4.** Hyper-V attack surface details
|**Host partition kernel-mode components**|System in kernel mode: full system compromise with the ability to compromise other Guests|- Virtual Infrastructure Driver (VID) intercept handling </br>- Kernel-mode client library </br>- Virtual Machine Bus (VMBus) channel messages </br>- Storage Virtualization Service Provider (VSP) </br>- Network VSP </br>- Virtual Hard Disk (VHD) parser </br>- Azure Networking Virtual Filtering Platform (VFP) and Virtual Network (VNet)|
|**Host partition user-mode components**|Worker process in user mode: limited compromise with ability to attack Host and elevate privileges|- Virtual devices (VDEVs)|
-To protect these attack surfaces, Microsoft has established industry-leading processes and tooling that provide high confidence in the Azure isolation guarantee. As described in the *[Security assurance processes and practices](#security-assurance-processes-and-practices)* section later in this article, the approach includes purpose-built fuzzing, penetration testing, security development lifecycle, mandatory security training, security reviews, security intrusion detection based on Guest – Host threat indicators, and automated build alerting of changes to the attack surface area. This mature multi-dimensional assurance process helps augment the isolation guarantees provided by the Azure Hypervisor by mitigating the risk of security vulnerabilities.
+To protect these attack surfaces, Microsoft has established industry-leading processes and tooling that provide high confidence in the Azure isolation guarantee. As described in the *[Security assurance processes and practices](#security-assurance-processes-and-practices)* section later in this article, the approach includes purpose-built fuzzing, penetration testing, security development lifecycle, mandatory security training, security reviews, security intrusion detection based on Guest – Host threat indicators, and automated build alerting of changes to the attack surface area. This mature multi-dimensional assurance process helps augment the isolation guarantees provided by the Azure Hypervisor by mitigating the risk of security vulnerabilities.
> [!NOTE]
-> Azure has adopted an industry-leading approach to ensure Hypervisor-based tenant separation that has been strengthened and improved over two decades of Microsoft investments in Hyper-V technology for virtual machine isolation. The outcome of this approach is a robust Hypervisor that helps ensure tenant separation via 1) strongly defined security boundaries, 2) defense-in-depth exploit mitigations, and 3) strong security assurance processes.
+> Azure has adopted an industry-leading approach to ensure Hypervisor-based tenant separation that has been strengthened and improved over two decades of Microsoft investments in Hyper-V technology for virtual machine isolation. The outcome of this approach is a robust Hypervisor that helps ensure tenant separation via 1) strongly defined security boundaries, 2) defense-in-depth exploit mitigations, and 3) strong security assurance processes.
#### Drawbridge isolation
-For services that provide small units of processing using customer code, requests from multiple tenants are executed within a single VM and isolated using Microsoft [Drawbridge](https://www.microsoft.com/research/project/drawbridge/) technology. To provide security isolation, Drawbridge runs a user process together with a light-weight version of the Windows kernel (Library OS) inside a *pico-process*. A pico-process is a lightweight, secure isolation container with minimal kernel API surface and no direct access to services or resources of the Host system. The only external calls the pico-process can make are to the Drawbridge Security Monitor through the Drawbridge Application Binary Interface (ABI), as shown in Figure 8.
+For services that provide small units of processing using customer code, requests from multiple tenants are executed within a single VM and isolated using Microsoft [Drawbridge](https://www.microsoft.com/research/project/drawbridge/) technology. To provide security isolation, Drawbridge runs a user process together with a light-weight version of the Windows kernel (Library OS) inside a *pico-process*. A pico-process is a lightweight, secure isolation container with minimal kernel API surface and no direct access to services or resources of the Host system. The only external calls the pico-process can make are to the Drawbridge Security Monitor through the Drawbridge Application Binary Interface (ABI), as shown in Figure 8.
:::image type="content" source="./media/secure-isolation-fig8.png" alt-text="Process isolation using Drawbridge"::: **Figure 8.** Process isolation using Drawbridge
-The Security Monitor is divided into a system device driver and a user-mode component. The ABI is the interface between the Library OS and the Host. The entire interface consists of a closed set of fewer than 50 stateless function calls:
+The Security Monitor is divided into a system device driver and a user-mode component. The ABI is the interface between the Library OS and the Host. The entire interface consists of a closed set of fewer than 50 stateless function calls:
- Down calls from the pico-process to the Host OS support abstractions such as threads, virtual memory, and I/O streams.
- Up calls into the pico-process perform initialization, return exception information, and run in a new thread.
-The semantics of the interface are fixed and support the general abstractions that applications require from any operating system. This design enables the Library OS and the Host to evolve separately.
+The semantics of the interface are fixed and support the general abstractions that applications require from any operating system. This design enables the Library OS and the Host to evolve separately.
The ABI is implemented within two components:

- The Platform Adaptation Layer (PAL) runs as part of the pico-process.
- The host implementation runs as part of the Host.
-Pico-processes are grouped into isolation units called *sandboxes*. The sandbox defines the applications, file system, and external resources available to the pico-processes. When a process running inside a pico-process creates a new child process, it is run with its own Library OS in a separate pico-process inside the same sandbox. Each sandbox communicates with the Security Monitor and cannot communicate with other sandboxes except via allowed I/O channels (sockets, named pipes, and so on), which must be explicitly enabled in the configuration, given the default opt-in approach, based on service needs. The outcome is that code running inside a pico-process can only access its own resources and cannot directly attack the Host system or any colocated sandboxes. It is only able to affect objects inside its own sandbox.
+Pico-processes are grouped into isolation units called *sandboxes*. The sandbox defines the applications, file system, and external resources available to the pico-processes. When a process running inside a pico-process creates a new child process, it is run with its own Library OS in a separate pico-process inside the same sandbox. Each sandbox communicates with the Security Monitor and cannot communicate with other sandboxes except via allowed I/O channels (sockets, named pipes, and so on), which must be explicitly enabled in the configuration, given the default opt-in approach, based on service needs. The outcome is that code running inside a pico-process can only access its own resources and cannot directly attack the Host system or any colocated sandboxes. It is only able to affect objects inside its own sandbox.
-When the pico-process needs system resources, it must call into the Drawbridge host to request them. The normal path for a virtual user process would be to call the Library OS to request resources, and the Library OS would then call into the ABI. Unless the policy for resource allocation is set up in the driver itself, the Security Monitor handles the ABI request by checking policy to see if the request is allowed and then servicing the request. This mechanism is used for all system primitives, thereby ensuring that the code running in the pico-process cannot abuse the resources of the Host machine.
+When the pico-process needs system resources, it must call into the Drawbridge host to request them. The normal path for a virtual user process would be to call the Library OS to request resources, and the Library OS would then call into the ABI. Unless the policy for resource allocation is set up in the driver itself, the Security Monitor handles the ABI request by checking policy to see if the request is allowed and then servicing the request. This mechanism is used for all system primitives, thereby ensuring that the code running in the pico-process cannot abuse the resources of the Host machine.
-In addition to being isolated inside sandboxes, pico-processes are also substantially isolated from each other. Each pico-process resides in its own virtual memory address space and runs its own copy of the Library OS with its own user-mode kernel. Each time a user process is launched in a Drawbridge sandbox, a fresh Library OS instance is booted. While this task is more time-consuming than launching a non-isolated process on Windows, it is substantially faster than booting a VM while still accomplishing logical isolation.
+In addition to being isolated inside sandboxes, pico-processes are also substantially isolated from each other. Each pico-process resides in its own virtual memory address space and runs its own copy of the Library OS with its own user-mode kernel. Each time a user process is launched in a Drawbridge sandbox, a fresh Library OS instance is booted. While this task is more time-consuming than launching a non-isolated process on Windows, it is substantially faster than booting a VM while still accomplishing logical isolation.
-A normal Windows process can call more than 1200 functions that result in access to the Windows kernel; however, the entire interface for a pico-process consists of fewer than 50 calls down to the Host. Most application requests for operating system services are handled by the Library OS within the address space of the pico-process. By providing a significantly smaller interface to the kernel, Drawbridge creates a more secure and isolated operating environment in which applications are much less vulnerable to changes in the Host system and incompatibilities introduced by new OS releases. More importantly, a Drawbridge pico-process is a strongly isolated container within which untrusted code from even the most malicious sources can be run without risk of compromising the Host system. The Host assumes that no code running within the pico-process can be trusted. The Host validates all requests from the pico-process with security checks.
+A normal Windows process can call more than 1200 functions that result in access to the Windows kernel; however, the entire interface for a pico-process consists of fewer than 50 calls down to the Host. Most application requests for operating system services are handled by the Library OS within the address space of the pico-process. By providing a significantly smaller interface to the kernel, Drawbridge creates a more secure and isolated operating environment in which applications are much less vulnerable to changes in the Host system and incompatibilities introduced by new OS releases. More importantly, a Drawbridge pico-process is a strongly isolated container within which untrusted code from even the most malicious sources can be run without risk of compromising the Host system. The Host assumes that no code running within the pico-process can be trusted. The Host validates all requests from the pico-process with security checks.
-Like a Virtual Machine, the pico-process is much easier to secure than a traditional OS interface because it is significantly smaller, stateless, and has fixed and easily described semantics. Another benefit of the small ABI/driver syscall interface is the ability to audit and fuzz the driver code with little effort. For example, syscall fuzzers can fuzz the ABI with high coverage numbers in a relatively short amount of time.
+Like a virtual machine, the pico-process is much easier to secure than a traditional OS interface because it is significantly smaller, stateless, and has fixed and easily described semantics. Another benefit of the small ABI/driver syscall interface is the ability to audit and fuzz the driver code with little effort. For example, syscall fuzzers can fuzz the ABI with high coverage numbers in a relatively short amount of time.
#### User context-based isolation
-In cases where an Azure service comprises Microsoft-controlled code and customer code is not allowed to run, the isolation is provided by a user context. These services accept only user configuration inputs and data for processing – arbitrary code is not allowed. For these services, a user context is provided to establish the data that can be accessed and what Azure role-based access control (Azure RBAC) operations are allowed. This context is established by Azure Active Directory (Azure AD) as described earlier in the *[Identity-based isolation](#identity-based-isolation)* section. Once the user has been identified and authorized, the Azure service creates an application user context that is attached to the request as it moves through execution, providing assurance that user operations are separated and properly isolated.
+In cases where an Azure service comprises Microsoft-controlled code and customer code is not allowed to run, the isolation is provided by a user context. These services accept only user configuration inputs and data for processing – arbitrary code is not allowed. For these services, a user context is provided to establish the data that can be accessed and what Azure role-based access control (Azure RBAC) operations are allowed. This context is established by Azure Active Directory (Azure AD) as described earlier in the *[Identity-based isolation](#identity-based-isolation)* section. Once the user has been identified and authorized, the Azure service creates an application user context that is attached to the request as it moves through execution, providing assurance that user operations are separated and properly isolated.
### Physical isolation

In addition to robust logical compute isolation available by design to all Azure tenants, customers who desire physical compute isolation can utilize Azure Dedicated Host or Isolated Virtual Machines, which are both dedicated to a single customer.

#### Azure Dedicated Host
-[Azure Dedicated Host](../virtual-machines/dedicated-hosts.md) provides physical servers that can host one or more Azure VMs and are dedicated to one Azure subscription. Customers can provision dedicated hosts within a region, availability zone, and fault domain. They can then place [Windows](../virtual-machines/windows/overview.md), [Linux](../virtual-machines/linux/overview.md), and [SQL Server on Azure](https://azure.microsoft.com/services/virtual-machines/sql-server/) VMs directly into provisioned hosts using whatever configuration best meets their needs. Dedicated Host provides hardware isolation at the physical server level, enabling customers to place their Azure VMs on an isolated and dedicated physical server that runs only their organization's workloads to meet corporate compliance requirements.
+[Azure Dedicated Host](../virtual-machines/dedicated-hosts.md) provides physical servers that can host one or more Azure VMs and are dedicated to one Azure subscription. Customers can provision dedicated hosts within a region, availability zone, and fault domain. They can then place [Windows](../virtual-machines/windows/overview.md), [Linux](../virtual-machines/linux/overview.md), and [SQL Server on Azure](https://azure.microsoft.com/services/virtual-machines/sql-server/) VMs directly into provisioned hosts using whatever configuration best meets their needs. Dedicated Host provides hardware isolation at the physical server level, enabling customers to place their Azure VMs on an isolated and dedicated physical server that runs only their organization's workloads to meet corporate compliance requirements.
> [!NOTE]
> Customers can deploy a dedicated host using the **[Azure portal](../virtual-machines/dedicated-hosts-portal.md)**, Azure **[PowerShell](../virtual-machines/windows/dedicated-hosts-powershell.md)**, and Azure **[Command-Line Interface](../virtual-machines/linux/dedicated-hosts-cli.md)** (CLI).
-Customers can deploy both Windows and Linux virtual machines into dedicated hosts by selecting the server and CPU type, number of cores, and additional features. Dedicated Host enables control over platform maintenance events by allowing customers to opt in to a maintenance window to reduce potential impact to their provisioned services. Most maintenance events have little to no impact on customer VMs; however, customers in highly regulated industries or with sensitive workloads may want to have control over any potential maintenance impact.
+Customers can deploy both Windows and Linux virtual machines into dedicated hosts by selecting the server and CPU type, number of cores, and additional features. Dedicated Host enables control over platform maintenance events by allowing customers to opt in to a maintenance window to reduce potential impact to their provisioned services. Most maintenance events have little to no impact on customer VMs; however, customers in highly regulated industries or with sensitive workloads may want to have control over any potential maintenance impact.
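To illustrate the provisioning flow, the following Azure CLI sketch creates a host group, places a dedicated host in it, and deploys a VM onto that host. The resource names, region, and the `DSv3-Type1` SKU are illustrative assumptions; substitute values that match your subscription and capacity needs.

```azurecli
# Create a host group in one availability zone with two fault domains
az vm host group create --resource-group myResourceGroup --name myHostGroup \
    --location eastus --zone 1 --platform-fault-domain-count 2

# Provision a dedicated physical host inside the group
az vm host create --resource-group myResourceGroup --host-group myHostGroup \
    --name myHost --sku DSv3-Type1 --platform-fault-domain 0

# Look up the host's resource ID and place a VM directly on it;
# the VM size must belong to the same series as the host SKU
HOST_ID=$(az vm host show --resource-group myResourceGroup \
    --host-group myHostGroup --name myHost --query id --output tsv)
az vm create --resource-group myResourceGroup --name myVM \
    --image UbuntuLTS --size Standard_D2s_v3 --host "$HOST_ID"
```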
> [!NOTE]
-> Microsoft provides detailed customer guidance on **[Windows](../virtual-machines/windows/quick-create-portal.md)** and **[Linux](../virtual-machines/linux/quick-create-cli.md)** Azure Virtual Machine provisioning using the Azure portal, Azure PowerShell, and Azure CLI.
+> Microsoft provides detailed customer guidance on **[Windows](../virtual-machines/windows/quick-create-portal.md)** and **[Linux](../virtual-machines/linux/quick-create-portal.md)** Azure Virtual Machine provisioning using the Azure portal, Azure PowerShell, and Azure CLI.
Table 5 summarizes available security guidance for customer virtual machines provisioned in Azure.
|**Linux**|[Secure policies](../virtual-machines/security-policy.md)|[Azure Disk Encryption](../virtual-machines/linux/disk-encryption-overview.md)|[Built-in security controls](../virtual-machines/linux/security-baseline.md)|[Security recommendations](../virtual-machines/security-recommendations.md)|

#### Isolated Virtual Machines
-Azure Compute offers Virtual Machine sizes that are [isolated to a specific hardware type](../virtual-machines/isolation.md) and dedicated to a single customer. These Virtual Machine instances allow customer workloads to be deployed on dedicated physical servers. Utilizing Isolated VMs essentially guarantees that a customer VM will be the only one running on that specific server node. Customers can also choose to further subdivide the resources on these Isolated VMs by using [Azure support for nested Virtual Machines](https://azure.microsoft.com/blog/nested-virtualization-in-azure/).
+Azure Compute offers virtual machine sizes that are [isolated to a specific hardware type](../virtual-machines/isolation.md) and dedicated to a single customer. These VM instances allow customer workloads to be deployed on dedicated physical servers. Utilizing Isolated VMs essentially guarantees that a customer VM will be the only one running on that specific server node. Customers can also choose to further subdivide the resources on these Isolated VMs by using [Azure support for nested Virtual Machines](https://azure.microsoft.com/blog/nested-virtualization-in-azure/).
## Networking isolation
-The logical isolation of customer infrastructure in a public cloud is [fundamental to maintaining security](https://azure.microsoft.com/resources/azure-network-security/). The overarching principle for a virtualized solution is to allow only connections and communications that are necessary for that virtualized solution to operate, blocking all other ports and connections by default. Azure [Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet) helps ensure that each customer's private network traffic is logically isolated from traffic belonging to other customers. Virtual Machines (VMs) in one VNet cannot communicate directly with VMs in a different VNet even if both VNets are created by the same customer. [Networking isolation](../security/fundamentals/isolation-choices.md#networking-isolation) ensures that communication between customer VMs remains private within a VNet. Customers can connect their VNets via [VNet peering](../virtual-network/virtual-network-peering-overview.md) or [VPN gateways](../vpn-gateway/vpn-gateway-about-vpngateways.md), depending on their connectivity requirements, including bandwidth, latency, and encryption.
+The logical isolation of customer infrastructure in a public cloud is [fundamental to maintaining security](https://azure.microsoft.com/resources/azure-network-security/). The overarching principle for a virtualized solution is to allow only connections and communications that are necessary for that virtualized solution to operate, blocking all other ports and connections by default. Azure [Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet) helps ensure that each customer's private network traffic is logically isolated from traffic belonging to other customers. Virtual Machines (VMs) in one VNet cannot communicate directly with VMs in a different VNet even if both VNets are created by the same customer. [Networking isolation](../security/fundamentals/isolation-choices.md#networking-isolation) ensures that communication between customer VMs remains private within a VNet. Customers can connect their VNets via [VNet peering](../virtual-network/virtual-network-peering-overview.md) or [VPN gateways](../vpn-gateway/vpn-gateway-about-vpngateways.md), depending on their connectivity requirements, including bandwidth, latency, and encryption.
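As a concrete example, the following Azure CLI sketch creates two VNets and then connects them with bidirectional VNet peering; all names and address prefixes are illustrative assumptions. Until both peering links exist, VMs in the two VNets cannot reach each other, which reflects the default isolation described above.

```azurecli
# Two VNets with non-overlapping address spaces
az network vnet create --resource-group myResourceGroup --name vnetA \
    --address-prefix 10.1.0.0/16 --subnet-name default --subnet-prefix 10.1.0.0/24
az network vnet create --resource-group myResourceGroup --name vnetB \
    --address-prefix 10.2.0.0/16 --subnet-name default --subnet-prefix 10.2.0.0/24

# Peering is directional, so a link is created in each direction
az network vnet peering create --resource-group myResourceGroup \
    --name vnetA-to-vnetB --vnet-name vnetA --remote-vnet vnetB --allow-vnet-access
az network vnet peering create --resource-group myResourceGroup \
    --name vnetB-to-vnetA --vnet-name vnetB --remote-vnet vnetA --allow-vnet-access
```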
This section describes how Azure provides isolation of network traffic among tenants and enforces that isolation with cryptographic certainty.

### Separation of tenant network traffic
-Virtual networks (VNets) provide isolation of network traffic between tenants as part of their fundamental design. A customer subscription can contain multiple logically isolated private networks and include firewall, load balancing, and network address translation capabilities. Each VNet is isolated from other VNets by default. Multiple deployments inside the same subscription can be placed on the same VNet, and then communicate with each other through private IP addresses.
+Virtual networks (VNets) provide isolation of network traffic between tenants as part of their fundamental design. A customer subscription can contain multiple logically isolated private networks and include firewall, load balancing, and network address translation capabilities. Each VNet is isolated from other VNets by default. Multiple deployments inside the same subscription can be placed on the same VNet, and then communicate with each other through private IP addresses.
-Network access to VMs is limited by packet filtering at the network edge, at load balancers, and at the Host OS level. Customers can additionally configure their host firewalls to further limit connectivity, specifying for each listening port whether connections are accepted from the Internet or only from role instances within the same cloud service or VNet.
+Network access to VMs is limited by packet filtering at the network edge, at load balancers, and at the Host OS level. Customers can additionally configure their host firewalls to further limit connectivity, specifying for each listening port whether connections are accepted from the Internet or only from role instances within the same cloud service or VNet.
Azure provides network isolation for each deployment and enforces the following rules:

- Traffic between VMs always traverses through trusted packet filters.
- Protocols such as Address Resolution Protocol (ARP), Dynamic Host Configuration Protocol (DHCP), and other OSI Layer-2 traffic from a VM are controlled using rate-limiting and anti-spoofing protection.
- VMs cannot capture any traffic on the network that is not intended for them.
+- Customer VMs cannot send traffic to Azure private interfaces and infrastructure services, or to other customers' VMs. Customer VMs can only communicate with other VMs owned or controlled by the same customer and with Azure infrastructure service endpoints meant for public communications.
+- When customers put VMs on a VNet, those VMs get their own address spaces that are invisible, and hence, not reachable from VMs outside of a deployment or virtual network (unless configured to be visible via public IP addresses). Customer environments are open only through the ports that customers specify for public access; if the VM is defined to have a public IP address, then all ports are open for public access.
#### Packet flow and network path protection
-Azure's hyperscale network is designed to provide uniform high capacity between servers, performance isolation between services (including customers), and Ethernet Layer-2 semantics. Azure uses several networking implementations to achieve these goals: flat addressing to allow service instances to be placed anywhere in the network; load balancing to spread traffic uniformly across network paths; and end-system-based address resolution to scale to large server pools without introducing complexity to the network control plane.
+Azure's hyperscale network is designed to provide uniform high capacity between servers, performance isolation between services (including customers), and Ethernet Layer-2 semantics. Azure uses several networking implementations to achieve these goals: flat addressing to allow service instances to be placed anywhere in the network; load balancing to spread traffic uniformly across network paths; and end-system-based address resolution to scale to large server pools without introducing complexity to the network control plane.
These implementations give each service the illusion that all the servers assigned to it, and only those servers, are connected by a single non-interfering Ethernet switch – a Virtual Layer 2 (VL2) – and maintain this illusion even as the size of each service varies from one server to hundreds of thousands. This VL2 implementation achieves traffic performance isolation, ensuring that the traffic of one service cannot be affected by the traffic of any other service, as if each service were connected by a separate physical switch.
This section explains how packets flow through the Azure network, and how the to
The Azure network uses [two different IP-address families](/windows-server/networking/sdn/technologies/hyper-v-network-virtualization/hyperv-network-virtualization-technical-details-windows-server#packet-encapsulation):
+- **Customer address (CA)** is the customer defined/chosen VNet IP address, also referred to as Virtual IP (VIP). The network infrastructure operates using CAs, which are externally routable. All switches and interfaces are assigned CAs, and switches run an IP-based (Layer-3) link-state routing protocol that disseminates only these CAs. This design allows switches to obtain the complete switch-level topology, as well as forward packets encapsulated with CAs along shortest paths.
+- **Provider address (PA)** is the Azure assigned internal fabric address that is not visible to users and is also referred to as Dynamic IP (DIP). No traffic goes directly from the Internet to a server; all traffic from the Internet must go through a Software Load Balancer (SLB) and be encapsulated to protect the internal Azure address space by only routing packets to valid Azure internal IP addresses and ports. Network Address Translation (NAT) separates internal network traffic from external traffic. Internal traffic uses [RFC 1918](https://datatracker.ietf.org/doc/rfc1918/) address space or private address space – the provider addresses (PAs) – that is not externally routable. The translation is performed at the SLBs. Customer addresses (CAs) that are externally routable are translated into internal provider addresses (PAs) that are only routable within Azure. These addresses remain unaltered no matter how their servers' locations change due to virtual-machine migration or reprovisioning.
-Each PA is associated with a CA, which is the identifier of the Top of Rack (ToR) switch to which the server is connected. VL2 uses a scalable, reliable directory system to store and maintain the mapping of PAs to CAs, and this mapping is created when servers are provisioned to a service and assigned PA addresses. An agent running in the network stack on every server, called the VL2 agent, invokes the directory system's resolution service to learn the actual location of the destination and then tunnels the original packet there.
+Each PA is associated with a CA, which is the identifier of the Top of Rack (ToR) switch to which the server is connected. VL2 uses a scalable, reliable directory system to store and maintain the mapping of PAs to CAs, and this mapping is created when servers are provisioned to a service and assigned PA addresses. An agent running in the network stack on every server, called the VL2 agent, invokes the directory system's resolution service to learn the actual location of the destination and then tunnels the original packet there.
-Azure assigns servers IP addresses that act as names alone, with no topological significance. Azure's VL2 addressing scheme separates these server names (PAs) from their locations (CAs). The crux of offering Layer-2 semantics is having servers believe they share a single large IP subnet – i.e., the entire PA space – with other servers in the same service, while eliminating the Address Resolution Protocol (ARP) and Dynamic Host Configuration Protocol (DHCP) scaling bottlenecks that plague large Ethernet deployments.
+Azure assigns servers IP addresses that act as names alone, with no topological significance. Azure's VL2 addressing scheme separates these server names (PAs) from their locations (CAs). The crux of offering Layer-2 semantics is having servers believe they share a single large IP subnet – i.e., the entire PA space – with other servers in the same service, while eliminating the Address Resolution Protocol (ARP) and Dynamic Host Configuration Protocol (DHCP) scaling bottlenecks that plague large Ethernet deployments.
-Figure 9 depicts a sample packet flow where sender S sends packets to destination D via a randomly chosen intermediate switch using IP-in-IP encapsulation. PAs are from 20/8, and CAs are from 10/8. H(ft) denotes a hash of the [5-tuple](https://www.techopedia.com/definition/28190/5-tuple), which consists of source IP, source port, destination IP, destination port, and protocol type. The ToR translates the PA to the CA and sends the packet to the Intermediate switch, which sends it to the destination CA ToR switch, which translates it to the destination PA.
+Figure 9 depicts a sample packet flow where sender S sends packets to destination D via a randomly chosen intermediate switch using IP-in-IP encapsulation. PAs are from 20/8, and CAs are from 10/8. H(ft) denotes a hash of the [5-tuple](https://www.techopedia.com/definition/28190/5-tuple), which consists of source IP, source port, destination IP, destination port, and protocol type. The ToR translates the PA to the CA and sends the packet to the Intermediate switch, which sends it to the destination CA ToR switch, which translates it to the destination PA.
:::image type="content" source="./media/secure-isolation-fig9.png" alt-text="Sample packet flow"::: **Figure 9.** Sample packet flow
-A server cannot send packets to a PA if the directory service refuses to provide it with a CA through which it can route its packets, which means that the directory service enforces access control policies. Further, since the directory system knows which server is making the request when handling a lookup, it can **enforce fine-grained isolation policies**. For example, it could enforce the policy that only servers belonging to the same service can communicate with each other.
+A server cannot send packets to a PA if the directory service refuses to provide it with a CA through which it can route its packets, which means that the directory service enforces access control policies. Further, since the directory system knows which server is making the request when handling a lookup, it can **enforce fine-grained isolation policies**. For example, it could enforce a policy that only servers belonging to the same service can communicate with each other.
#### Traffic flow patterns
-To route traffic between servers, which use PA addresses, on an underlying network that knows routes for CA addresses, the VL2 agent on each server captures packets from the host, and encapsulates them with the CA address of the ToR switch of the destination. Once the packet arrives at the CA (i.e., the destination ToR switch), the destination ToR switch decapsulates the packet and delivers it to the destination PA carried in the inner header. The packet is first delivered to one of the Intermediate switches, decapsulated by the switch, delivered to the ToR's CA, decapsulated again, and finally sent to the destination. This approach is depicted in Figure 10 using two possible traffic patterns: 1) external traffic (orange line) traversing over ExpressRoute or the Internet to a VNet, and 2) internal traffic (blue line) between two VNets. Both traffic flows follow a similar pattern to isolate and protect network traffic.
+To route traffic between servers, which use PA addresses, on an underlying network that knows routes for CA addresses, the VL2 agent on each server captures packets from the host, and encapsulates them with the CA address of the ToR switch of the destination. Once the packet arrives at the CA (i.e., the destination ToR switch), the destination ToR switch decapsulates the packet and delivers it to the destination PA carried in the inner header. The packet is first delivered to one of the Intermediate switches, decapsulated by the switch, delivered to the ToR's CA, decapsulated again, and finally sent to the destination. This approach is depicted in Figure 10 using two possible traffic patterns: 1) external traffic (orange line) traversing over ExpressRoute or the Internet to a VNet, and 2) internal traffic (blue line) between two VNets. Both traffic flows follow a similar pattern to isolate and protect network traffic.
:::image type="content" source="./media/secure-isolation-fig10.png" alt-text="Separation of tenant network traffic using VNets"::: **Figure 10.** Separation of tenant network traffic using VNets
-**External traffic (orange line)** – For external traffic, Azure provides multiple layers of assurance to enforce isolation depending on traffic patterns. When a customer places a public IP on their VNet gateway, traffic from the public Internet or customer on-premises network that is destined for that IP address will be routed to an Internet Edge Router. Alternatively, when a customer establishes private peering over an ExpressRoute connection, it is connected with an Azure VNet via a VNet Gateway. This set-up aligns connectivity from the physical circuit and makes the private IP address space from the on-premises location addressable. Azure then uses Border Gateway Protocol (BGP) to share routing details with the on-premises network to establish end-to-end connectivity. When communication begins with a resource within the VNet, the network traffic traverses as normal until it reaches a Microsoft ExpressRoute Edge (MSEE) Router. In both cases, VNets provide the means for Azure VMs to act as part of the customer's on-premises network. A cryptographically protected [IPsec/IKE tunnel](../vpn-gateway/vpn-gateway-about-vpn-devices.md#ipsec) is established between Azure and the customer's internal network (e.g., via [Azure VPN Gateway](../vpn-gateway/tutorial-site-to-site-portal.md) or [Azure ExpressRoute Private Peering](../virtual-wan/vpn-over-expressroute.md)), enabling the VM to connect securely to the customer's on-premises resources as though it were directly on that network.
+**External traffic (orange line)** – For external traffic, Azure provides multiple layers of assurance to enforce isolation depending on traffic patterns. When a customer places a public IP on their VNet gateway, traffic from the public Internet or customer on-premises network that is destined for that IP address will be routed to an Internet Edge Router. Alternatively, when a customer establishes private peering over an ExpressRoute connection, it is connected with an Azure VNet via a VNet Gateway. This set-up aligns connectivity from the physical circuit and makes the private IP address space from the on-premises location addressable. Azure then uses Border Gateway Protocol (BGP) to share routing details with the on-premises network to establish end-to-end connectivity. When communication begins with a resource within the VNet, the network traffic traverses as normal until it reaches a Microsoft ExpressRoute Edge (MSEE) Router. In both cases, VNets provide the means for Azure VMs to act as part of the customer's on-premises network. A cryptographically protected [IPsec/IKE tunnel](../vpn-gateway/vpn-gateway-about-vpn-devices.md#ipsec) is established between Azure and the customer's internal network (e.g., via [Azure VPN Gateway](../vpn-gateway/tutorial-site-to-site-portal.md) or [Azure ExpressRoute Private Peering](../virtual-wan/vpn-over-expressroute.md)), enabling the VM to connect securely to the customer's on-premises resources as though it were directly on that network.
-At the Internet Edge Router or the MSEE Router, the packet is encapsulated using Generic Routing Encapsulation (GRE). This encapsulation uses a unique identifier specific to the VNet destination and the destination address, which is used to appropriately route the traffic to the identified VNet. Upon reaching the VNet Gateway, which is a special VNet used only to accept traffic from outside of an Azure VNet, the encapsulation is verified by the Azure network fabric to ensure: a) the endpoint receiving the packet is a match to the unique VNet ID used to route the data, and b) the destination address requested exists in this VNet. Once verified, the packet is routed as internal traffic from the VNet Gateway to the final requested destination address within the VNet. This approach ensures that traffic from external networks travels only to the Azure VNet for which it is destined, enforcing isolation.
+At the Internet Edge Router or the MSEE Router, the packet is encapsulated using Generic Routing Encapsulation (GRE). This encapsulation uses a unique identifier specific to the VNet destination and the destination address, which is used to appropriately route the traffic to the identified VNet. Upon reaching the VNet Gateway, which is a special VNet used only to accept traffic from outside of an Azure VNet, the encapsulation is verified by the Azure network fabric to ensure: a) the endpoint receiving the packet is a match to the unique VNet ID used to route the data, and b) the destination address requested exists in this VNet. Once verified, the packet is routed as internal traffic from the VNet Gateway to the final requested destination address within the VNet. This approach ensures that traffic from external networks travels only to the Azure VNet for which it is destined, enforcing isolation.
-**Internal traffic (blue line)** – Internal traffic also uses GRE encapsulation/tunneling. When two resources in an Azure VNet attempt to establish communications with each other, the Azure network fabric reaches out to the Azure VNet routing directory service that is part of the Azure network fabric. The directory services use the customer address (CA) and the requested destination address to determine the provider address (PA). This information, including the VNet identifier, CA, and PA, is then used to encapsulate the traffic with GRE. The Azure network uses this information to properly route the encapsulated data to the appropriate Azure host using the PA. The encapsulation is reviewed by the Azure network fabric to confirm: (1) the PA is a match, (2) the CA is located at this PA, and (3) the VNet identifier is a match. Once all three are verified, the encapsulation is removed and routed to the CA as normal traffic (e.g., to a VM endpoint). This approach provides VNet isolation assurance based on correct traffic routing between cloud services.
+**Internal traffic (blue line)** – Internal traffic also uses GRE encapsulation/tunneling. When two resources in an Azure VNet attempt to establish communications with each other, the Azure network fabric reaches out to the Azure VNet routing directory service that is part of the Azure network fabric. The directory services use the customer address (CA) and the requested destination address to determine the provider address (PA). This information, including the VNet identifier, CA, and PA, is then used to encapsulate the traffic with GRE. The Azure network uses this information to properly route the encapsulated data to the appropriate Azure host using the PA. The encapsulation is reviewed by the Azure network fabric to confirm: (1) the PA is a match, (2) the CA is located at this PA, and (3) the VNet identifier is a match. Once all three are verified, the encapsulation is removed and routed to the CA as normal traffic (e.g., to a VM endpoint). This approach provides VNet isolation assurance based on correct traffic routing between cloud services.
-Azure VNets implement several mechanisms to ensure secure traffic between tenants. These mechanisms align with existing industry standards and security practices and prevent well-known attack vectors, including:
+Azure VNets implement several mechanisms to ensure secure traffic between tenants. These mechanisms align with existing industry standards and security practices and prevent well-known attack vectors, including:
+- **Prevent IP address spoofing** – Whenever encapsulated traffic is transmitted by a VNet, the service reverifies the information on the receiving end of the transmission. The traffic is looked up and encapsulated independently at the start of the transmission, and reverified at the receiving endpoint to ensure the transmission was performed appropriately. This verification is done with an internal VNet feature called SpoofGuard, which verifies that the source and destination are valid and allowed to communicate, thereby preventing mismatches in expected encapsulation patterns that might otherwise permit spoofing. The GRE encapsulation processes prevent spoofing because any GRE encapsulation and encryption not done by the Azure network fabric is treated as dropped traffic.
+- **Provide network segmentation across customers with overlapping network spaces** – Azure VNet's implementation relies on established tunneling standards such as GRE, which in turn allows the use of customer-specific unique identifiers (VNet IDs) throughout the cloud. The VNet identifiers are used as scoping identifiers. This approach ensures that a customer always operates within their own unique address space, even when address spaces overlap between tenants and with the Azure network fabric. Anything that has not been encapsulated with a valid VNet ID is blocked within the Azure network fabric. In the example described above, any encapsulation not performed by the Azure network fabric is discarded.
+- **Prevent traffic from crossing between VNets** – Preventing traffic from crossing between VNets is done through the same mechanisms that handle address overlap and prevent spoofing. Traffic crossing between VNets is rendered infeasible by using unique VNet IDs established per tenant in combination with verification of all traffic at the source and destination. Users do not have access to the underlying transmission mechanisms that rely on these IDs to perform the encapsulation. Consequently, any attempt to encapsulate and simulate these mechanisms would lead to dropped traffic.
-In addition to these key protections, all unexpected traffic originating from the Internet is dropped by default. Any packet entering the Azure network will first encounter an Edge router. Edge routers intentionally allow all inbound traffic into the Azure network except spoofed traffic. This basic traffic filtering protects the Azure network from known malicious traffic. Azure also implements DDoS protection at the network layer, collecting logs to throttle or block traffic based on real-time and historical data analysis, and mitigates attacks on demand.
+In addition to these key protections, all unexpected traffic originating from the Internet is dropped by default. Any packet entering the Azure network will first encounter an Edge router. Edge routers intentionally allow all inbound traffic into the Azure network except spoofed traffic. This basic traffic filtering protects the Azure network from known malicious traffic. Azure also implements DDoS protection at the network layer, collecting logs to throttle or block traffic based on real-time and historical data analysis, and mitigates attacks on demand.
-Moreover, the Azure network fabric blocks spoofed traffic originating from any IPs in the Azure network fabric space. The Azure network fabric uses GRE and Virtual Extensible LAN (VXLAN) to validate that all allowed traffic is Azure-controlled traffic and that all non-Azure GRE traffic is blocked. By using GRE tunnels and VXLAN to segment traffic using customer-unique keys, Azure meets [RFC 3809](https://datatracker.ietf.org/doc/rfc3809/) and [RFC 4110](https://datatracker.ietf.org/doc/rfc4110/). When using Azure VPN Gateway in combination with ExpressRoute, Azure meets [RFC 4111](https://datatracker.ietf.org/doc/rfc4111/) and [RFC 4364](https://datatracker.ietf.org/doc/rfc4364/). With a comprehensive approach for isolation encompassing external and internal network traffic, Azure VNets provide customers with assurance that Azure successfully routes traffic between VNets, allows proper network segmentation for tenants with overlapping address spaces, and prevents IP address spoofing.
+Moreover, the Azure network fabric blocks spoofed traffic originating from any IPs in the Azure network fabric space. The Azure network fabric uses GRE and Virtual Extensible LAN (VXLAN) to validate that all allowed traffic is Azure-controlled traffic and that all non-Azure GRE traffic is blocked. By using GRE tunnels and VXLAN to segment traffic using customer-unique keys, Azure meets [RFC 3809](https://datatracker.ietf.org/doc/rfc3809/) and [RFC 4110](https://datatracker.ietf.org/doc/rfc4110/). When using Azure VPN Gateway in combination with ExpressRoute, Azure meets [RFC 4111](https://datatracker.ietf.org/doc/rfc4111/) and [RFC 4364](https://datatracker.ietf.org/doc/rfc4364/). With a comprehensive approach for isolation encompassing external and internal network traffic, Azure VNets provide customers with assurance that Azure successfully routes traffic between VNets, allows proper network segmentation for tenants with overlapping address spaces, and prevents IP address spoofing.
-Customers are also able to utilize Azure services to further isolate and protect their resources. Using [Network Security Groups](../virtual-network/manage-network-security-group.md) (NSGs), a feature of Azure Virtual Network, customers can filter traffic by source and destination IP address, port, and protocol via multiple inbound and outbound security rules – essentially acting as a distributed virtual firewall and IP-based network Access Control List (ACL). Customers can apply an NSG to each NIC in a Virtual Machine, to the subnet that a NIC or another Azure resource is connected to, and directly to Virtual Machine Scale Sets, allowing finer control over the customer infrastructure.
+Customers are also able to utilize Azure services to further isolate and protect their resources. Using [network security groups](../virtual-network/manage-network-security-group.md) (NSGs), a feature of Azure Virtual Network, customers can filter traffic by source and destination IP address, port, and protocol via multiple inbound and outbound security rules – essentially acting as a distributed virtual firewall and IP-based network access control list (ACL). Customers can apply an NSG to each NIC in a Virtual Machine, to the subnet that a NIC or another Azure resource is connected to, and directly to Virtual Machine Scale Sets, allowing finer control over the customer infrastructure.
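For example, the following Azure CLI sketch creates an NSG, adds an inbound rule that filters by source address, protocol, and port, and associates the NSG with a subnet so the rule applies to every NIC in that subnet. The names, address range, and port are illustrative assumptions.

```azurecli
# Create a network security group
az network nsg create --resource-group myResourceGroup --name myNsg

# Allow HTTPS only from a specific address range; traffic that matches
# no allow rule falls through to the default deny rules
az network nsg rule create --resource-group myResourceGroup --nsg-name myNsg \
    --name AllowHttpsInbound --priority 100 --direction Inbound --access Allow \
    --protocol Tcp --source-address-prefixes 203.0.113.0/24 \
    --destination-port-ranges 443

# Associate the NSG with a subnet
az network vnet subnet update --resource-group myResourceGroup \
    --vnet-name vnetA --name default --network-security-group myNsg
```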
-At the infrastructure layer, Azure implements a Hypervisor firewall to protect all tenants running on top of the Hypervisor within virtual machines from unauthorized access. This Hypervisor firewall is distributed as part of the NSG rules deployed to the Host, implemented in the Hypervisor, and configured by the Fabric Controller agent, as shown in Figure 4. The Host OS instances utilize the built-in Windows Firewall to implement fine-grained ACLs at a greater granularity than router ACLs. These ACLs are maintained by the same software that provisions tenants, so they are never out of date, and are applied to Windows Firewall using the Machine Configuration File (MCF).
+At the infrastructure layer, Azure implements a Hypervisor firewall to protect all tenants running on top of the Hypervisor within virtual machines from unauthorized access. This Hypervisor firewall is distributed as part of the NSG rules deployed to the Host, implemented in the Hypervisor, and configured by the Fabric Controller agent, as shown in Figure 4. The Host OS instances utilize the built-in Windows Firewall to implement fine-grained ACLs at a greater granularity than router ACLs. These ACLs are maintained by the same software that provisions tenants, so they are never out of date, and are applied to Windows Firewall using the Machine Configuration File (MCF).
-At the top of the operating system stack is the Guest OS, which customers utilize as their operating system. By default, this layer does not allow any inbound communication to a cloud service or virtual network, essentially making it part of a private network. For PaaS Web and Worker roles, remote access is not permitted by default. Customers can enable Remote Desktop Protocol (RDP) access as an explicit option. For IaaS VMs created using the Azure portal, RDP and remote PowerShell ports are opened by default; however, port numbers are assigned randomly. For IaaS VMs created via PowerShell, RDP and remote PowerShell ports must be opened explicitly. If the administrator chooses to keep the RDP and remote PowerShell ports open to the Internet, the account allowed to create RDP and PowerShell connections should be secured with a strong password. Even if ports are open, customers can define ACLs on the public IPs for additional protection if desired.
+At the top of the operating system stack is the Guest OS, which customers utilize as their operating system. By default, this layer does not allow any inbound communication to a cloud service or virtual network, essentially making it part of a private network. For PaaS Web and Worker roles, remote access is not permitted by default. Customers can enable Remote Desktop Protocol (RDP) access as an explicit option. For IaaS VMs created using the Azure portal, RDP and remote PowerShell ports are opened by default; however, port numbers are assigned randomly. For IaaS VMs created via PowerShell, RDP and remote PowerShell ports must be opened explicitly. If the administrator chooses to keep the RDP and remote PowerShell ports open to the Internet, the account allowed to create RDP and PowerShell connections should be secured with a strong password. Even if ports are open, customers can define ACLs on the public IPs for additional protection if desired.
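As an illustration of limiting that exposure, the following Azure CLI sketch adds NSG rules that permit RDP only from a trusted on-premises range and explicitly deny it from the Internet; the prefix, rule names, and priorities are illustrative assumptions.

```azurecli
# Permit RDP only from a trusted on-premises address range
az network nsg rule create --resource-group myResourceGroup --nsg-name myNsg \
    --name AllowRdpFromOnPrem --priority 200 --direction Inbound --access Allow \
    --protocol Tcp --source-address-prefixes 198.51.100.0/24 \
    --destination-port-ranges 3389

# Explicitly deny RDP from all other Internet sources
az network nsg rule create --resource-group myResourceGroup --nsg-name myNsg \
    --name DenyRdpFromInternet --priority 210 --direction Inbound --access Deny \
    --protocol Tcp --source-address-prefixes Internet \
    --destination-port-ranges 3389
```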
### Service tags
-Customers can use Virtual network [service tags](../virtual-network/service-tags-overview.md) to achieve network isolation and protect their Azure resources from the Internet while accessing Azure services that have public endpoints. With service tags, customers can define network access controls on [network security groups](../virtual-network/network-security-groups-overview.md#security-rules) or [Azure Firewall](../firewall/service-tags.md). A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, thereby reducing the complexity of frequent updates to network security rules.
+Customers can use Virtual Network [service tags](../virtual-network/service-tags-overview.md) to achieve network isolation and protect their Azure resources from the Internet while accessing Azure services that have public endpoints. With service tags, customers can define network access controls on [network security groups](../virtual-network/network-security-groups-overview.md#security-rules) or [Azure Firewall](../firewall/service-tags.md). A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, thereby reducing the complexity of frequent updates to network security rules.
> [!NOTE]
> Customers can create inbound/outbound network security group rules to deny traffic to/from the Internet and allow traffic to/from Azure. Service tags are available for a wide range of Azure services for use in network security group rules.
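As a hedged illustration (NSG name, rule names, and priorities are hypothetical), the following Azure CLI rules allow outbound traffic to Azure Storage via its service tag while denying all other outbound Internet traffic:

```azurecli
# Allow outbound traffic to Azure Storage using the Storage service tag
az network nsg rule create --resource-group MyResourceGroup --nsg-name MyNsg \
  --name AllowStorageOutbound --priority 100 --direction Outbound --access Allow \
  --protocol Tcp --destination-address-prefixes Storage --destination-port-ranges 443

# Deny all other outbound traffic to the Internet
az network nsg rule create --resource-group MyResourceGroup --nsg-name MyNsg \
  --name DenyInternetOutbound --priority 200 --direction Outbound --access Deny \
  --protocol '*' --destination-address-prefixes Internet --destination-port-ranges '*'
```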
Azure private endpoint is a network interface that connects customers privately and securely to a service powered by Azure Private Link.
From the networking isolation standpoint, key benefits of Azure Private Link include:
+- Customers can connect their VNet to services in Azure without a public IP address at the source or destination. Azure Private Link handles the connectivity between the service and its consumers over the Microsoft global backbone network.
+- Customers can access services running in Azure from on-premises over ExpressRoute private peering, VPN tunnels, and peered virtual networks using private endpoints. Azure Private Link eliminates the need to set up public peering or traverse the Internet to reach the service.
- Customers can connect privately to services running in other Azure regions.

> [!NOTE]
+> Customers can use the Azure portal to manage private endpoint connections on Azure PaaS resources. For customer/partner owned Private Link services, Azure PowerShell and Azure CLI are the preferred methods for managing private endpoint connections.
>
> *Additional resources:*
+> - **[How to manage private endpoint connections on Azure PaaS resources](../private-link/manage-private-endpoint.md#manage-private-endpoint-connections-on-azure-paas-resources)**
+> - **[How to manage private endpoint connections on customer/partner owned Private Link service](../private-link/manage-private-endpoint.md#manage-private-endpoint-connections-on-a-customerpartner-owned-private-link-service)**
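For illustration only (the resource group, endpoint, storage account, and connection names below are placeholders), a private endpoint to a storage account can be created, and a pending connection approved, with Azure CLI along these lines:

```azurecli
# Create a private endpoint in an existing subnet for a storage account
az network private-endpoint create --resource-group MyResourceGroup \
  --name MyPrivateEndpoint --vnet-name MyVnet --subnet MySubnet \
  --private-connection-resource-id $(az storage account show \
      --resource-group MyResourceGroup --name mystorageacct --query id -o tsv) \
  --group-id blob --connection-name MyConnection

# Approve a pending private endpoint connection on the storage account
az network private-endpoint-connection approve --resource-group MyResourceGroup \
  --name MyConnection --resource-name mystorageacct \
  --type Microsoft.Storage/storageAccounts --description "Approved"
```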
### Data encryption in transit
+Azure provides many options for [encrypting data in transit](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit). **Data encryption in transit isolates customer network traffic from other traffic and helps protect data from interception**. Data in transit applies to scenarios involving data traveling between:
- Customer's end users and Azure service
+- Customer's on-premises datacenter and Azure region
+- Microsoft datacenters as part of expected Azure service operation
#### Customer's end users connection to Azure service
-**Transport Layer Security (TLS):** Azure uses the TLS protocol to help protect data when it is traveling between customers and Azure services. Most customer end users will connect to Azure over the Internet, and the precise routing of network traffic will depend on the many network providers that contribute to Internet infrastructure. As stated in the Online Services [Data Protection Addendum](https://aka.ms/DPA) (DPA), Microsoft does not control or limit the regions from which customer or customer's end users may access or move customer data.
+**Transport Layer Security (TLS):** Azure uses the TLS protocol to help protect data when it is traveling between customers and Azure services. Most customer end users will connect to Azure over the Internet, and the precise routing of network traffic will depend on the many network providers that contribute to Internet infrastructure. As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/DPA) (DPA), Microsoft does not control or limit the regions from which the customer or the customer's end users may access or move customer data.
> [!IMPORTANT]
+> Customers can increase security by enabling encryption in transit. For example, customers can use **[Azure Application Gateway](../application-gateway/ssl-overview.md)** to configure **[end-to-end encryption](../application-gateway/application-gateway-end-to-end-ssl-powershell.md)** of network traffic and rely on **[Azure Key Vault integration](../application-gateway/key-vault-certs.md)** for TLS termination.
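As one hedged example of hardening TLS on this path (the gateway name is a placeholder, and `AppGwSslPolicy20170401S` is one of the predefined Application Gateway policies that enforces a TLS 1.2 minimum):

```azurecli
# Enforce a predefined TLS policy with a TLS 1.2 minimum on an
# existing Application Gateway (names are illustrative)
az network application-gateway ssl-policy set --resource-group MyResourceGroup \
  --gateway-name MyAppGateway --policy-type Predefined \
  --policy-name AppGwSslPolicy20170401S
```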
+Across Azure services, traffic to and from the service is [protected by TLS 1.2](https://azure.microsoft.com/updates/azuretls12/) leveraging RSA-2048 for key exchange and AES-256 for data encryption. The corresponding crypto modules are FIPS 140-2 validated as part of the Microsoft [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server).
+TLS provides strong authentication, message privacy, and integrity. [Perfect Forward Secrecy](https://en.wikipedia.org/wiki/Forward_secrecy) (PFS) protects connections between customer's client systems and Microsoft cloud services by generating a unique session key for every session a customer initiates. PFS protects past sessions against potential future key compromises. This combination makes it more difficult to intercept and access data in transit.
+**In-transit encryption for VMs:** Remote sessions to Windows and Linux VMs deployed in Azure can be conducted over protocols that ensure data encryption in transit. For example, the [Remote Desktop Protocol](/windows/win32/termserv/remote-desktop-protocol) (RDP) initiated from a client computer to Windows and Linux VMs enables TLS protection for data in transit. Customers can also use [Secure Shell](../virtual-machines/linux/ssh-from-windows.md) (SSH) to connect to Linux VMs running in Azure. SSH is an encrypted connection protocol available by default for remote management of Linux VMs hosted in Azure.
> [!IMPORTANT]
+> Customers should review best practices for network security, including guidance for **[disabling RDP/SSH access to Virtual Machines](../security/fundamentals/network-best-practices.md#disable-rdpssh-access-to-virtual-machines)** from the Internet to mitigate brute force attacks to gain access to Azure Virtual Machines. Accessing VMs for remote management can then be accomplished via **[point-to-site VPN](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md)**, **[site-to-site VPN](../vpn-gateway/tutorial-site-to-site-portal.md)**, or **[ExpressRoute](../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md)**.
+**Azure Storage transactions:** When interacting with Azure Storage through the Azure portal, all transactions take place over HTTPS. Moreover, customers can configure their storage accounts to accept requests only from secure connections by setting the "[secure transfer required](../storage/common/storage-require-secure-transfer.md)" property for the storage account. The "secure transfer required" option is enabled by default when creating a Storage account in the Azure portal.
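A brief sketch of enforcing these settings with Azure CLI (the account name is a placeholder):

```azurecli
# Require HTTPS for all requests to the storage account and set a
# minimum TLS version
az storage account update --resource-group MyResourceGroup \
  --name mystorageacct --https-only true --min-tls-version TLS1_2
```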
+[Azure Files](../storage/files/storage-files-introduction.md) offers fully managed file shares in the cloud that are accessible via the industry-standard [Server Message Block](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) (SMB) protocol. By default, all Azure storage accounts [have encryption in transit enabled](../storage/files/storage-files-planning.md#encryption-in-transit). Consequently, when mounting a share over SMB or accessing it through the Azure portal (or PowerShell, CLI, and Azure SDKs), Azure Files will only allow the connection if it is made with SMB 3.0+ with encryption or over HTTPS.
#### Customer's datacenter connection to Azure region
+**VPN encryption:** [Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet) provides a means for Azure Virtual Machines (VMs) to act as part of a customer's internal (on-premises) network. With VNet, customers choose the address ranges of non-globally-routable IP addresses to be assigned to the VMs so that they will not collide with addresses the customer is using elsewhere. Customers have options to securely connect to a VNet from their on-premises infrastructure or remote locations.
+- **Site-to-Site** (IPsec/IKE VPN tunnel) – A cryptographically protected "tunnel" is established between Azure and the customer's internal network, allowing an Azure VM to connect to the customer's back-end resources as though it were directly on that network. This type of connection requires a [VPN device](../vpn-gateway/vpn-gateway-vpn-faq.md#s2s) located on-premises that has an externally facing public IP address assigned to it. Customers can use [Azure VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) to send encrypted traffic between their VNet and their on-premises infrastructure across the public Internet, e.g., a [site-to-site VPN](../vpn-gateway/tutorial-site-to-site-portal.md) relies on IPsec for transport encryption. Azure VPN Gateway supports a wide range of encryption algorithms that are FIPS 140-2 validated. Moreover, customers can configure Azure VPN Gateway to use [custom IPsec/IKE policy](../vpn-gateway/vpn-gateway-about-compliance-crypto.md) with specific cryptographic algorithms and key strengths instead of relying on the default Azure policies. IPsec encrypts data at the IP level (Network Layer 3).
+- **Point-to-Site** (VPN over SSTP, OpenVPN, and IPsec) – A secure connection is established from an individual client computer to the customer's VNet using Secure Socket Tunneling Protocol (SSTP), OpenVPN, or IPsec. As part of the [Point-to-Site VPN](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md) configuration, customers need to install a certificate and a VPN client configuration package, which allow the client computer to connect to any VM within the VNet. [Point-to-Site VPN](../vpn-gateway/point-to-site-about.md) connections do not require a VPN device or a public facing IP address.
+In addition to controlling the type of algorithm that is supported for VPN connections, Azure provides customers with the ability to enforce that all traffic leaving a VNet may only be routed through a VNet Gateway (e.g., Azure VPN Gateway). This enforcement allows customers to ensure that traffic may not leave a VNet without being encrypted. A VPN Gateway can be used for [VNet-to-VNet](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) connections while also providing a secure tunnel with IPsec/IKE. Azure VPN uses [Pre-Shared Key (PSK) authentication](../vpn-gateway/vpn-gateway-vpn-faq.md#how-does-my-vpn-tunnel-get-authenticated) whereby Microsoft generates a PSK when the VPN tunnel is created. Customers can change the autogenerated PSK to their own.
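The following Azure CLI sketch shows both controls (the connection name and key value are placeholders, and the algorithm choices are illustrative, not a recommendation):

```azurecli
# Replace the default IPsec/IKE policy on a site-to-site connection
# with specific algorithms and key strengths
az network vpn-connection ipsec-policy add --resource-group MyResourceGroup \
  --connection-name MyS2SConnection --ike-encryption AES256 --ike-integrity SHA384 \
  --dh-group DHGroup24 --ipsec-encryption GCMAES256 --ipsec-integrity GCMAES256 \
  --pfs-group PFS24 --sa-lifetime 7200 --sa-max-size 2048

# Replace the autogenerated pre-shared key with a customer-chosen value
az network vpn-connection shared-key update --resource-group MyResourceGroup \
  --connection-name MyS2SConnection --value <new-pre-shared-key>
```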
-**ExpressRoute encryption:** [ExpressRoute](../expressroute/expressroute-introduction.md) allows customers to create private connections between Microsoft datacenters and their on-premises infrastructure or colocation facility. ExpressRoute connections do not go over the public Internet and offer lower latency and higher reliability than IPsec protected VPN connections. [ExpressRoute locations](../expressroute/expressroute-locations-providers.md) are the entry points to Microsoft's global network backbone and they may or may not match the location of Azure regions. Once the network traffic enters the Microsoft backbone, it is guaranteed to traverse that private networking infrastructure instead of the public Internet. Customers can use ExpressRoute with several data [encryption options](../expressroute/expressroute-about-encryption.md), including [MACsec](https://1.ieee802.org/security/802-1ae/) that enables customers to store [MACsec encryption keys in Azure Key Vault](../expressroute/expressroute-about-encryption.md#point-to-point-encryption-by-macsec-faq). MACsec encrypts data at the Media Access Control (MAC) level, i.e., data link layer (Network Layer 2). Both AES-128 and AES-256 block ciphers are [supported for encryption](../expressroute/expressroute-about-encryption.md#which-cipher-suites-are-supported-for-encryption). Customers can use MACsec to encrypt the physical links between their network devices and Microsoft network devices when they connect to Microsoft via [ExpressRoute Direct](../expressroute/expressroute-erdirect-about.md).
+**ExpressRoute encryption:** [ExpressRoute](../expressroute/expressroute-introduction.md) allows customers to create private connections between Microsoft datacenters and their on-premises infrastructure or colocation facility. ExpressRoute connections do not go over the public Internet and offer lower latency and higher reliability than IPsec protected VPN connections. [ExpressRoute locations](../expressroute/expressroute-locations-providers.md) are the entry points to Microsoft's global network backbone and they may or may not match the location of Azure regions. Once the network traffic enters the Microsoft backbone, it is guaranteed to traverse that private networking infrastructure instead of the public Internet. Customers can use ExpressRoute with several data [encryption options](../expressroute/expressroute-about-encryption.md), including [MACsec](https://1.ieee802.org/security/802-1ae/) that enables customers to store [MACsec encryption keys in Azure Key Vault](../expressroute/expressroute-about-encryption.md#point-to-point-encryption-by-macsec-faq). MACsec encrypts data at the Media Access Control (MAC) level, i.e., data link layer (Network Layer 2). Both AES-128 and AES-256 block ciphers are [supported for encryption](../expressroute/expressroute-about-encryption.md#which-cipher-suites-are-supported-for-encryption). Customers can use MACsec to encrypt the physical links between their network devices and Microsoft network devices when they connect to Microsoft via [ExpressRoute Direct](../expressroute/expressroute-erdirect-about.md). ExpressRoute Direct allows for direct fiber connections from the customer's edge to the Microsoft Enterprise edge routers at the peering locations.
+Customers can enable IPsec in addition to MACsec on their ExpressRoute Direct ports, as shown in Figure 11. Using Azure VPN Gateway, customers can set up an [IPsec tunnel over Microsoft Peering](../expressroute/site-to-site-vpn-over-microsoft-peering.md) of the customer's ExpressRoute circuit between the customer's on-premises network and the customer's Azure VNet. MACsec secures the physical connection between the customer's on-premises network and Microsoft. IPsec secures the end-to-end connection between the customer's on-premises network and their VNets in Azure. MACsec and IPsec can be enabled independently.
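As a hedged sketch (the port and link names and the Key Vault secret identifiers are placeholders), MACsec can be configured on an ExpressRoute Direct port link with Azure CLI:

```azurecli
# Enable MACsec on an ExpressRoute Direct port link using keys stored
# in Azure Key Vault
az network express-route port link update --resource-group MyResourceGroup \
  --port-name MyERDirectPort --name link1 \
  --macsec-ckn-secret-identifier https://myvault.vault.azure.net/secrets/CKN \
  --macsec-cak-secret-identifier https://myvault.vault.azure.net/secrets/CAK \
  --macsec-cipher GcmAes256
```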
:::image type="content" source="./media/secure-isolation-fig11.png" alt-text="VPN and ExpressRoute encryption for data in transit" border="false"::: **Figure 11.** VPN and ExpressRoute encryption for data in transit #### Traffic across Microsoft global network backbone
+Azure services such as Storage and SQL Database can be configured for geo-replication to help ensure durability and high availability, especially for disaster recovery scenarios. Azure relies on [paired regions](../best-practices-availability-paired-regions.md) to deliver [geo-redundant storage](../storage/common/storage-redundancy.md) (GRS), and paired regions are also recommended when configuring active [geo-replication](../azure-sql/database/active-geo-replication-overview.md) for Azure SQL Database. Paired regions are located within the same Geo; however, network traffic is not guaranteed to always follow the same path from one Azure region to another. To provide the reliability needed for the Azure cloud, Microsoft has many physical networking paths with automatic routing around failures for optimal reliability.
+Moreover, all Azure traffic traveling within a region or between regions is [encrypted by Microsoft using MACsec](../security/fundamentals/encryption-overview.md#data-link-layer-encryption-in-azure), which relies on AES-128 block cipher for encryption. This traffic stays entirely within the Microsoft [global network backbone](../networking/microsoft-global-network.md) and never enters the public Internet. The backbone is one of the largest in the world with more than 160,000 km of lit fiber optic and undersea cable systems.
> [!IMPORTANT]
-> Customers should review Azure **[best practices](../security/fundamentals/data-encryption-best-practices.md#protect-data-in-transit)** for the protection of data in transit to help ensure that all data in transit is encrypted. For key Azure PaaS storage services (e.g., Azure SQL Database), data encryption in transit is **[enforced by default](../azure-sql/database/security-overview.md#information-protection-and-encryption)**.
+> Customers should review Azure **[best practices](../security/fundamentals/data-encryption-best-practices.md#protect-data-in-transit)** for the protection of data in transit to help ensure that all data in transit is encrypted. For key Azure PaaS storage services (for example, Azure SQL Database, SQL Managed Instance, and Azure Synapse Analytics), data encryption in transit is **[enforced by default](../azure-sql/database/security-overview.md#information-protection-and-encryption)**.
### Third-party network virtual appliances
+Azure provides customers with many features to help them achieve their security and isolation goals, including [Azure Security Center](../security-center/security-center-introduction.md), [Azure Monitor](../azure-monitor/overview.md), [Azure Firewall](../firewall/overview.md), [Azure VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md), [Network Security Groups](../virtual-network/network-security-groups-overview.md), [Azure Application Gateway](../application-gateway/overview.md), [Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md), [Network Watcher](../network-watcher/network-watcher-monitoring-overview.md), [Azure Sentinel](../sentinel/overview.md), and [Azure Policy](../governance/policy/overview.md). In addition to the built-in capabilities that Azure provides, customers can use third-party [network virtual appliances](https://azure.microsoft.com/solutions/network-appliances/) to accommodate their specific network isolation requirements while at the same time leveraging existing in-house skills. Azure supports a wide range of appliances, including offerings from F5, Palo Alto Networks, Cisco, Check Point, Barracuda, Citrix, Fortinet, and many others. Network appliances support network functionality and services in the form of VMs in customer virtual networks and deployments.
+The cumulative effect of network isolation restrictions is that each cloud service acts as though it were on an isolated network where VMs within the cloud service can communicate with one another, identifying one another by their source IP addresses with confidence that no other parties can impersonate their peer VMs. They can also be configured to accept incoming connections from the Internet over specific ports and protocols and to ensure that all network traffic leaving customer Virtual Networks is always encrypted.
> [!TIP]
> Customers should review published Azure networking documentation for guidance on how to use native security features to help protect their data.
> - **[Azure network security white paper](https://azure.microsoft.com/resources/azure-network-security/)**

## Storage isolation
+Microsoft Azure separates customer VM-based computation resources from storage as part of its [fundamental design](../security/fundamentals/isolation-choices.md#storage-isolation). The separation allows computation and storage to scale independently, making it easier to provide multi-tenancy and isolation. Consequently, Azure Storage runs on separate hardware with no network connectivity to Azure Compute except logically.
Each Azure [subscription](/azure/cloud-adoption-framework/decision-guides/subscriptions/) can have one or more storage accounts. Azure storage supports various [authentication options](/rest/api/storageservices/authorize-requests-to-azure-storage), including:
+- **Shared symmetric keys:** Upon storage account creation, Azure generates two 512-bit storage account keys that control access to the storage account. These keys can be rotated and regenerated by customers at any point thereafter without coordination with their applications.
+- **Azure AD-based authentication:** Access to Azure Storage can be controlled by Azure Active Directory (Azure AD), which enforces tenant isolation and implements robust measures to prevent access by unauthorized parties, including Microsoft insiders. More information about Azure AD tenant isolation is available in the [Azure Active Directory Data Security Considerations](https://aka.ms/AADDataWhitePaper) white paper.
+- **Shared access signatures (SAS):** Shared access signatures or "pre-signed URLs" can be created from the shared symmetric keys. These URLs can be significantly limited in scope to reduce the available attack surface, but at the same time allow applications to grant storage access to another user, service, or device.
+- **User delegation SAS:** Delegated authentication is similar to SAS but is [based on Azure AD tokens](/rest/api/storageservices/create-user-delegation-sas) rather than the shared symmetric keys. This approach allows a service that authenticates with Azure AD to create a pre-signed URL with limited scope and grant temporary access to another user, service, or device.
+- **Anonymous public read access:** Customers can allow a small portion of their storage to be publicly accessible without authentication or authorization. This capability can be disabled at the subscription level for customers who desire more stringent control.
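For example, a user delegation SAS scoped to read access on a single container might be generated as follows (the account, container, and expiry values are placeholders):

```azurecli
# Create a user delegation SAS for a container, signed with Azure AD
# credentials rather than the account key
az storage container generate-sas --account-name mystorageacct \
  --name mycontainer --permissions r --expiry 2021-03-31T00:00:00Z \
  --auth-mode login --as-user
```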
Azure Storage provides storage for a wide variety of workloads, including:
- Network file shares in the cloud (File storage)
- Serving web pages on the Internet (static websites)
+While Azure Storage supports a wide range of different externally facing customer storage scenarios, internally, the physical storage for the above services is managed by a common set of APIs. To provide durability and availability, Azure Storage relies on data replication and data partitioning across storage resources that are shared among tenants. To ensure cryptographic certainty of logical data isolation, Azure Storage relies on data encryption at rest using advanced algorithms with multiple ciphers as described in this section.
### Data replication
+Customer data in an Azure Storage account is [always replicated](../storage/common/storage-redundancy.md) to help ensure durability and high availability. Azure Storage copies customer data to protect it from transient hardware failures, network or power outages, and even massive natural disasters. Customers can typically choose to replicate their data within the same data center, across [availability zones within the same region](../availability-zones/az-overview.md), or across geographically separated regions. Specifically, when creating a storage account, customers can select one of the following [redundancy options](../storage/common/storage-redundancy.md#summary-of-redundancy-options):
+- **Locally redundant storage (LRS)** replicates three copies (or the erasure coded equivalent, as described later) of customer data within a single data center. A write request to an LRS storage account returns successfully only after the data is written to all three replicas. Each replica resides in separate fault and upgrade domains within a scale unit (set of storage racks within a data center).
+- **Zone-redundant storage (ZRS)** replicates customer data synchronously across three storage clusters in a single [region](../availability-zones/az-overview.md#regions). Each storage cluster is physically separated from the others and is in its own [Availability Zone](../availability-zones/az-overview.md#availability-zones) (AZ). A write request to a ZRS storage account returns successfully only after the data is written to all replicas across the three clusters.
+- **Geo-redundant storage (GRS)** replicates customer data to a [secondary (paired) region](../best-practices-availability-paired-regions.md) that is hundreds of kilometers away from the primary region. GRS storage accounts are durable even in the case of a complete regional outage or a disaster in which the primary region isn't recoverable. For a storage account with GRS or RA-GRS enabled, all data is first replicated with LRS. An update is first committed to the primary location and replicated using LRS. The update is then replicated asynchronously to the secondary region using GRS. When data is written to the secondary location, it's also replicated within that location using LRS.
+- **Read-access geo-redundant storage (RA-GRS)** is based on GRS. It provides read-only access to the data in the secondary location, in addition to geo-replication across two regions. With RA-GRS, customers can read from the secondary region regardless of whether Microsoft initiates a failover from the primary to secondary region.
+- **Geo-zone-redundant storage (GZRS)** combines the high availability of ZRS with protection from regional outages as provided by GRS. Data in a GZRS storage account is replicated across three AZs in the primary region and also replicated to a secondary geographic region for protection from regional disasters. Each Azure region is paired with another region within the same geography, together making a [regional pair](../best-practices-availability-paired-regions.md).
+- **Read-access geo-zone-redundant storage (RA-GZRS)** is based on GZRS. Customers can optionally enable read access to data in the secondary region with RA-GZRS if their applications need to be able to read data in the event of a disaster in the primary region.
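As a brief sketch (the names and location are placeholders), the redundancy option is selected via the `--sku` parameter when creating a storage account:

```azurecli
# Create a storage account with geo-zone-redundant storage (GZRS);
# other --sku values include Standard_LRS, Standard_ZRS, Standard_GRS,
# and Standard_RAGRS
az storage account create --resource-group MyResourceGroup \
  --name mystorageacct --location eastus --sku Standard_GZRS
```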
### High-level Azure Storage architecture
+Azure Storage production systems consist of storage stamps and the location service (LS), as shown in Figure 12. A storage stamp is a cluster of racks of storage nodes, where each rack is built as a separate fault domain with redundant networking and power. The LS manages all the storage stamps, as well as the account namespace across all stamps. It allocates accounts to storage stamps and manages them across the storage stamps for load balancing and disaster recovery. The LS itself is distributed across two geographic locations for its own disaster recovery ([Calder, et al., 2011](https://sigops.org/s/conferences/sosp/2011/current/2011-Cascais/printable/11-calder.pdf)).
:::image type="content" source="./media/secure-isolation-fig12.png" alt-text="High-level Azure Storage architecture"::: **Figure 12.** High-level Azure Storage architecture (Source: [Calder, et al., 2011](https://sigops.org/s/conferences/sosp/2011/current/2011-Cascais/printable/11-calder.pdf))
There are three layers within a storage stamp: front-end, partition, and stream, which are described in the rest of this section.

#### Front-end layer
-The front-end (FE) layer consists of a set of stateless servers that take the incoming requests, authenticate and authorize the requests, and then route them to a partition server in the Partition Layer. The FE layer knows what partition server to forward each request to, since each front-end server caches a Partition Map. The Partition Map keeps track of the partitions for the service being accessed and what partition server is controlling (serving) access to each partition in the system. The FE servers also stream large objects directly from the stream layer.
+The front-end (FE) layer consists of a set of stateless servers that take the incoming requests, authenticate and authorize the requests, and then route them to a partition server in the Partition Layer. The FE layer knows what partition server to forward each request to, since each front-end server caches a partition map. The partition map keeps track of the partitions for the service being accessed and what partition server is controlling (serving) access to each partition in the system. The FE servers also stream large objects directly from the stream layer.
+Transferring large volumes of data across the Internet is inherently unreliable. Using the Azure block blob service, users can upload and store large files efficiently by breaking up large files into smaller blocks of data. In this manner, block blobs allow partitioning of data into individual blocks for reliability of large uploads, as shown in Figure 13. Each block can be up to 100 MB in size with up to 50,000 blocks in the block blob. If a block fails to transmit correctly, only that particular block needs to be resent versus having to resend the entire file. In addition, with a block blob, multiple blocks can be sent in parallel to decrease upload time.
:::image type="content" source="./media/secure-isolation-fig13.png" alt-text="Block blob partitioning of data into individual blocks"::: **Figure 13.** Block blob partitioning of data into individual blocks
+Customers can upload blocks in any order and determine their sequence in the final block list commitment step. Customers can also upload a new block to replace an existing uncommitted block of the same block ID.
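As an illustration (the account, container, and file names are placeholders), the Azure CLI stages and commits blocks automatically when uploading a block blob, with parallelism controlled by `--max-connections`:

```azurecli
# Upload a large file as a block blob; the CLI splits it into blocks
# and uploads them in parallel before committing the block list
az storage blob upload --account-name mystorageacct --container-name mycontainer \
  --name large-file.bin --file ./large-file.bin --type block \
  --max-connections 8 --auth-mode login
```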
#### Partition layer
+The partition layer is responsible for a) managing higher-level data abstractions (Blob, Table, Queue), b) providing a scalable object namespace, c) providing transaction ordering and strong consistency for objects, d) storing object data on top of the stream layer, and e) caching object data to reduce disk I/O. This layer also provides asynchronous geo-replication of data and is focused on replicating data across stamps. Inter-stamp replication is done in the background to keep a copy of the data in two locations for disaster recovery purposes.
-Once a blob is ingested by the FE layer, the partition layer is responsible for tracking and storing where data is placed in the stream layer. Each storage tenant can have approximately 200–300 individual partition layer nodes and each node is responsible for tracking and serving a partition of the data stored in that Storage tenant. The High Throughput Block Blob (HTBB) feature enables data to be sharded within a single blob, which allows the workload for large blobs to be shared across multiple partition layer servers (Figure 14). Distributing the load among multiple partition layer servers greatly improves availability, throughput, and durability.
+Once a blob is ingested by the FE layer, the partition layer is responsible for tracking and storing where data is placed in the stream layer. Each storage tenant can have approximately 200–300 individual partition layer nodes and each node is responsible for tracking and serving a partition of the data stored in that Storage tenant. The high throughput block blob (HTBB) feature enables data to be sharded within a single blob, which allows the workload for large blobs to be shared across multiple partition layer servers (Figure 14). Distributing the load among multiple partition layer servers greatly improves availability, throughput, and durability.
-**Figure 14.** High Throughput Block Blobs spread traffic and data across multiple partition servers and streams
+**Figure 14.** High throughput block blobs spread traffic and data across multiple partition servers and streams
#### Stream layer
+The stream layer stores the bits on disk and is responsible for distributing and replicating the data across many servers to keep data durable within a storage stamp. It acts as a distributed file system layer within a stamp. It handles files, called streams, which are ordered lists of data blocks called extents that are analogous to extents on physical hard drives. Large blob objects can be stored in multiple extents, potentially on multiple physical extent nodes (ENs). The data is stored in the stream layer, but it is accessible from the partition layer. Partition servers and stream servers are colocated on each storage node in a stamp.
+The stream layer provides synchronous replication (intra-stamp) across different nodes in different fault domains to keep data durable within the stamp. It is responsible for creating the three local replicated copies of each extent. The stream layer manager makes sure that all three copies are distributed across different physical racks and nodes on different fault and upgrade domains so that copies are resilient to individual disk/node/rack failures and planned downtime due to upgrades.
+**Erasure Coding** – Azure Storage uses a technique called [Erasure Coding](https://www.microsoft.com/research/wp-content/uploads/2016/02/LRC12-cheng20webpage.pdf), which allows for the reconstruction of data even if some of the data is missing due to disk failure. This approach is similar to the concept of RAID striping for individual disks where data is spread across multiple disks so that if a disk is lost, the missing data can be reconstructed using the parity bits from the data on the other disks. Erasure Coding splits an extent into equal data and parity fragments that are stored on separate ENs, as shown in Figure 15.
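To make the storage efficiency concrete, consider an illustrative parameter choice (not necessarily the exact scheme Azure uses): with a maximum-distance-separable code such as Reed-Solomon, splitting an extent into k = 12 data fragments plus m = 4 parity fragments allows the extent to be reconstructed from any 12 of the 16 fragments, tolerating the loss of any 4, at a raw-capacity overhead of 16/12 ≈ 1.33x, compared to 3x for keeping three full replicas.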
:::image type="content" source="./media/secure-isolation-fig15.png" alt-text="Erasure Coding further shards extent data across EN servers to protect against failure"::: **Figure 15.** Erasure Coding further shards extent data across EN servers to protect against failure
+All data blocks stored in stream extent nodes have a 64-bit cyclic redundancy check (CRC) and a header protected by a hash signature to provide extent node (EN) data integrity. The CRC and signature are checked before every disk write, disk read, and network receive. In addition, scrubber processes read all data at regular intervals verifying the CRC and looking for &#8220;bit rot&#8221;. If a bad extent is found a new copy of that extent is created to replace the bad extent.
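The verify-on-every-I/O pattern can be sketched as follows. The 32-bit CRC from Python's `zlib` stands in for the stream layer's internal 64-bit CRC and signed header, and the `store` dictionary stands in for blocks on disk; both are assumptions made purely for illustration.

```python
import zlib

def write_block(store: dict, key: str, data: bytes) -> None:
    store[key] = (data, zlib.crc32(data))  # persist each block with its checksum

def read_block(store: dict, key: str) -> bytes:
    data, crc = store[key]
    if zlib.crc32(data) != crc:  # checked before the data is ever returned
        raise IOError(f"bit rot detected in block {key!r}")
    return data

def scrub(store: dict) -> list:
    """Background scrubber: periodically re-verify every block at rest."""
    return [k for k, (data, crc) in store.items() if zlib.crc32(data) != crc]

store = {}
write_block(store, "extent-0/block-0", b"payload")
assert read_block(store, "extent-0/block-0") == b"payload"
assert scrub(store) == []  # no corrupted blocks found
```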
Customer data in Azure Storage relies on data encryption at rest to provide cryptographic certainty for logical data isolation. Customers can choose between platform-managed encryption keys or customer-managed encryption keys. The handling of data encryption and decryption is transparent to customers, as discussed in the next section.
### Data encryption at rest
Azure provides extensive options for [data encryption at rest](../security/fundamentals/encryption-atrest.md) to help customers safeguard their data and meet their compliance needs using both Microsoft-managed and customer-managed encryption keys. For more information, see [data encryption models](../security/fundamentals/encryption-models.md). This process relies on multiple encryption keys, as well as services such as Azure Key Vault and Azure Active Directory to ensure secure key access and centralized key management.
> [!NOTE]
> Customers who require additional security and isolation assurances for their most sensitive customer data stored in Azure services can encrypt it using their own encryption keys they control in Azure Key Vault.
-In general, controlling key access and ensuring efficient bulk encryption and decryption of data is accomplished via the following types of encryption keys (as shown in Figure 16), although additional encryption keys can be used as described in *[Storage Service Encryption (SSE)](#storage-service-encryption-sse)* section.
+In general, controlling key access and ensuring efficient bulk encryption and decryption of data is accomplished via the following types of encryption keys (as shown in Figure 16), although additional encryption keys can be used as described in *[Storage service encryption](#storage-service-encryption)* section.
-- **Data Encryption Key (DEK)** is a symmetric AES-256 key that is utilized for bulk encryption and decryption of a partition or a block of data. The cryptographic modules are FIPS 140-2 validated as part of the [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). Access to DEKs is needed by the resource provider or application instance that is responsible for encrypting and decrypting a specific block of data. A single resource may have many partitions and many DEKs. When a DEK is replaced with a new key, only the data in its associated block must be re-encrypted with the new key. DEK is encrypted by the Key Encryption Key (KEK) and is never stored unencrypted.
-- **Key Encryption Key (KEK)** is an asymmetric RSA key that is optionally provided by the customer. This key is utilized to encrypt the Data Encryption Key (DEK) using Azure Key Vault and exists only in Azure Key Vault. As mentioned previously in *[Data encryption key management](#data-encryption-key-management)* section, Azure Key Vault relies on FIPS 140-2 validated Hardware Security Modules (HSMs) for key storage and management (certificate [#2643](https://csrc.nist.gov/Projects/cryptographic-module-validation-program/Certificate/2643)). These keys are not exportable and there can be no clear version of the KEK outside the HSMs – the binding is enforced by the underlying HSM. KEK is never exposed directly to the resource provider or other services. Access to KEK is controlled by permissions in Azure Key Vault and access to Azure Key Vault must be authenticated through Azure Active Directory. These permissions can be revoked to block access to this key and, by extension, the data that is encrypted using this key as the root of the key chain.
+- **Data Encryption Key (DEK)** is a symmetric AES-256 key that is utilized for bulk encryption and decryption of a partition or a block of data. The cryptographic modules are FIPS 140-2 validated as part of the [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). Access to DEKs is needed by the resource provider or application instance that is responsible for encrypting and decrypting a specific block of data. A single resource may have many partitions and many DEKs. When a DEK is replaced with a new key, only the data in its associated block must be re-encrypted with the new key. DEK is encrypted by the Key Encryption Key (KEK) and is never stored unencrypted.
+- **Key Encryption Key (KEK)** is an asymmetric RSA key that is optionally provided by the customer. This key is utilized to encrypt the Data Encryption Key (DEK) using Azure Key Vault and exists only in Azure Key Vault. As mentioned previously in *[Data encryption key management](#data-encryption-key-management)* section, Azure Key Vault can use FIPS 140-2 validated hardware security modules (HSMs) to safeguard encryption keys. These keys are not exportable and there can be no clear-text version of the KEK outside the HSMs – the binding is enforced by the underlying HSM. KEK is never exposed directly to the resource provider or other services. Access to KEK is controlled by permissions in Azure Key Vault and access to Azure Key Vault must be authenticated through Azure Active Directory. These permissions can be revoked to block access to this key and, by extension, the data that is encrypted using this key as the root of the key chain.
:::image type="content" source="./media/secure-isolation-fig16.png" alt-text="Data Encryption Keys are encrypted using customer’s key stored in Azure Key Vault"::: **Figure 16.** Data Encryption Keys are encrypted using customer’s key stored in Azure Key Vault
-Therefore, key hierarchy involves both DEKs and KEKs. DEKs are encrypted with KEKs and stored separately for efficient access by resource providers in bulk encryption and decryption operations. However, only an entity with access to the KEKs can decrypt the DEKs. The entity that has access to the KEK may be different than the entity that requires the DEK. Since the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which DEKs can be deleted via deletion of the KEK.
+Therefore, key hierarchy involves both DEK and KEK. DEK is encrypted with KEK and stored separately for efficient access by resource providers in bulk encryption and decryption operations. However, only an entity with access to the KEK can decrypt the DEK. The entity that has access to the KEK may be different than the entity that requires the DEK. Since the KEK is required to decrypt the DEK, the KEK is effectively a single point by which DEK can be deleted via deletion of the KEK.
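As a minimal sketch of this envelope pattern, the snippet below generates a local RSA key pair to stand in for a KEK held in Azure Key Vault and wraps a freshly generated DEK with it. It uses the open-source `cryptography` package rather than any Azure SDK, so the key handling is purely illustrative.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kek = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # stand-in for a Key Vault KEK
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

# Bulk-encrypt a block of data with a fresh symmetric DEK.
dek = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(dek).encrypt(nonce, b"customer data", None)

# Wrap the DEK with the KEK; only the wrapped form is ever stored.
wrapped_dek = kek.public_key().encrypt(dek, oaep)

# Reading the data requires unwrapping the DEK first, so revoking or deleting
# the KEK makes the wrapped DEK, and hence the data, unrecoverable.
plaintext = AESGCM(kek.decrypt(wrapped_dek, oaep)).decrypt(nonce, ciphertext, None)
assert plaintext == b"customer data"
```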
-Detailed information about various encryption models, as well as specifics on key management for a wide range of Azure platform services is available in online documentation. Moreover, some Azure services provide additional [encryption models](../security/fundamentals/encryption-overview.md#azure-encryption-models), including client-side encryption, to further encrypt their data using more granular controls. The rest of this section covers encryption implementation for key Azure storage scenarios such as Storage Service Encryption and Azure Disk Encryption for IaaS Virtual Machines, including server-side encryption for managed disks.
+Detailed information about various [data encryption models](../security/fundamentals/encryption-models.md) and specifics on key management for a wide range of Azure platform services is available in online documentation. Moreover, some Azure services provide additional [encryption models](../security/fundamentals/encryption-overview.md#azure-encryption-models), including client-side encryption, to further encrypt their data using more granular controls. The rest of this section covers encryption implementation for key Azure storage scenarios such as Storage service encryption and Azure Disk encryption for IaaS Virtual Machines, including server-side encryption for managed disks.
> [!TIP]
> Customers should review published Azure data encryption documentation for guidance on how to protect their data:
> - **[Data encryption models](../security/fundamentals/encryption-models.md)**
> - **[Data encryption best practices](../security/fundamentals/data-encryption-best-practices.md)**
-#### Storage Service Encryption (SSE)
-Azure [Storage Service Encryption for data at rest](../storage/common/storage-service-encryption.md) ensures that data is automatically encrypted before persisting it to Azure Storage and decrypted before retrieval. All data written to Azure Storage is encrypted through FIPS 140-2 validated 256-bit AES encryption, and the handling of encryption, decryption, and key management in Storage Service Encryption (SSE) is transparent to customers. By default, Microsoft controls the encryption keys and is responsible for key rotation, usage, and access. Keys are stored securely and protected inside a Microsoft key store. This option provides the most convenience for customers given that all Azure Storage services are supported.
+#### Storage service encryption
+Azure [Storage service encryption](../storage/common/storage-service-encryption.md) for data at rest ensures that data is automatically encrypted before persisting it to Azure Storage and decrypted before retrieval. All data written to Azure Storage is encrypted through FIPS 140-2 validated 256-bit AES encryption, and the handling of encryption, decryption, and key management in Storage service encryption is transparent to customers. By default, Microsoft controls the encryption keys and is responsible for key rotation, usage, and access. Keys are stored securely and protected inside a Microsoft key store. This option provides the most convenience for customers given that all Azure Storage services are supported.
However, customers can also choose to manage encryption with their own keys by specifying:
-- [Customer-managed key](../storage/common/customer-managed-keys-overview.md) for managing Azure Storage encryption whereby the key is stored in Azure Key Vault. This option provides a lot of flexibility for customers to create, rotate, disable, and revoke access controls. Customers must use Azure Key Vault to store customer-managed keys.
-- [Customer-provided key](../storage/blobs/encryption-customer-provided-keys.md) for encrypting and decrypting Blob storage only whereby the key can be stored in Azure Key Vault or in another key store on customer premises to meet regulatory compliance requirements. Customer-provided keys enable customers to pass an encryption key to Storage service using Blob APIs as part of read or write operations.
+- [Customer-managed key](../storage/common/customer-managed-keys-overview.md) for managing Azure Storage encryption whereby the key is stored in Azure Key Vault. This option provides a lot of flexibility for customers to create, rotate, disable, and revoke access to customer-managed keys. Customers must use Azure Key Vault to store customer-managed keys. Both key vaults and managed HSMs are supported, as described previously in *[Azure Key Vault](#azure-key-vault)* section.
+- [Customer-provided key](../storage/blobs/encryption-customer-provided-keys.md) for encrypting and decrypting Blob storage only whereby the key can be stored in Azure Key Vault or in another key store on customer premises to meet regulatory compliance requirements. Customer-provided keys enable customers to pass an encryption key to Storage service using Blob APIs as part of read or write operations.
> [!NOTE]
> Customers can configure customer-managed keys (CMK) with Azure Key Vault using the **[Azure portal](../storage/common/customer-managed-keys-configure-key-vault.md)**, **[PowerShell](../storage/common/customer-managed-keys-configure-key-vault.md)**, or **[Azure CLI](../storage/common/customer-managed-keys-configure-key-vault.md)** command-line tool. Customers can **[use .NET to specify a customer-provided key](../storage/blobs/storage-blob-customer-provided-key.md)** on a request to Blob storage.
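For comparison, a customer-provided key can also be passed from Python with the azure-storage-blob (v12) SDK, as sketched below; the connection string, container, and blob names are placeholders.

```python
import base64, hashlib, os
from azure.storage.blob import BlobClient, CustomerProvidedEncryptionKey

key = os.urandom(32)  # AES-256 key held by the customer, never persisted by the service
cpk = CustomerProvidedEncryptionKey(
    key_value=base64.b64encode(key).decode(),
    key_hash=base64.b64encode(hashlib.sha256(key).digest()).decode(),
)

blob = BlobClient.from_connection_string("<connection-string>", "mycontainer", "example.bin")
blob.upload_blob(b"sensitive payload", overwrite=True, cpk=cpk)  # encrypted with the CPK
data = blob.download_blob(cpk=cpk).readall()  # the same key must accompany reads
```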
-SSE is enabled by default for all new and existing storage accounts and it [cannot be disabled](../storage/common/storage-service-encryption.md#about-azure-storage-encryption). As shown in Figure 17, the encryption process leverages the following keys to help ensure cryptographic certainty of data isolation at rest:
+Storage service encryption is enabled by default for all new and existing storage accounts and it [cannot be disabled](../storage/common/storage-service-encryption.md#about-azure-storage-encryption). As shown in Figure 17, the encryption process leverages the following keys to help ensure cryptographic certainty of data isolation at rest:
-- *Data Encryption Key (DEK)* is a symmetric AES-256 key that is utilized for bulk encryption and it is unique per storage account in Azure Storage. It is generated by the Azure Storage service as part of the storage account creation. This key is encrypted by the Key Encryption Key (KEK) and is never stored unencrypted.
-- *Key Encryption Key (KEK)* is an asymmetric RSA-2048 key that is utilized to encrypt the Data Encryption Key (DEK) using Azure Key Vault and exists only in Azure Key Vault. It is never exposed directly to the Azure Storage service or other services. Customers must use Azure Key Vault to store their customer-managed keys for Storage Service Encryption.
-- *Stamp Key (SK)* is a symmetric AES-256 key that provides a third layer of encryption key security and is unique to each Azure Storage stamp, i.e., cluster of storage hardware. This key is used to perform a final wrap of the DEK and results in the following key chain hierarchy: SK(KEK(DEK)).
+- *Data Encryption Key (DEK)* is a symmetric AES-256 key that is used for bulk encryption and it is unique per storage account in Azure Storage. It is generated by the Azure Storage service as part of the storage account creation. This key is encrypted by the Key Encryption Key (KEK) and is never stored unencrypted.
+- *Key Encryption Key (KEK)* is an asymmetric RSA-2048 key that is used to encrypt the Data Encryption Key (DEK) using Azure Key Vault and exists only in Azure Key Vault. It is never exposed directly to the Azure Storage service or other services. Customers must use Azure Key Vault to store their customer-managed keys for Storage service encryption.
+- *Stamp Key (SK)* is a symmetric AES-256 key that provides a third layer of encryption key security and is unique to each Azure Storage stamp, i.e., cluster of storage hardware. This key is used to perform a final wrap of the DEK that results in the following key chain hierarchy: SK(KEK(DEK)).
-These three keys are combined to protect any data that is written to Azure Storage and provide cryptographic certainty for logical data isolation in Azure Storage. As mentioned previously, Azure SSE is enabled by default and it cannot be disabled.
+These three keys are combined to protect any data that is written to Azure Storage and provide cryptographic certainty for logical data isolation in Azure Storage. As mentioned previously, Azure Storage service encryption is enabled by default and it cannot be disabled.
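The resulting chain can be sketched by extending the envelope example above with a final stamp-key wrap. The AES key wrap used for the SK layer is an assumption for illustration only; the actual stamp-key operation is internal to Azure Storage.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

dek = AESGCM.generate_key(bit_length=256)                             # per storage account
kek = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # stand-in for the Key Vault KEK
sk = os.urandom(32)                                                   # per storage stamp

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
kek_dek = kek.public_key().encrypt(dek, oaep)  # KEK(DEK)
sk_kek_dek = aes_key_wrap(sk, kek_dek)         # SK(KEK(DEK)), the final wrap

# Unwinding the chain layer by layer recovers the DEK for bulk decryption.
assert kek.decrypt(aes_key_unwrap(sk, sk_kek_dek), oaep) == dek
```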
-**Figure 17.** Encryption flow for Storage Service Encryption
+**Figure 17.** Encryption flow for Storage service encryption
Storage accounts are encrypted regardless of their performance tier (standard or premium) or deployment model (Azure Resource Manager or classic). All Azure Storage [redundancy options](../storage/common/storage-redundancy.md) support encryption and all copies of a storage account are encrypted. All Azure Storage resources are encrypted, including blobs, disks, files, queues, and tables. All object metadata is also encrypted.
-Because data encryption is performed by the Storage service, server-side encryption with CMK enables customers to use any operating system types and images for their VMs. For Windows and Linux customer IaaS VMs, Azure also provides Azure Disk Encryption (ADE) that enables customers to encrypt managed disks with CMK within the Guest VM, as described in the next section. Combining SSE and ADE effectively enables double encryption of data at rest.
+Because data encryption is performed by the Storage service, server-side encryption with CMK enables customers to use any operating system types and images for their VMs. For Windows and Linux customer IaaS VMs, Azure also provides Azure Disk encryption that enables customers to encrypt managed disks with CMK within the Guest VM, as described in the next section. Combining Azure Storage service encryption and Disk encryption effectively enables double encryption of data at rest.
-#### Azure Disk Encryption (ADE)
-Azure Storage Service Encryption encrypts the page blobs that store Azure Virtual Machine disks. Additionally, [Azure Disk Encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) (ADE) may optionally be used to encrypt Azure [Windows](../virtual-machines/windows/disk-encryption-overview.md) and [Linux](../virtual-machines/linux/disk-encryption-overview.md) IaaS Virtual Machine disks to increase storage isolation and assure cryptographic certainty of customer data stored in Azure. This encryption includes [managed disks](../virtual-machines/managed-disks-overview.md), as described later in this section. ADE leverages the industry standard [BitLocker](/windows/security/information-protection/bitlocker/bitlocker-overview) feature of Windows and the [DM-Crypt](https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt) feature of Linux to provide OS-based volume encryption that is integrated with Azure Key Vault.
+#### Azure Disk encryption
+Azure Storage service encryption encrypts the page blobs that store Azure Virtual Machine disks. Additionally, [Azure Disk encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) may optionally be used to encrypt Azure [Windows](../virtual-machines/windows/disk-encryption-overview.md) and [Linux](../virtual-machines/linux/disk-encryption-overview.md) IaaS Virtual Machine disks to increase storage isolation and assure cryptographic certainty of customer data stored in Azure. This encryption includes [managed disks](../virtual-machines/managed-disks-overview.md), as described later in this section. Azure disk encryption uses the industry standard [BitLocker](/windows/security/information-protection/bitlocker/bitlocker-overview) feature of Windows and the [DM-Crypt](https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt) feature of Linux to provide OS-based volume encryption that is integrated with Azure Key Vault.
Drive encryption through BitLocker and DM-Crypt is a data protection feature that integrates with the operating system and addresses the threats of data theft or exposure from lost, stolen, or inappropriately decommissioned computers. BitLocker and DM-Crypt provide the most protection when used with a Trusted Platform Module (TPM) version 1.2 or higher. The TPM is a microcontroller designed to secure hardware through integrated cryptographic keys – it is commonly pre-installed on newer computers. BitLocker and DM-Crypt can use this technology to protect the keys used to encrypt disk volumes and provide integrity to the computer boot process.
-For managed disks, ADE allows customers to encrypt the OS and Data disks used by an IaaS Virtual Machine; however, Data cannot be encrypted without first encrypting the OS volume. The solution relies on Azure Key Vault to help customers control and manage the disk encryption keys. Customers can supply their own encryption keys, which are safeguarded in Azure Key Vault to support Bring Your Own Key (BYOK) scenarios, as described previously in *[Data encryption key management](#data-encryption-key-management)* section.
+For managed disks, Azure Disk encryption allows customers to encrypt the OS and Data disks used by an IaaS virtual machine; however, Data cannot be encrypted without first encrypting the OS volume. The solution relies on Azure Key Vault to help customers control and manage the disk encryption keys in key vaults. Customers can supply their own encryption keys, which are safeguarded in Azure Key Vault to support *bring your own key* (BYOK) scenarios, as described previously in *[Data encryption key management](#data-encryption-key-management)* section.
-Currently, it is not possible to use on-premises key management service or standalone Hardware Security Modules (including Azure Dedicated HSM) to safeguard the encryption keys. Only Azure Key Vault service can be used to safeguard the customer-managed encryption keys for ADE.
+Azure Disk encryption is not supported by Managed HSM or an on-premises key management service. Only key vaults managed by the Azure Key Vault service can be used to safeguard customer-managed encryption keys for Azure Disk encryption.
> [!NOTE]
-> Detailed instructions are available for creating and configuring a key vault for Azure Disk Encryption with both **[Windows](../virtual-machines/windows/disk-encryption-key-vault.md)** and **[Linux](../virtual-machines/linux/disk-encryption-key-vault.md)** VMs.
+> Detailed instructions are available for creating and configuring a key vault for Azure Disk encryption with both **[Windows](../virtual-machines/windows/disk-encryption-key-vault.md)** and **[Linux](../virtual-machines/linux/disk-encryption-key-vault.md)** VMs.
-ADE relies on two encryption keys for implementation, as described previously:
+Azure Disk encryption relies on two encryption keys for implementation, as described previously:
- *Data Encryption Key (DEK)* is a symmetric AES-256 key used to encrypt OS and Data volumes through BitLocker or DM-Crypt. DEK itself is encrypted and stored in an internal location close to the data.
- *Key Encryption Key (KEK)* is an asymmetric RSA-2048 key used to encrypt the Data Encryption Keys. KEK is kept in Azure Key Vault under customer control including granting access permissions through Azure Active Directory.
-The DEK, encrypted with the KEK, is stored separately and only an entity with access to the KEK can decrypt the DEK. Access to the KEK is guarded by Azure Key Vault where customers can choose to store their keys in [FIPS 140-2 validated Hardware Security Modules](../key-vault/keys/hsm-protected-keys-byok.md).
+The DEK, encrypted with the KEK, is stored separately and only an entity with access to the KEK can decrypt the DEK. Access to the KEK is guarded by Azure Key Vault where customers can choose to store their keys in [FIPS 140-2 validated hardware security modules](../key-vault/keys/hsm-protected-keys-byok.md).
-For [Windows VMs](../virtual-machines/windows/disk-encryption-faq.md), ADE selects the encryption method in BitLocker based on the version of Windows, e.g., XTS-AES 256 bit for Windows Server 2012 or greater. These crypto modules are FIPS 140-2 validated as part of the Microsoft [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). For [Linux VMs](../virtual-machines/linux/disk-encryption-faq.md), ADE uses the decrypt default of aes-xts-plain64 with a 256-bit volume master key that is FIPS 140-2 validated as part of DM-Crypt validation obtained by suppliers of Linux IaaS VM images in Microsoft Azure Marketplace.
+For [Windows VMs](../virtual-machines/windows/disk-encryption-faq.md), Azure Disk encryption selects the encryption method in BitLocker based on the version of Windows, e.g., XTS-AES 256 bit for Windows Server 2012 or greater. These crypto modules are FIPS 140-2 validated as part of the Microsoft [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). For [Linux VMs](../virtual-machines/linux/disk-encryption-faq.md), Azure Disk encryption uses the decrypt default of aes-xts-plain64 with a 256-bit volume master key that is FIPS 140-2 validated as part of DM-Crypt validation obtained by suppliers of Linux IaaS VM images in Microsoft Azure Marketplace.
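The XTS mode mentioned above can be exercised directly with the `cryptography` package, as in the short sketch below. AES-256 in XTS mode takes a double-length (64-byte) key, and the 16-byte tweak is derived here from the sector number, which is why identical plaintext encrypts differently in different sectors; the key and sector layout are invented for the example.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)    # AES-256-XTS uses a double-length key
sector = b"\x00" * 512  # one zeroed 512-byte disk sector

def encrypt_sector(sector_number: int, data: bytes) -> bytes:
    tweak = sector_number.to_bytes(16, "little")  # per-sector tweak
    encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return encryptor.update(data) + encryptor.finalize()

# The same plaintext yields different ciphertext in different sectors.
assert encrypt_sector(0, sector) != encrypt_sector(1, sector)
```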
##### *Server-side encryption for managed disks*
-Azure-managed disks are block-level storage volumes that are managed by Azure and used with Azure [Windows](../virtual-machines/managed-disks-overview.md) and [Linux](../virtual-machines/managed-disks-overview.md) Virtual Machines. They simplify disk management for Azure IaaS VMs by handling storage account management transparently for customers. Azure-managed disks automatically encrypt customer data by default using [256-bit AES encryption](../virtual-machines/disk-encryption.md) that is FIPS 140-2 validated. For encryption key management, customers have the following choices:
+[Azure managed disks](../virtual-machines/managed-disks-overview.md) are block-level storage volumes that are managed by Azure and used with Azure Windows and Linux virtual machines. They simplify disk management for Azure IaaS VMs by handling storage account management transparently for customers. Azure managed disks automatically encrypt customer data by default using [256-bit AES encryption](../virtual-machines/disk-encryption.md) that is FIPS 140-2 validated. For encryption key management, customers have the following choices:
-- [Platform-managed keys](../virtual-machines/disk-encryption.md#platform-managed-keys) is the default choice that provides transparent data encryption at rest for managed disks whereby keys are managed by Microsoft.
-- [Customer-managed keys](../virtual-machines/disk-encryption.md#customer-managed-keys) enables customers to have control over their own keys that can be imported into Azure Key Vault or generated inside Azure Key Vault. This approach relies on two sets of keys as described previously: DEK and KEK. DEK encrypts the data using an AES-256 based encryption and is in turn encrypted by an RSA-2048 KEK that is stored in Azure Key Vault.
+- [Platform-managed keys](../virtual-machines/disk-encryption.md#platform-managed-keys) is the default choice that provides transparent data encryption at rest for managed disks whereby keys are managed by Microsoft.
+- [Customer-managed keys](../virtual-machines/disk-encryption.md#customer-managed-keys) enables customers to have control over their own keys that can be imported into Azure Key Vault or generated inside Azure Key Vault. This approach relies on two sets of keys as described previously: DEK and KEK. DEK encrypts the data using an AES-256 based encryption and is in turn encrypted by an RSA-2048 KEK that is stored in Azure Key Vault. Only key vaults can be used to safeguard customer-managed keys; managed HSMs do not support Azure Disk encryption.
-Customer-managed keys (CMK) enable customers to have full control over their data and encryption keys. Customers can grant access to managed disks in their Azure Key Vault so that their keys can be used for encrypting and decrypting the DEK. Customers can also disable their keys or revoke access to managed disks at any time. Finally, customers have full audit control over key usage with Azure Key Vault monitoring to ensure that only managed disks or other authorized resources are accessing their encryption keys.
+Customer-managed keys (CMK) enable customers to have [full control](../virtual-machines/disk-encryption.md#full-control-of-your-keys) over their encryption keys. Customers can grant access to managed disks in their Azure Key Vault so that their keys can be used for encrypting and decrypting the DEK. Customers can also disable their keys or revoke access to managed disks at any time. Finally, customers have full audit control over key usage with Azure Key Vault monitoring to ensure that only managed disks or other authorized resources are accessing their encryption keys.
Customers are [always in control of their customer data](https://www.microsoft.com/trust-center/privacy/data-management) in Azure. They can access, extract, and delete their customer data stored in Azure at will. When a customer terminates their Azure subscription, Microsoft takes the necessary steps to ensure that the customer continues to own their customer data. A common customer concern upon data deletion or subscription termination is whether another customer or Azure administrator can access their deleted data. The following sections explain how data deletion, retention, and destruction work in Azure.
### Data deletion
Storage is allocated sparsely, which means that when a virtual disk is created, disk space is not allocated for its entire capacity. Instead, a table is created that [maps addresses on the virtual disk to areas on the physical disk](/archive/blogs/walterm/microsoft-azure-data-security-data-cleansing-and-leakage) and that table is initially empty. The first time a customer writes data on the virtual disk, space on the physical disk is allocated and a pointer to it is placed in the table.
When the customer deletes a blob or table entity, it will immediately get deleted from the index used to locate and access the data on the primary location, and then the deletion is done asynchronously at the geo-replicated copy of the data (for customers who provisioned [geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage)). At the primary location, the customer can immediately try to access the blob or entity, and they won’t find it in their index, since Azure provides strong consistency for the delete. So, the customer can verify directly that the data has been deleted.
In Azure Storage, all disk writes are sequential. This approach minimizes the number of disk “seeks” but requires updating the pointers to objects every time they are written (new versions of pointers are also written sequentially). A side effect of this design is that it is not possible to ensure that a secret on disk is gone by overwriting it with other data. The original data will remain on the disk and the new value will be written sequentially. Pointers will be updated such that there is no way to find the deleted value anymore. Once the disk is full, however, the system has to write new logs onto disk space that has been freed up by the deletion of old data. Instead of allocating log files directly from disk sectors, log files are created in a file system running NTFS. A background thread running on Azure Storage nodes frees up space by going through the oldest log file, copying blocks that are still referenced from that oldest log file to the current log file (and updating all pointers as it goes). It then deletes the oldest log file. Consequently, there are two categories of free disk space on the disk: (1) space that NTFS knows is free, where it allocates new log files from this pool; and (2) space within those log files that Azure Storage knows is free since there are no current pointers to it.
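A toy model of this compaction loop is sketched below: writes only ever append, deletion just drops the pointer, and the dead bytes disappear only when the oldest log file is compacted away. The `LogStore` class and its methods are invented purely to illustrate the mechanism.

```python
class LogStore:
    """Toy log-structured store: sequential appends plus pointer updates."""
    def __init__(self):
        self.oldest, self.current = [], []  # two log files on disk
        self.index = {}                     # object key -> (log file, offset)

    def write(self, key, block):
        self.current.append(block)          # every write is a sequential append
        self.index[key] = (self.current, len(self.current) - 1)

    def delete(self, key):
        del self.index[key]                 # bytes remain on disk, now unreachable

    def compact(self):
        """Copy live blocks out of the oldest log file, then delete that file."""
        for key, (log, off) in list(self.index.items()):
            if log is self.oldest:
                self.write(key, log[off])   # re-append and update the pointer
        self.oldest.clear()                 # dead bytes are gone only at this point

store = LogStore()
store.write("blob-1", b"secret")
store.delete("blob-1")                           # pointer gone; bytes still on disk
store.oldest, store.current = store.current, []  # the log file ages out
store.compact()                                  # dead bytes physically discarded
assert store.oldest == []
```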
The sectors on the physical disk associated with the deleted data become immediately available for reuse and are overwritten when the corresponding storage block is reused for storing other data. The time to overwrite varies depending on disk utilization and activity. This process is consistent with the operation of a log-structured file system where all writes are written sequentially to disk. This process is not deterministic and there is no guarantee when particular data will be gone from physical storage. **However, when exactly the deleted data gets overwritten, or when the corresponding physical storage is allocated to another customer, is irrelevant to the key isolation assurance that no data can be recovered after deletion:**
- A customer cannot read deleted data of another customer.
- If anyone tries to read a region on a virtual disk that they have not yet written to, physical space will not have been allocated for that region and therefore only zeroes would be returned.
Customers are not provided with direct access to the underlying physical storage. Since customer software only addresses virtual disks, there is no way to express a request to read from or write to a physical address that is allocated to a different customer or a physical address that is free. For more information, see the blog post on [data cleansing and leakage](/archive/blogs/walterm/microsoft-azure-data-security-data-cleansing-and-leakage).
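The zero-fill behavior described above can be modeled in a few lines: the mapping table starts empty, physical space is allocated only on first write, and a read of any never-written region returns only zeroes. The `VirtualDisk` class and 4-KiB block size are assumptions made for illustration.

```python
BLOCK = 4096  # illustrative block size

class VirtualDisk:
    def __init__(self):
        self.table = {}  # virtual block number -> allocated physical block

    def write(self, block_no: int, data: bytes) -> None:
        self.table[block_no] = data.ljust(BLOCK, b"\0")  # allocate on first write

    def read(self, block_no: int) -> bytes:
        return self.table.get(block_no, b"\0" * BLOCK)   # unallocated reads are zeroes

disk = VirtualDisk()
disk.write(7, b"tenant data")
assert disk.read(7).startswith(b"tenant data")
assert disk.read(8) == b"\0" * BLOCK  # never-written region: nothing but zeroes
```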
Conceptually, this rationale applies regardless of the software that keeps track of reads and writes. In the case of [Azure SQL Database](../security/fundamentals/isolation-choices.md#sql-database-isolation), it is the SQL Database software that does this enforcement. For Azure Storage, it is the Azure Storage software. In the case of non-durable drives of a VM, it is the VHD handling code of the Host OS. The mapping from virtual to physical address takes place outside of the customer VM.
-Finally, as described in *[Data encryption at rest](#data-encryption-at-rest)* section and depicted in Figure 16, the encryption key hierarchy relies on the Key Encryption Key (KEK) which can be kept in Azure Key Vault under customer control (i.e., customer-managed key – CMK) and used to encrypt the Data Encryption Key (DEK), which in turn encrypts data at rest using AES-256 symmetric encryption. Data in Azure Storage is encrypted at rest by default and customers can choose to have encryption keys under their own control. In this manner, customers can also prevent access to their data stored in Azure. Moreover, since the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which DEKs can be deleted via deletion of the KEK.
+Finally, as described in *[Data encryption at rest](#data-encryption-at-rest)* section and depicted in Figure 16, the encryption key hierarchy relies on the Key Encryption Key (KEK) which can be kept in Azure Key Vault under customer control (i.e., customer-managed key – CMK) and used to encrypt the Data Encryption Key (DEK), which in turn encrypts data at rest using AES-256 symmetric encryption. Data in Azure Storage is encrypted at rest by default and customers can choose to have encryption keys under their own control. In this manner, customers can also prevent access to their data stored in Azure. Moreover, since the KEK is required to decrypt the DEK, the KEK is effectively a single point by which DEK can be deleted via deletion of the KEK.
### Data retention
At all times during the term of customer’s Azure subscription, customer has the ability to access, extract, and delete customer data stored in Azure.
If a subscription expires or is terminated, Microsoft will preserve customer data for a 90-day retention period to permit customers to extract data or renew their subscriptions. After this retention period, Microsoft will delete all customer data within an additional 90 days, i.e., customer data will be permanently deleted 180 days after expiration or termination. Given the data retention procedure, customers can control how long their data is stored by timing when they end the service with Microsoft. It is recommended that customers do not terminate their service until they have extracted all data so that the initial 90-day retention period can act as a safety buffer should customers later realize they missed something.
-If the customer deleted an entire storage account by mistake, they should contact [Azure Support](https://azure.microsoft.com/support/options/) promptly for assistance with recovery. Customers can [create and manage support requests](../azure-portal/supportability/how-to-create-azure-support-request.md) in the Azure portal. A storage account deleted within a subscription is retained for two weeks to allow for recovery from accidental deletion, after which it is permanently deleted. However, when a storage object (e.g., blob, file, queue, table) is itself deleted, the delete operation is immediate and irreversible. Unless the customer made a backup, deleted storage objects cannot be recovered. For Blob storage, customers can implement additional protection against accidental or erroneous modifications or deletes by enabling [soft delete](../storage/blobs/soft-delete-blob-overview.md). When [soft delete is enabled](../storage/blobs/soft-delete-blob-enable.md) for a storage account, blobs, blob versions, and snapshots in that storage account may be recovered after they are deleted, within a retention period specified by the customer. To avoid retention of data after storage account or subscription deletion, customers can delete storage objects individually before deleting the storage account or subscription.
+If the customer deleted an entire storage account by mistake, they should contact [Azure Support](https://azure.microsoft.com/support/options/) promptly for assistance with recovery. Customers can [create and manage support requests](../azure-portal/supportability/how-to-create-azure-support-request.md) in the Azure portal. A storage account deleted within a subscription is retained for two weeks to allow for recovery from accidental deletion, after which it is permanently deleted. However, when a storage object (e.g., blob, file, queue, table) is itself deleted, the delete operation is immediate and irreversible. Unless the customer made a backup, deleted storage objects cannot be recovered. For Blob storage, customers can implement additional protection against accidental or erroneous modifications or deletions by enabling [soft delete](../storage/blobs/soft-delete-blob-overview.md). When [soft delete is enabled](../storage/blobs/soft-delete-blob-enable.md) for a storage account, blobs, blob versions, and snapshots in that storage account may be recovered after they are deleted, within a retention period specified by the customer. To avoid retention of data after storage account or subscription deletion, customers can delete storage objects individually before deleting the storage account or subscription.
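As a sketch, blob soft delete can be enabled programmatically with the azure-storage-blob (v12) SDK; the connection string and 14-day window below are placeholders.

```python
from azure.storage.blob import BlobServiceClient, RetentionPolicy

service = BlobServiceClient.from_connection_string("<connection-string>")
service.set_service_properties(
    delete_retention_policy=RetentionPolicy(enabled=True, days=14)  # recovery window
)
```

Within that window, a deleted blob can be restored, for example with `BlobClient.undelete_blob()`.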
For accidental deletion involving Azure SQL Database, customers should check backups that the service makes automatically (e.g., full database backup is done weekly, and differential database backups are done hourly) and use point-in-time restore. Also, individual services (e.g., Azure DevOps) can have their own policies for [accidental data deletion](/azure/devops/organizations/security/data-protection#mistakes-happen).
### Data destruction
-If a disk drive used for storage suffers a hardware failure, it is securely [erased or destroyed](https://www.microsoft.com/trustcenter/privacy/data-management) before decommissioning. The data on the drive is erased to ensure that the data cannot be recovered by any means. When such devices are decommissioned, Microsoft follows the [NIST SP 800-88 R1](http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-88r1.pdf) disposal process with data classification aligned to FIPS 199 Moderate. Magnetic, electronic, or optical media are purged or destroyed in accordance with the requirements established in NIST SP 800-88 R1 where the terms are defined as follows:
+If a disk drive used for storage suffers a hardware failure, it is securely [erased or destroyed](https://www.microsoft.com/trustcenter/privacy/data-management) before decommissioning. The data on the drive is erased to ensure that the data cannot be recovered by any means. When such devices are decommissioned, Microsoft follows the [NIST SP 800-88 R1](https://csrc.nist.gov/publications/detail/sp/800-88/rev-1/final) disposal process with data classification aligned to FIPS 199 Moderate. Magnetic, electronic, or optical media are purged or destroyed in accordance with the requirements established in NIST SP 800-88 R1 where the terms are defined as follows:
- **Purge:** “a media sanitization process that protects the confidentiality of information against a laboratory attack”, which involves “resources and knowledge to use nonstandard systems to conduct data recovery attempts on media outside their normal operating environment” using “signal processing equipment and specially trained personnel.” Note: For hard disk drives (including ATA, SCSI, SATA, SAS, etc.) a firmware-level secure-erase command (single-pass) is acceptable, or a software-level three pass overwrite and verification (ones, zeros, random) of the entire physical media including recovery areas, if any. For solid state disks (SSD), a firmware-level secure-erase command is necessary.
- **Destroy:** “a variety of methods, including disintegration, incineration, pulverizing, shredding, and melting” after which the media “cannot be reused as originally intended.”
Purge and Destroy operations must be performed using tools and processes approved by the Microsoft Cloud + AI Security Group. Records must be kept of the erasure and destruction of assets. Devices that fail to complete the Purge successfully must be degaussed (for magnetic media only) or Destroyed.
In addition to technical implementation details that enable Azure compute, networking, and storage isolation, Microsoft has invested heavily in security assurance processes and practices to correctly develop logically isolated services and systems, as described in the next section.

## Security assurance processes and practices
Azure isolation assurance is further enforced by Microsoft’s internal use of the [Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) (SDL) and other strong security assurance processes to protect attack surfaces and mitigate threats. Microsoft has established industry-leading processes and tooling that provide high confidence in the Azure isolation guarantee.
- **Security Development Lifecycle (SDL)** – The Microsoft SDL introduces security and privacy considerations throughout all phases of the development process, helping developers build highly secure software, address security compliance requirements, and reduce development costs. The guidance, best practices, [tools](https://www.microsoft.com/securityengineering/sdl/resources), and processes in the Microsoft SDL are [practices](https://www.microsoft.com/securityengineering/sdl/practices) used internally to build all Azure services and create more secure products and services. This process is also publicly documented to share Microsoft’s learnings with the broader industry and incorporate industry feedback to create a stronger security development process.
- **Tooling and processes** – All Azure code is subject to an extensive set of both static and dynamic analysis tools that identify potential vulnerabilities, ineffective security patterns, memory corruption, user privilege issues, and other critical security problems.
  - *Purpose built fuzzing* – A testing technique used to find security vulnerabilities in software products and services. It consists of repeatedly feeding modified, or fuzzed, data to software inputs to trigger hangs, exceptions, and crashes, i.e., fault conditions that could be leveraged by an attacker to disrupt or take control of applications and services. The Microsoft SDL recommends [fuzzing](https://www.microsoft.com/research/blog/a-brief-introduction-to-fuzzing-and-why-its-an-important-tool-for-developers/) all attack surfaces of a software product, especially those surfaces that expose a data parser to untrusted data.
  - *Live-site penetration testing* – Microsoft conducts [ongoing live-site penetration testing](https://gallery.technet.microsoft.com/Cloud-Red-Teaming-b837392e) to improve cloud security controls and processes, as part of the Red Teaming program described later in this section. Penetration testing is a security analysis of a software system performed by skilled security professionals simulating the actions of a hacker. The objective of a penetration test is to uncover potential vulnerabilities resulting from coding errors, system configuration faults, or other operational deployment weaknesses. The tests are conducted against Azure infrastructure and platforms as well as Microsoft’s own tenants, applications, and data. Customer tenants, applications, and data hosted in Azure are never targeted; however, customers can conduct [their own penetration testing](../security/fundamentals/pen-testing.md) of their applications deployed in Azure.
  - *Threat modeling* – A core element of the Microsoft SDL. It’s an engineering technique used to help identify threats, attacks, vulnerabilities, and countermeasures that could affect applications and services. [Threat modeling](../security/develop/threat-modeling-tool-getting-started.md) is part of the Azure routine development lifecycle.
  - *Automated build alerting of changes to attack surface area* – [Attack Surface Analyzer](https://github.com/microsoft/attacksurfaceanalyzer) is a Microsoft-developed open-source security tool that analyzes the attack surface of a target system and reports on potential security vulnerabilities introduced during the installation of software or system misconfiguration. The core feature of Attack Surface Analyzer is the ability to “diff” an operating system's security configuration, before and after a software component is installed. This feature is important because most installation processes require elevated privileges, and once granted, can lead to unintended system configuration changes.
-- **Mandatory security training** ΓÇô The Microsoft Azure security training and awareness program requires all personnel responsible for Azure development and operations to take essential training as well as any additional training based on individual job requirements. These procedures provide a standard approach, tools, and techniques used to implement and sustain the awareness program. Microsoft has implemented a security awareness program called STRIKE that provides monthly e-mail communication to all Azure engineering personnel about security awareness and allows employees to register for in-person or online security awareness training. STRIKE offers a series of security training events throughout the year, as well as STRIKE Central, which is a centralized online resource for security awareness, training, documentation, and community engagement.-- **Bug Bounty Program** ΓÇô Microsoft strongly believes that close partnership with academic and industry researchers drives a higher level of security assurance for customers and their data. Security researchers play an integral role in the Azure ecosystem by discovering vulnerabilities missed in the software development process. The [Microsoft Bug Bounty Program](https://www.microsoft.com/msrc/bounty) is designed to supplement and encourage research in relevant technologies (e.g., encryption, spoofing, hypervisor isolation, elevation of privileges, etc.) to better protect AzureΓÇÖs infrastructure and customer data. As an example, for each critical vulnerability identified in the Azure Hypervisor, Microsoft compensates security researchers up to $250,000 ΓÇô a significant amount to incentivize participation and vulnerability disclosure. The bounty range for [vulnerability reports on Azure services](https://www.microsoft.com/msrc/bounty-microsoft-azure) is up to $300,000.-- **Red Team activities** ΓÇô Microsoft utilizes [Red Teaming](https://gallery.technet.microsoft.com/Cloud-Red-Teaming-b837392e), a form of live site penetration testing against Microsoft-managed infrastructure, services, and applications. Microsoft simulates real-world breaches, continuously monitors security, and practices security incident response to test and improve the security of Azure. Red Teaming is predicated on the Assume Breach security strategy and executed by two core groups: Red Team (attackers) and Blue Team (defenders). The approach is designed to test Azure systems and operations using the same tactics, techniques, and procedures as real adversaries against live production infrastructure, without the foreknowledge of the infrastructure and platform Engineering or Operations teams. This approach tests security detection and response capabilities, and helps identify production vulnerabilities, configuration errors, invalid assumptions, or other security issues in a controlled manner. Every Red Team breach is followed by full disclosure between the Red Team and Blue Team to identify gaps, address findings, and significantly improve breach response.
+ - *Purpose-built fuzzing* – A testing technique used to find security vulnerabilities in software products and services. It consists of repeatedly feeding modified, or fuzzed, data to software inputs to trigger hangs, exceptions, and crashes, i.e., fault conditions that could be leveraged by an attacker to disrupt or take control of applications and services. The Microsoft SDL recommends [fuzzing](https://www.microsoft.com/research/blog/a-brief-introduction-to-fuzzing-and-why-its-an-important-tool-for-developers/) all attack surfaces of a software product, especially those surfaces that expose a data parser to untrusted data.
+ - *Live-site penetration testing* – Microsoft conducts [ongoing live-site penetration testing](https://download.microsoft.com/download/C/1/9/C1990DBA-502F-4C2A-848D-392B93D9B9C3/Microsoft_Enterprise_Cloud_Red_Teaming.pdf) to improve cloud security controls and processes, as part of the Red Teaming program described later in this section. Penetration testing is a security analysis of a software system performed by skilled security professionals simulating the actions of a hacker. The objective of a penetration test is to uncover potential vulnerabilities resulting from coding errors, system configuration faults, or other operational deployment weaknesses. The tests are conducted against Azure infrastructure and platforms as well as Microsoft's own tenants, applications, and data. Customer tenants, applications, and data hosted in Azure are never targeted; however, customers can conduct [their own penetration testing](../security/fundamentals/pen-testing.md) of their applications deployed in Azure.
+ - *Threat modeling* – A core element of the Microsoft SDL. It's an engineering technique used to help identify threats, attacks, vulnerabilities, and countermeasures that could affect applications and services. [Threat modeling](../security/develop/threat-modeling-tool-getting-started.md) is part of the Azure routine development lifecycle.
+ - *Automated build alerting of changes to attack surface area* – [Attack Surface Analyzer](https://github.com/microsoft/attacksurfaceanalyzer) is a Microsoft-developed open-source security tool that analyzes the attack surface of a target system and reports on potential security vulnerabilities introduced during the installation of software or system misconfiguration. The core feature of Attack Surface Analyzer is the ability to "diff" an operating system's security configuration, before and after a software component is installed (a toy sketch of this before/after diff appears after this list). This feature is important because most installation processes require elevated privileges, and once granted, those privileges can lead to unintended system configuration changes.
+- **Mandatory security training** – The Microsoft Azure security training and awareness program requires all personnel responsible for Azure development and operations to take essential training as well as any additional training based on individual job requirements. These procedures provide a standard approach, tools, and techniques used to implement and sustain the awareness program. Microsoft has implemented a security awareness program called STRIKE that provides monthly e-mail communication to all Azure engineering personnel about security awareness and allows employees to register for in-person or online security awareness training. STRIKE offers a series of security training events throughout the year, as well as STRIKE Central, which is a centralized online resource for security awareness, training, documentation, and community engagement.
+- **Bug Bounty Program** – Microsoft strongly believes that close partnership with academic and industry researchers drives a higher level of security assurance for customers and their data. Security researchers play an integral role in the Azure ecosystem by discovering vulnerabilities missed in the software development process. The [Microsoft Bug Bounty Program](https://www.microsoft.com/msrc/bounty) is designed to supplement and encourage research in relevant technologies (e.g., encryption, spoofing, hypervisor isolation, elevation of privileges, etc.) to better protect Azure's infrastructure and customer data. As an example, for each critical vulnerability identified in the Azure Hypervisor, Microsoft compensates security researchers up to $250,000 – a significant amount to incentivize participation and vulnerability disclosure. The bounty range for [vulnerability reports on Azure services](https://www.microsoft.com/msrc/bounty-microsoft-azure) is up to $300,000.
+- **Red Team activities** – Microsoft utilizes [Red Teaming](https://download.microsoft.com/download/C/1/9/C1990DBA-502F-4C2A-848D-392B93D9B9C3/Microsoft_Enterprise_Cloud_Red_Teaming.pdf), a form of live-site penetration testing against Microsoft-managed infrastructure, services, and applications. Microsoft simulates real-world breaches, continuously monitors security, and practices security incident response to test and improve the security of Azure. Red Teaming is predicated on the Assume Breach security strategy and executed by two core groups: Red Team (attackers) and Blue Team (defenders). The approach is designed to test Azure systems and operations using the same tactics, techniques, and procedures as real adversaries against live production infrastructure, without the foreknowledge of the infrastructure and platform Engineering or Operations teams. This approach tests security detection and response capabilities, and helps identify production vulnerabilities, configuration errors, invalid assumptions, or other security issues in a controlled manner. Every Red Team breach is followed by full disclosure between the Red Team and Blue Team to identify gaps, address findings, and significantly improve breach response.
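To make the Attack Surface Analyzer idea above concrete, here is a minimal, self-contained sketch of a before/after security-configuration diff. It is a toy illustration only, not code from the actual tool; the paths and the choice of file-permission bits as the tracked attribute are arbitrary examples.

```python
# Toy illustration of a before/after "diff" in the spirit of Attack Surface
# Analyzer; not the actual tool. It snapshots file permission bits under a
# directory and reports what an installation added, removed, or changed.
import os
import stat

def snapshot(root):
    """Map each file under root to its permission string (e.g., '-rw-r--r--')."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                state[path] = stat.filemode(os.stat(path).st_mode)
            except OSError:
                continue  # skip files that vanished or are unreadable
    return state

def diff(before, after):
    """Return files added, removed, and permission-changed between snapshots."""
    added = sorted(after.keys() - before.keys())
    removed = sorted(before.keys() - after.keys())
    changed = sorted(p for p in before.keys() & after.keys()
                     if before[p] != after[p])
    return added, removed, changed

# Usage: snapshot, run the installer (typically elevated), snapshot again, diff.
before = snapshot("/etc")
# ... install the software component under test ...
after = snapshot("/etc")
added, removed, changed = diff(before, after)
print(f"added: {added}\nremoved: {removed}\nchanged: {changed}")
```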
+When migrating to the cloud, customers accustomed to traditional on-premises data center deployment will usually conduct a risk assessment to gauge their threat exposure and formulate mitigating measures. In many of these instances, security considerations for traditional on-premises deployment tend to be well understood whereas the corresponding cloud options tend to be new. The next section is intended to help customers with this comparison.
## Logical isolation considerations
+A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses [logical isolation](../security/fundamentals/isolation-choices.md) to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping enforce controls designed to keep customers from accessing one another's data or applications.  This section addresses concerns common to customers who are migrating from traditional on-premises physically isolated infrastructure to the cloud.
### Physical versus logical security considerations
+Table 6 provides a summary of key security considerations for physically isolated on-premises deployments (e.g., bare metal) versus logically isolated cloud-based deployments (e.g., Azure). It's useful to review these considerations prior to examining risks identified to be specific to shared cloud environments.
**Table 6.** Key security considerations for physical versus logical isolation
Table 6 provides a summary of key security considerations for physically isolate
Listed below are key risks that are unique to shared cloud environments that may need to be addressed when accommodating sensitive data and workloads.

### Exploitation of vulnerabilities in virtualization technologies
+Compared to traditional on-premises hosted systems, Azure provides a greatly **reduced attack surface** by using a locked-down Windows Server core for the Host OS layered over the Hypervisor. Moreover, by default, guest PaaS VMs do not have any user accounts to accept incoming remote connections, and the default Windows administrator account is disabled. Customer software in PaaS VMs is restricted by default to running under a low-privilege account, which helps protect the customer's service from attacks by its own end users. Customers can modify these permissions, and they can also choose to configure their VMs to allow remote administrative access.
+PaaS VMs offer more advanced **protection against persistent malware** infections than traditional physical server solutions, which, if compromised by an attacker, can be difficult to clean even after the vulnerability is corrected. The attacker may have left behind modifications to the system that allow re-entry, and it is a challenge to find all such changes. In the extreme case, the system must be reimaged from scratch with all software reinstalled, sometimes resulting in the loss of application data. With PaaS VMs, reimaging is a routine part of operations, and it can help clean out intrusions that have not even been detected. This approach makes it much more difficult for a compromise to persist.
+When VMs belonging to different customers are running on the same physical server, it is the Hypervisor's job to ensure that they cannot learn anything important about what the other customer's VMs are doing. Azure helps block unauthorized direct communication by design; however, there are subtle effects where one customer might be able to characterize the work being done by another customer. The most important of these effects are timing effects when different VMs are competing for the same resources. By carefully comparing operations counts on CPUs with elapsed time, a VM can learn something about what other VMs on the same server are doing. Known as **side-channel attacks**, these exploits have received plenty of attention in the academic press where researchers have been seeking to learn much more specific information about what is going on in a peer VM. Of particular interest are efforts to learn the cryptographic keys of a peer VM by measuring the timing of certain memory accesses and inferring which cache lines the victim's VM is reading and updating. Under controlled conditions with VMs using hyper-threading, successful attacks have been demonstrated against commercially available implementations of cryptographic algorithms. There are several mitigations in Azure that reduce the risk of such an attack:
+- The standard Azure cryptographic libraries have been designed to resist such attacks by not having cache access patterns depend on the cryptographic keys being used (a brief illustration of this data-independent-timing principle follows this list).
+- Azure uses a VM host placement algorithm that is highly sophisticated and nearly impossible to predict, which helps reduce the chances of an adversary-controlled VM being placed on the same host as the target VM.
+- All Azure servers have at least eight physical cores and some have many more. Increasing the number of cores that share the load placed by various VMs adds noise to an already weak signal.
+- Customers can provision VMs on hardware dedicated to a single customer by using [Azure Dedicated Host](../virtual-machines/dedicated-hosts.md) or [Isolated VMs](../virtual-machines/isolation.md), as described in the *[Physical isolation](#physical-isolation)* section.
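The first mitigation in the list above rests on a general principle: security-sensitive code should take the same time and touch memory in the same pattern regardless of secret values. The following minimal Python sketch contrasts a naive, early-exit comparison (whose timing leaks where two byte strings first differ) with the standard library's constant-time comparison. It illustrates the principle only; it is not Azure's cryptographic library code.

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Timing leak: returns as soon as a byte differs, so an attacker who can
    # measure response times can recover a secret token byte by byte.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest runs in time independent of where the inputs differ,
    # the same data-independent-execution idea used by side-channel-resistant
    # cryptographic libraries.
    return hmac.compare_digest(a, b)

assert constant_time_equal(b"secret-token", b"secret-token")
assert not constant_time_equal(b"secret-token", b"guessed-token")
```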
+Overall, PaaS (or any workload that autocreates VMs) contributes to churn in VM placement that leads to randomized VM allocation. Random placement of customer VMs makes it much harder for attackers to get on the same host. In addition, host access is hardened with a greatly reduced attack surface, which makes these types of exploits difficult to sustain.
## Summary
+A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent customers from accessing one another's data or applications.
Azure addresses the perceived risk of resource sharing by providing a trustworthy foundation for assuring multi-tenant, cryptographically certain, logically isolated cloud services using a common set of principles:
+- User access controls with authentication and identity separation that leverage Azure Active Directory and Azure role-based access control (Azure RBAC).
+- Compute isolation for processing, including both logical and physical compute isolation.
+- Networking isolation including separation of network traffic and data encryption in transit.
+- Storage isolation with data encryption at rest using advanced algorithms with multiple ciphers and encryption keys, as well as provisions for customer-managed keys (CMK) under customer control in Azure Key Vault.
+- Security assurance processes embedded in service design to correctly develop logically isolated services, including Security Development Lifecycle (SDL) and other strong security assurance processes to protect attack surfaces and mitigate risks.
+In line with the shared responsibility model in cloud computing, this article provides customer guidance for activities that are part of the customer's responsibility. It also explores design principles and technologies available in Azure to help customers achieve their secure isolation objectives.
## Next steps

Learn more about:

- [Azure Security](../security/fundamentals/overview.md)
- [Azure Compliance](../compliance/index.yml)
- [Azure Government developer guidance](./documentation-government-developer-guide.md)
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compare-azure-government-global-azure.md
Last updated 02/17/2021
# Compare Azure Government and global Azure
+Microsoft Azure Government uses the same underlying technologies as global Azure, which includes the core components of [Infrastructure-as-a-Service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas/), [Platform-as-a-Service (PaaS)](https://azure.microsoft.com/overview/what-is-paas/), and [Software-as-a-Service (SaaS)](https://azure.microsoft.com/overview/what-is-saas/). Both Azure and Azure Government have the same comprehensive security controls in place, as well as the same Microsoft commitment on the safeguarding of customer data. While both cloud environments are assessed and authorized at the FedRAMP High impact level, Azure Government provides an additional layer of protection to customers through contractual commitments regarding storage of customer data in the United States and limiting potential access to systems processing customer data to screened US persons. These commitments may be of interest to customers using the cloud to store or process data subject to US export control regulations such as the EAR, ITAR, and DoE 10 CFR Part 810.
### Export control implications
+Customers are responsible for designing and deploying their applications to meet [export control requirements](./documentation-government-overview-itar.md) such as those prescribed in the EAR and ITAR. In doing so, customers should not include sensitive or restricted information in Azure resource names, as explained in [Considerations for naming Azure resources](./documentation-government-concept-naming-resources.md). Data stored or processed in customer VMs, storage accounts, databases, Azure Import/Export, Azure Cache for Redis, ExpressRoute, Azure Cognitive Search, App Service, API Management, and other Azure services suitable for holding, processing, or transmitting customer data can contain export-controlled data. However, metadata for these Azure services is not permitted to contain export-controlled data. This metadata includes all configuration data entered when creating and maintaining an Azure service, including subscription names, service names, server names, database names, tenant role names, resource groups, deployment names, resource names, resource tags, circuit name, etc. It also includes all shipping information that is used to transport media for Azure Import/Export, such as carrier name, tracking number, description, return information, drive list, package list, storage account name, container name, etc. Sensitive data should not be included in HTTP headers sent to the REST API in search/query strings as part of the API.
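Because resource names and other metadata must never carry export-controlled data, one practical safeguard is to screen names before resources are created. The sketch below is a hypothetical pre-deployment check: the pattern list and function name are invented for illustration, and each organization must define what counts as restricted in its own context.

```python
import re

# Hypothetical denylist; replace with terms your export-control review
# identifies as sensitive for your organization.
RESTRICTED_PATTERNS = [r"(?i)\bitar\b", r"(?i)\bsecret\b", r"(?i)munitions"]

def validate_resource_name(name: str) -> None:
    """Reject Azure resource names (which become service metadata) that
    appear to embed restricted or sensitive terms."""
    for pattern in RESTRICTED_PATTERNS:
        if re.search(pattern, name):
            raise ValueError(f"resource name {name!r} matches {pattern!r}")

validate_resource_name("vm-inventory-001")      # passes
# validate_resource_name("vm-itar-telemetry")   # would raise ValueError
```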
### Guidance for developers
+Azure Government services operate the same way as the corresponding services in global Azure, which is why most of the existing online Azure documentation applies equally well to Azure Government. However, there are some key differences that developers working on applications hosted in Azure Government must be aware of. For detailed information, see [Guidance for developers](./documentation-government-developer-guide.md). As a developer, you must know how to connect to Azure Government; once you connect, you will mostly have the same experience as in global Azure. The table below lists API endpoints in Azure vs. Azure Government for accessing and managing various services.
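As a minimal sketch of what those endpoint differences mean in practice, the following Python example (assuming the `azure-identity` and `azure-mgmt-resource` packages) points both the sign-in authority and the management endpoint at Azure Government before listing resource groups; the endpoint values mirror the table that follows.

```python
from azure.identity import AzureAuthorityHosts, DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Authenticate against the Azure Government authority
# (login.microsoftonline.us) instead of the global Azure authority.
credential = DefaultAzureCredential(
    authority=AzureAuthorityHosts.AZURE_GOVERNMENT
)

# Target the Azure Government Resource Manager endpoint and token scope.
client = ResourceManagementClient(
    credential,
    subscription_id="<subscription-id>",
    base_url="https://management.usgovcloudapi.net",
    credential_scopes=["https://management.usgovcloudapi.net/.default"],
)

for group in client.resource_groups.list():
    print(group.name)
```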
|Service category|Service name|Azure Public|Azure Government|Notes|
|--|--|--|--|--|
Microsoft's goal is to enable 100% parity in service availability between Azure
In general, service availability in Azure Government implies that all corresponding service features are available to customers. Variations to this approach and other applicable limitations are tracked and explained in this article based on the main service categories outlined in the [online directory of Azure services](https://azure.microsoft.com/services/). Additional considerations for service deployment and usage in Azure Government are also provided.

## AI + Machine Learning
+This section outlines variations and considerations when using **Azure Bot Service**, **Azure Machine Learning**, and **Cognitive Services** in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=machine-learning-service,bot-service,cognitive-services&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia).
### [Azure Bot Service](/azure/bot-service/)

The following Azure Bot Service **features are not currently available** in Azure Government:
The following Translator **features are not currently available** in Azure Gover
## Analytics
+This section outlines variations and considerations when using Analytics services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=data-share,power-bi-embedded,analysis-services,event-hubs,data-lake-analytics,storage,data-catalog,data-factory,synapse-analytics,stream-analytics,databricks,hdinsight&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia).
### [Azure Databricks](/azure/databricks/scenarios/what-is-azure-databricks)
The following Power BI Embedded **features are not yet available** in Azure Gove
## Compute
+This section outlines variations and considerations when using Compute services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud,azure-vmware-cloudsimple,cloud-services,batch,container-instances,app-service,service-fabric,functions,kubernetes-service,virtual-machine-scale-sets,virtual-machines&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia).
### [Virtual Machines](../virtual-machines/sizes.md)

The following Virtual Machines **features are not currently available** in Azure Government:
The following Azure Database for PostgreSQL **features are not currently availab
- Advanced Threat Protection
- Private endpoint connections
+- Hyperscale (Citus) and Flexible Server deployment options
## Developer Tools
+This section outlines variations and considerations when using Developer Tools services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=app-configuration,devtest-lab,lab-services,azure-devops&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia).
### [Azure DevTest Labs](../devtest-labs/devtest-lab-overview.md)

The following Azure DevTest Labs **features are not currently available** in Azure Government:
If you are using the IoT Hub connection string (instead of the Event Hub-compati
## Management and Governance
+This section outlines variations and considerations when using Management and Governance services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=managed-applications,azure-policy,network-watcher,monitor,traffic-manager,automation,scheduler,site-recovery,cost-management,backup,blueprints,advisor&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia).
> [!NOTE]
> This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM module, which will continue to receive bug fixes until at least December 2020. To learn more about the new Az module and AzureRM compatibility, see [**Introducing the new Azure PowerShell Az module**](/powershell/azure/new-azureps-module-az?preserve-view=true&view=azps-3.3.0). For Az module installation instructions, see [**Install Azure PowerShell**](/powershell/azure/install-az-ps?preserve-view=true&view=azps-3.3.0).
The following Azure Lighthouse **features are not currently available** in Azure
### [Azure Monitor](../azure-monitor/logs/data-platform-logs.md)

The following Azure Monitor **features are not currently available** in Azure Government:

- Solutions that are in preview in Microsoft Azure, including:
  - Windows 10 Upgrade Analytics solution
  - Application Insights solution
The following Azure Monitor **features are not currently available** in Azure Go
- Azure metrics and Azure diagnostics

The following Azure Monitor **features behave differently** in Azure Government:

- To connect your System Center Operations Manager management group to Azure Monitor logs, you need to download and import updated management packs.
- System Center Operations Manager 2016
  1. Install [Update Rollup 2 for System Center Operations Manager 2016](https://support.microsoft.com/help/3209591).
The following Azure Monitor **features behave differently** in Azure Government:
### [Azure Advisor](../advisor/advisor-overview.md)

The following Azure Advisor recommendation **features are not currently available** in Azure Government:

- High Availability
  - Configure your VPN gateway to active-active for connection resilience
  - Create Azure Service Health alerts to be notified when Azure issues affect you
If you want to be more aggressive at identifying underutilized virtual machines,
## Media

This section outlines variations and considerations when using Media services in the Azure Government environment.
+For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=cdn,media-services&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia). For Azure Media Services v3 availability, see [Azure clouds and regions in which Media Services v3 exists](../media-services/latest/azure-clouds-regions.md).
### [Media Services](../media-services/previous/index.yml)

For information on how to connect to Media Services v2, see [Access the Azure Media Services API with Azure AD authentication](../media-services/previous/media-services-use-aad-auth-to-access-ams-api.md). The following Media Services **features are not currently available** in Azure Government:

- Analyzing – the Azure Media Indexer 2 Preview Azure Media Analytics media processor is not available in Azure Government.
- CDN integration – there is no CDN integration with streaming endpoints in Azure Government data centers.
For more information, see [Create a Video Indexer account](../media-services/vid
## Migration
+This section outlines variations and considerations when using Migration services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=database-migration,cost-management,azure-migrate,site-recovery&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia).
### [Azure Migrate](../migrate/migrate-services-overview.md)

The following Azure Migrate **features are not currently available** in Azure Government:

- Dependency visualization functionality, because Azure Migrate depends on Service Map, which is currently unavailable in Azure Government.
- You can only create assessments for Azure Government as target regions and using Azure Government offers.

## Networking
+This section outlines variations and considerations when using Networking services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-bastion,frontdoor,virtual-wan,dns,ddos-protection,cdn,azure-firewall,network-watcher,load-balancer,vpn-gateway,expressroute,application-gateway,virtual-network&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia).
### [Azure ExpressRoute](../expressroute/index.yml)
+Azure ExpressRoute is used to create private connections between Azure Government datacenters and a customer's on-premises infrastructure or a colocation facility. ExpressRoute connections do not go over the public Internet; they offer optimized pathways (shortest hops, lowest latency, highest performance, etc.) and connectivity to Azure Government geo-redundant regions.
- By default, all Azure Government ExpressRoute connectivity is configured active-active redundant, with support for bursting, and delivers up to 10-Gbps circuit capacity (the smallest circuit is 50 Mbps).
- Microsoft owns and operates all fiber infrastructure between Azure Government regions and Azure Government ExpressRoute Meet-Me locations.
- Azure Government ExpressRoute provides connectivity to Microsoft Azure, Microsoft 365, and Dynamics 365 cloud services.
+Aside from ExpressRoute, customers can also use an [IPsec-protected VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) (site-to-site for a typical organization) to connect securely from their on-premises infrastructure to Azure Government. For network services to support Azure Government customer applications and solutions, it is strongly recommended that ExpressRoute (private connectivity) is implemented to connect to Azure Government. If VPN connections are used, consider the following:
- Customers should contact their authorizing official/agency to determine whether private connectivity or another secure connection mechanism is required and to identify any additional restrictions to consider.
- Customers should decide whether to mandate that the site-to-site VPN is routed through a private connectivity zone.
All customers who utilize a private connectivity architecture should validate th
### BGP communities

This section provides an overview of how BGP communities are used with ExpressRoute in Azure Government. Microsoft advertises routes in the public peering and Microsoft peering paths, with routes tagged with appropriate community values. The rationale for doing so and the details on community values are described below.
+If you are connecting to Microsoft through ExpressRoute at any one peering location within the Azure Government region, you will have access to all Microsoft cloud services across all regions within the government boundary. For example, if you connected to Microsoft in Washington D.C. through ExpressRoute, you would have access to all Microsoft cloud services hosted in Azure Government. [ExpressRoute overview](../expressroute/expressroute-introduction.md) provides details on locations and partners, as well as a list of peering locations for Azure Government.
You can purchase more than one ExpressRoute circuit. Having multiple connections offers you significant benefits for high availability due to geo-redundancy. In cases where you have multiple ExpressRoute circuits, you will receive the same set of prefixes advertised from Microsoft on the public peering and Microsoft peering paths. This means you will have multiple paths from your network into Microsoft, which can cause suboptimal routing decisions within your network. As a result, you may experience suboptimal connectivity to different services.
Traffic Manager health checks can originate from certain IP addresses for Azure
## Security
+This section outlines variations and considerations when using Security services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-sentinel,azure-dedicated-hsm,information-protection,application-gateway,vpn-gateway,security-center,key-vault,active-directory-ds,ddos-protection,active-directory&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia).
### [Azure Active Directory Premium P1 and P2](../active-directory/index.yml)

The following features have known limitations in Azure Government:
The following features have known limitations in Azure Government:
- Azure AD SSPR from Windows 10 login screen is not available

### [Azure Information Protection](/azure/information-protection/what-is-information-protection)
+Azure Information Protection Premium is part of the [Enterprise Mobility + Security](/enterprise-mobility-security) suite. For details on this service and how to use it, see the [Azure Information Protection Premium Government Service Description](/enterprise-mobility-security/solutions/ems-aip-premium-govt-service-description).
### [Azure Security Center](../security-center/security-center-introduction.md)

The following Azure Security Center **features are not currently available** in Azure Government:
For information about EMS suite capabilities in Azure Government, see the [Enter
## Storage
+This section outlines variations and considerations when using Storage services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=hpc-cache,managed-disks,storsimple,backup,storage&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia).
### [Azure Storage](../storage/index.yml)

For a Quickstart that will help you get started with Storage in Azure Government, see [Develop with Storage API on Azure Government](./documentation-government-get-started-connect-to-storage.md).

**Storage pairing in Azure Government**
+Azure relies on [paired regions](../best-practices-availability-paired-regions.md) to deliver [geo-redundant storage](../storage/common/storage-redundancy.md). The following table shows the primary and secondary region pairings in Azure Government.
|Geography|Regional Pair A|Regional Pair B|
|--|--|--|
When you're deploying the **StorSimple** Manager service, use the [https://porta
### [Azure Import/Export](../import-export/storage-import-export-service.md)

With Import/Export jobs for US Gov Arizona or US Gov Texas, the mailing address is for US Gov Virginia. The data is loaded into selected storage accounts from the US Gov Virginia region.
+For DoD IL5 data, use a DoD region storage account to ensure that data is loaded directly into the DoD regions. For more information, see [Azure Import/Export IL5 isolation guidance](./documentation-government-impact-level-5.md#azure-importexport-service).
For all jobs, we recommend that you rotate your storage account keys after the job is complete to remove any access granted during the process. For more information, see [Manage storage account access keys](../storage/common/storage-account-keys-manage.md).
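As one possible way to automate that rotation, the sketch below (assuming the `azure-identity` and `azure-mgmt-storage` packages) regenerates both keys of the storage account used for the Import/Export job; the resource names are placeholders. In Azure Government, pair this with the government authority and management endpoint shown in the earlier connection sketch.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

credential = DefaultAzureCredential()
client = StorageManagementClient(credential, subscription_id="<subscription-id>")

# Regenerate both access keys after the Import/Export job completes, which
# invalidates any keys shared during the process.
for key_name in ("key1", "key2"):
    client.storage_accounts.regenerate_key(
        resource_group_name="<resource-group>",
        account_name="<storage-account>",
        regenerate_key={"key_name": key_name},
    )
```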
## Web

This section outlines variations and considerations when using Web services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud,signalr-service,api-management,notification-hubs,search,cdn,app-service-linux,app-service&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia).
### [API Management](../api-management/index.yml)

The following API Management **features are not currently available** in Azure Government:
azure-government Documentation Government Overview Itar https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-overview-itar.md
Title: Azure support for export controls
description: Customer guidance for Azure export control support
Previously updated: 02/12/2021
Last updated: 02/25/2021
The US Department of Commerce is responsible for enforcing the [Export Administr
The EAR is applicable to dual-use items that have both commercial and military applications and to items with purely commercial application. The BIS has provided guidance that cloud service providers (CSP) are not exporters of customers' data due to the customers' use of cloud services. Moreover, in the [final rule](https://www.federalregister.gov/documents/2016/06/03/2016-12734/revisions-to-definitions-in-the-export-administration-regulations) published on 3 June 2016, BIS clarified that EAR licensing requirements would not apply if the transmission and storage of unclassified technical data and software were encrypted end-to-end using Federal Information Processing Standard (FIPS) 140-2 validated cryptographic modules and not intentionally stored in a military-embargoed country (that is, Country Group D:5 as described in [Supplement No. 1 to Part 740](https://ecfr.io/Title-15/pt15.2.740#ap15.2.740_121.1) of the EAR) or in the Russian Federation. The US Department of Commerce has made it clear that, when data or software is uploaded to the cloud, the customer, not the cloud provider, is the "exporter" who has the responsibility to ensure that transfers, storage, and access to that data or software comply with the EAR.
+Both Azure and Azure Government can help customers subject to the EAR meet their compliance requirements. Except for the Hong Kong region, Azure and Azure Government datacenters are not located in proscribed countries or in the Russian Federation. Azure and Azure Government rely on FIPS 140-2 validated cryptographic modules in the underlying operating system, and provide customers with a [wide range of options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md), which can store encryption keys in FIPS 140-2 validated hardware security modules (HSMs) under customer control, known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs are not exportable: there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. Moreover, Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract customer keys.
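As a brief sketch of the customer-managed key pattern described above (assuming the `azure-identity` and `azure-keyvault-keys` packages and an existing Premium-tier vault; the vault and key names are placeholders), the following example creates an HSM-protected RSA key; only the public portion is ever returned to the caller.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

credential = DefaultAzureCredential()
key_client = KeyClient(
    vault_url="https://<vault-name>.vault.azure.net",
    credential=credential,
)

# hardware_protected=True requests an RSA-HSM key: the private key material
# is generated inside the HSM and cannot be exported in clear text.
key = key_client.create_rsa_key("contoso-cmk", size=2048, hardware_protected=True)
print(key.id, key.key_type)  # key_type is 'RSA-HSM' for HSM-backed keys
```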
+Customers are responsible for choosing Azure or Azure Government regions for deploying their applications and data. Moreover, customers are responsible for designing their applications to apply end-to-end data encryption that meets EAR requirements. Microsoft does not inspect or approve customer applications deployed on Azure or Azure Government.
-Azure Government provides an extra layer of protection to customers through contractual commitments regarding storage of customer data in the United States and limiting potential access to systems processing customer data to screened U.S. persons.
+Azure Government provides an extra layer of protection to customers through contractual commitments regarding storage of customer data in the United States and limiting potential access to systems processing customer data to [screened US persons](./documentation-government-plan-security.md#screening).
## ITAR

The US Department of State has export control authority over defense articles, services, and related technologies under the [International Traffic in Arms Regulations](https://www.ecfr.gov/cgi-bin/text-idx?SID=8870638858a2595a32dedceb661c482c&mc=true&tpl=/ecfrbrowse/Title22/22CIsubchapM.tpl) (ITAR) managed by the [Directorate of Defense Trade Controls](http://www.pmddtc.state.gov/) (DDTC). Items under ITAR protection are documented on the [United States Munitions List](https://www.ecfr.gov/cgi-bin/text-idx?rgn=div5&node=22:1.0.1.13.58) (USML). Customers who are manufacturers, exporters, and brokers of defense articles, services, and related technologies as defined on the USML must be registered with DDTC, must understand and abide by ITAR, and must self-certify that they operate in accordance with ITAR.
+DDTC [revised the ITAR rules](https://www.federalregister.gov/documents/2019/12/26/2019-27438/international-traffic-in-arms-regulations-creation-of-definition-of-activities-that-are-not-exports) effective 25 March 2020 to align them more closely with the EAR. These ITAR revisions introduced an end-to-end data encryption carve-out that incorporated many of the same terms that the Commerce Department adopted in 2016 for the EAR. Specifically, the revised ITAR rules state that activities that do not constitute exports, re-exports, re-transfers, or temporary imports include (among other activities) the sending, taking, or storing of technical data that is 1) unclassified, 2) secured using end-to-end encryption, 3) secured using FIPS 140-2 compliant cryptographic modules as prescribed in the regulations, 4) not intentionally sent to a person in or stored in a [country proscribed in § 126.1](https://ecfr.io/Title-22/pt22.1.126#se22.1.126_11) or the Russian Federation, and 5) not sent from a country proscribed in § 126.1 or the Russian Federation. Moreover, DDTC clarified that data in-transit via the Internet is not deemed to be stored. End-to-end encryption implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption is not provided to any third party.
There is no ITAR compliance certification; however, both Azure and Azure Government can help customers subject to ITAR meet their compliance obligations. Except for the Hong Kong region, Azure and Azure Government datacenters are not located in proscribed countries or in the Russian Federation. Azure and Azure Government rely on FIPS 140-2 validated cryptographic modules in the underlying operating system, and provide customers with a [wide range of options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md), which can store encryption keys in FIPS 140-2 validated hardware security modules (HSMs) under customer control - known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs are not exportable – there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. Moreover, Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract customer keys.
+Customers are responsible for choosing Azure or Azure Government regions for deploying their applications and data. Moreover, customers are responsible for designing their applications to apply end-to-end data encryption that meets ITAR requirements. Microsoft does not inspect or approve customer applications deployed on Azure or Azure Government.
-Azure Government provides an extra layer of protection to customers through contractual commitments regarding storage of customer data in the United States and limiting potential access to systems processing customer data to screened US persons.
+Azure Government provides an extra layer of protection to customers through contractual commitments regarding storage of customer data in the United States and limiting potential access to systems processing customer data to [screened US persons](./documentation-government-plan-security.md#screening).
## DoE 10 CFR Part 810

The US Department of Energy (DoE) export control regulation [10 CFR Part 810](http://www.gpo.gov/fdsys/pkg/FR-2015-02-23/pdf/2015-03479.pdf) implements section 57b.(2) of the [Atomic Energy Act of 1954](https://www.nrc.gov/docs/ML1327/ML13274A489.pdf) (AEA), as amended by section 302 of the [Nuclear Nonproliferation Act of 1978](http://www.nrc.gov/docs/ML1327/ML13274A492.pdf#page=19) (NNPA). It is administered by the [National Nuclear Security Administration](https://www.energy.gov/nnsa/national-nuclear-security-administration) (NNSA). The revised Part 810 (final rule) became effective on 25 March 2015, and, among other things, it controls the export of unclassified nuclear technology and assistance. It enables peaceful nuclear trade by helping to assure that nuclear technologies exported from the United States will not be used for non-peaceful purposes. Paragraph 810.7 (b) states that specific DoE authorization is required for providing or transferring sensitive nuclear technology to any foreign entity.
-**Azure Government can accommodate customers subject to DoE 10 CFR Part 810** export control requirements because it is designed to meet specific controls that restrict access to information and systems to US persons among Azure operations personnel. Customers deploying data to Azure Government are responsible for their own security classification process. For data subject to DoE export controls, the classification system is augmented by the [Unclassified Controlled Nuclear Information](https://www.energy.gov/sites/prod/files/hss/Classification/docs/UCNI-Tri-fold.pdf) (UCNI) controls established by Section 148 of the AEA.
+**Azure Government can accommodate customers subject to DoE 10 CFR Part 810** export control requirements because it is designed to meet specific controls that restrict access to information and systems to [US persons](./documentation-government-plan-security.md#screening) among Azure operations personnel. Customers deploying data to Azure Government are responsible for their own security classification process. For data subject to DoE export controls, the classification system is augmented by the [Unclassified Controlled Nuclear Information](https://www.energy.gov/sites/prod/files/hss/Classification/docs/UCNI-Tri-fold.pdf) (UCNI) controls established by Section 148 of the AEA.
## NRC 10 CFR Part 110
The [Nuclear Regulatory Commission](https://www.nrc.gov/) (NRC) is responsible f
The [Office of Foreign Assets Control](https://www.treasury.gov/about/organizational-structure/offices/Pages/Office-of-Foreign-Assets-Control.aspx) (OFAC) is responsible for administering and enforcing economic and trade sanctions based on US foreign policy and national security goals against targeted foreign countries, terrorists, international narcotics traffickers, and those entities engaged in activities related to the proliferation of weapons of mass destruction.
+The OFAC defines prohibited transactions as trade or financial transactions and other dealings in which US persons may not engage unless authorized by OFAC or expressly exempted by statute. For web-based interactions, see [FAQ No. 73](https://home.treasury.gov/policy-issues/financial-sanctions/faqs/73) for general guidance released by OFAC, which specifies for example that "Firms that facilitate or engage in e-commerce should do their best to know their customers directly."
-As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/dpa), "Microsoft does not control or limit the regions from which customer or customer's end users may access or move customer data." For Microsoft online services, Microsoft conducts due diligence to prevent transactions with entities from OFAC embargoed countries; for example, a sanctions target is not allowed to provision Azure services. OFAC has not issued guidance (similar to the guidance provided by BIS for Export Administration Regulations) that draws a distinction between cloud service providers and customers when it comes to deemed export. Therefore, it would be the **responsibility of Microsoft customers to exclude sanctions targets from online transactions** involving customer applications (including web sites) deployed on Azure. Azure does not block network traffic to customer sites. Even though OFAC mentions that customers can restrict access based on IP address ranges, it also acknowledges that this approach does not fully address an internet firm's compliance risks. Therefore, OFAC recommends that e-commerce firms know their customers directly. Microsoft is not responsible for and does not have the means to know directly the end users that interact with applications deployed by customers on Azure.
+As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/dpa), "Microsoft does not control or limit the regions from which customer or customer's end users may access or move customer data." For Microsoft online services, Microsoft conducts due diligence to prevent transactions with entities from OFAC embargoed countries; for example, a sanctions target is not allowed to provision Azure services. OFAC has not issued guidance (similar to the guidance provided by BIS for the Export Administration Regulations) that draws a distinction between cloud service providers and customers when it comes to deemed export. Therefore, it would be the **responsibility of Microsoft customers to exclude sanctions targets from online transactions** involving customer applications (including web sites) deployed on Azure. Azure does not block network traffic to customer sites. Even though OFAC mentions that customers can restrict access based on IP address ranges, it also acknowledges that this approach does not fully address an internet firm's compliance risks. Therefore, OFAC recommends that e-commerce firms know their customers directly. Microsoft is not responsible for and does not have the means to know directly the end users that interact with applications deployed by customers on Azure.
OFAC sanctions are in place to prevent "conducting business with a sanctions target", that is, preventing transactions involving trade, payments, financial instruments, etc. OFAC sanctions are not about preventing a resident of a proscribed country from viewing a customer's public web site.
Customers should assess carefully how their use of Azure may implicate US export controls and determine whether any of the data they want to store or process in the cloud may be subject to export controls. Microsoft provides customers with contractual commitments, operational processes, and technical features to help them meet their export control obligations when using Azure. The following Azure features are available to customers to manage potential export control risks:
-- **Ability to control data location** - Customers have visibility as to where their data is stored, and robust tools to restrict data storage to a single geography, region, or country. For example, a customer may therefore ensure that data is stored in the United States or their country of choice and minimize transfer of controlled technology/technical data outside the target country. Customer data is not *intentionally stored* in a non-conforming location, consistent with the EAR and ITAR rules.
+- **Ability to control data location** - Customers have visibility as to where their [data is stored](https://azure.microsoft.com/global-infrastructure/data-residency/), and robust tools to restrict data storage to a single geography, region, or country. For example, a customer may therefore ensure that data is stored in the United States or their country of choice and minimize transfer of controlled technology/technical data outside the target country. Customer data is not *intentionally stored* in a non-conforming location, consistent with the EAR and ITAR rules.
- **End-to-end encryption** - Implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption is not provided to any third party. Azure relies on FIPS 140-2 validated cryptographic modules in the underlying operating system, and provides customers with a [wide range of options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md), which can store encryption keys in FIPS 140-2 validated hardware security modules (HSMs) under customer control ([customer-managed keys](../security/fundamentals/encryption-models.md), CMK). Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract customer keys.
- **Control over access to data** - Customers can know and control who can access their data and on what terms. Microsoft technical support personnel do not need and do not have default access to customer data. For those rare instances where resolving customer support requests requires elevated access to customer data, [Customer Lockbox for Azure](../security/fundamentals/customer-lockbox-overview.md) puts customers in charge of approving or denying customer data access requests.
- **Tools and protocols to prevent unauthorized deemed export/re-export** - Apart from the EAR and ITAR *end-to-end encryption* safe harbor for physical storage locations, the use of encryption also helps protect against a potential deemed export (or deemed re-export): even if a non-US person has access to the encrypted data, nothing is revealed to that person, who cannot read or understand the data while it is encrypted, and thus there is no release of any controlled data. However, ITAR requires some authorization before granting foreign persons access to information that would enable them to decrypt ITAR technical data. Azure offers a wide range of encryption capabilities and solutions, flexibility to choose among encryption options, and robust tools for managing encryption.
Azure has extensive support to safeguard customer data using [data encryption](.
- Server-side encryption that uses service-managed keys, customer-managed keys (CMK) in Azure, or CMK in customer-controlled hardware.
- Client-side encryption that enables customers to manage and store keys on-premises or in another secure location, as in the sketch below.
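To make the client-side option concrete, here is a minimal sketch of end-to-end encryption in Python, assuming the open-source `cryptography` package; the sample data and key handling are illustrative only and not a prescribed Azure mechanism.

```python
# Illustrative sketch: client-side (end-to-end) encryption with AES-256-GCM.
# The key stays with the data owner; only ciphertext would be uploaded.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # AES-256, a FIPS-approved algorithm
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # 96-bit nonce, unique per message

plaintext = b"controlled technical data"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Only the intended recipient, holding the same key, can decrypt.
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```

Because the key never travels with the ciphertext, storing only the ciphertext in the cloud keeps the means of decryption away from any third party, which is the property on which the EAR and ITAR end-to-end encryption carve-outs described earlier rely.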
+Data encryption provides isolation assurances that are tied directly to encryption key access. Since Azure uses strong ciphers for data encryption, only entities with access to encryption keys can have access to data. Deleting or revoking encryption keys renders the corresponding data inaccessible.
### FIPS 140-2 validated cryptography
The [Federal Information Processing Standard (FIPS) 140-2](https://csrc.nist.gov
Microsoft maintains an active commitment to meeting the [FIPS 140-2 requirements](/azure/compliance/offerings/offering-fips-140-2), having validated cryptographic modules since the standard's inception in 2001. Microsoft validates its cryptographic modules under the NIST [Cryptographic Module Validation Program](https://csrc.nist.gov/Projects/cryptographic-module-validation-program) (CMVP). Multiple Microsoft products, including many cloud services, use these cryptographic modules.
-While the current CMVP FIPS 140-2 implementation guidance precludes a FIPS 140-2 validation for a cloud service, cloud service providers can obtain and operate FIPS 140-2 validated cryptographic modules for the computing elements that comprise their cloud services. Azure is built with a combination of hardware, commercially available operating systems (Linux and Windows), and Azure-specific version of Windows. Through the Microsoft [Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) (SDL), all Azure services use FIPS 140-2 approved algorithms for data security because the operating system uses FIPS 140-2 approved algorithms while operating at a hyper scale cloud. The corresponding crypto modules are FIPS 140-2 validated as part of the Microsoft [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). Moreover, Azure customers can store their own cryptographic keys and other secrets in FIPS 140-2 validated hardware security modules (HSM).
+While the current CMVP FIPS 140-2 implementation guidance precludes a FIPS 140-2 validation for a cloud service, cloud service providers can obtain and operate FIPS 140-2 validated cryptographic modules for the computing elements that comprise their cloud services. Azure is built with a combination of hardware, commercially available operating systems (Linux and Windows), and an Azure-specific version of Windows. Through the Microsoft [Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) (SDL), all Azure services use FIPS 140-2 approved algorithms for data security because the underlying operating system uses FIPS 140-2 approved algorithms while operating at hyperscale. The corresponding crypto modules are FIPS 140-2 validated as part of the Microsoft [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). Moreover, Azure customers can store their own cryptographic keys and other secrets in FIPS 140-2 validated hardware security modules (HSMs).
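At the application layer, customers can likewise standardize on FIPS-approved primitives. The following standard-library Python sketch is purely illustrative and not Azure-specific; it uses SHA-256 (FIPS 180-4) and HMAC-SHA-256 (FIPS 198-1) rather than legacy algorithms such as MD5 or SHA-1.

```python
# Illustrative sketch: prefer FIPS-approved primitives in application code.
import hashlib
import hmac

data = b"unclassified technical data"
digest = hashlib.sha256(data).hexdigest()  # FIPS 180-4 approved hash

# HMAC-SHA-256 (FIPS 198-1) provides a keyed integrity check.
tag = hmac.new(b"shared-secret", data, hashlib.sha256).hexdigest()
print(digest, tag)
```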
### Encryption key management
-Proper protection and management of encryption keys is essential for data security. [Azure Key Vault](../key-vault/index.yml) is a cloud service for securely storing and managing secrets. Key Vault enables customers to store their encryption keys in hardware security modules (HSMs) that are FIPS 140-2 validated. For more information, see [Data encryption key management with Azure Key Vault](./azure-secure-isolation-guidance.md#azure-key-vault).
+Proper protection and management of encryption keys is essential for data security. [Azure Key Vault](../key-vault/index.yml) is a cloud service for securely storing and managing secrets. Key Vault enables customers to store their encryption keys in hardware security modules (HSMs) that are FIPS 140-2 validated. For more information, see [Data encryption key management](./azure-secure-isolation-guidance.md#data-encryption-key-management).
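As a hedged sketch of the CMK workflow, the Azure SDK for Python (`azure-identity`, `azure-keyvault-keys`) can request an HSM-protected key; the vault URL and key name below are hypothetical placeholders, not values from this article.

```python
# Sketch: create, then revoke, an HSM-backed customer-managed key in
# Azure Key Vault. The vault URL and key name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

credential = DefaultAzureCredential()
client = KeyClient(vault_url="https://contoso-vault.vault.azure.net", credential=credential)

# hardware_protected=True requests an RSA-HSM key that is generated and kept
# inside FIPS 140-2 validated HSMs; no clear-text copy leaves the HSM boundary.
key = client.create_rsa_key("contoso-cmk", size=2048, hardware_protected=True)
print(key.id, key.key_type)  # key_type reports RSA-HSM for HSM-backed keys

# Deleting (and, where soft-delete is enabled, later purging) the key
# revokes access to any data encrypted under it.
client.begin_delete_key("contoso-cmk").wait()
```

The last line illustrates the isolation point made above: because strong ciphers tie data access to key access, destroying a customer-managed key renders the corresponding data inaccessible.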
### Data encryption in transit
Azure SQL Database provides [transparent data encryption](../azure-sql/database/
## Restrictions on insider access
-Insider threat is characterized as potential for providing back-door connections and cloud service provider (CSP) privileged administrator access to customer's systems and data. Microsoft provides strong [customer commitments](https://www.microsoft.com/trust-center/privacy/data-access) regarding who can access customer data and on what terms. Access to customer data by Microsoft operations and support personnel is denied by default. Access to customer data is not needed to operate Azure. Moreover, for most support scenarios involving customer troubleshooting tickets, access to customer data is not needed.
+All Azure and Azure Government employees in the United States are subject to Microsoft background checks. For more information, see [Screening](./documentation-government-plan-security.md#screening).
-No default access rights and Just-in-Time (JIT) access provisions reduce greatly the risks associated with traditional on-premises administrator elevated access rights that typically persist throughout the duration of employment. Microsoft makes it considerably more difficult for malicious insiders to tamper with customer applications and data. The same access control restrictions and processes are imposed on all Microsoft engineers, including both full-time employees and subprocessors/vendors. The following controls are in place to restrict insider access to customer data:
-
-- Internal Microsoft controls that prevent access to production systems unless it is authorized through **Just-in-Time (JIT)** privileged access management system, as described in this section.
-- Enforcement of **Customer Lockbox** that puts customers in charge of approving insider access in support and troubleshooting scenarios, as described in this section. For most support scenarios, access to customer data is not required.
-- **Data encryption** with option for customer-managed encryption keys – encrypted data is accessible only by entities who are in possession of the key, as described in the previous section.
-- **Customer monitoring** of external access to their provisioned Azure resources, which includes security alerts as described in the next section.
-
-Moreover, all Azure and Azure Government employees in the United States are subject to Microsoft background checks. For more information, see [screening](./documentation-government-plan-security.md#screening).
-
-### Access control requirements
-
-Microsoft takes strong measures to protect customer data from inappropriate access or use by unauthorized persons. Microsoft engineers (including full-time employees and subprocessors/vendors) [do not have default access](https://www.microsoft.com/trust-center/privacy/data-access) to customer data in the cloud. Instead, they are granted access, under management oversight, only when necessary. Using the [restricted access workflow](https://www.youtube.com/watch?v=lwjPGtGGe84&feature=youtu.be&t=25m), access to customer data is carefully controlled, logged, and revoked when it is no longer needed. For example, access to customer data may be required to resolve customer-initiated troubleshooting requests. The access control requirements are [established by the following policy](../security/fundamentals/protection-customer-data.md):
-
-- No access to customer data, by default.
-- No user or administrator accounts on customer virtual machines (VMs).
-- Grant the least privilege that is required to complete task, audit, and log access requests.
-
-Microsoft engineers can be granted access to customer data using temporary credentials via **Just-in-Time (JIT)** access. There must be an incident logged in the Azure Incident Management system that describes the reason for access, approval record, what data was accessed, etc. This approach ensures that there is appropriate oversight for all access to customer data and that all JIT actions (consent and access) are logged for audit. Evidence that procedures have been established for granting temporary access for Azure personnel to customer data and applications upon appropriate approval for customer support or incident handling purposes is available from the Azure [SOC 2 Type 2 attestation report](https://aka.ms/azuresoc2auditreport) produced by an independent third-party auditing firm.
-
-JIT access works with multi-factor authentication that requires Microsoft engineers to use a smartcard to confirm their identity. All access to production systems is performed using Secure Admin Workstations (SAWs) that are consistent with published guidance on [securing privileged access](/security/compass/overview). Use of SAWs for access to production systems is required by Microsoft policy and compliance with this policy is closely monitored. These workstations use a fixed image with all software fully managed – only select activities are allowed and users cannot accidentally circumvent the SAW design since they do not have admin privileges on these machines. Access is permitted only with a smartcard and access to each SAW is limited to specific set of users.
-
-### Customer Lockbox for Azure
-
-[Customer Lockbox for Azure](../security/fundamentals/customer-lockbox-overview.md) is a service that provides customers with the capability to control how a Microsoft engineer accesses their data. As part of the support workflow, a Microsoft engineer may require elevated access to customer data. Customer Lockbox puts the customer in charge of that decision by enabling the customer to approve / deny such elevated requests. Customer Lockbox is an extension of the JIT workflow and comes with full audit logging enabled. Customer Lockbox capability is not required for support cases that do not involve access to customer data. For most support scenarios, access to customer data is not needed and the workflow should not require Customer Lockbox. Microsoft engineers rely heavily on logs to maintain Azure services and provide customer support.
-
-Customer Lockbox is automatically available to all customers who have an Azure support plan with a minimum level of Developer. With an eligible support plan, no action is required by a customer to enable Customer Lockbox for [supported services and scenarios in general availability](../security/fundamentals/customer-lockbox-overview.md#supported-services-and-scenarios-in-general-availability). More Azure services are currently in [public preview for Customer Lockbox](../security/fundamentals/customer-lockbox-overview.md#supported-services-and-scenarios-in-preview) and customers can enable Customer Lockbox for preview services by signing up via an online form. A Microsoft engineer will initiate Customer Lockbox request if this action is needed to progress a customer-initiated support ticket. Customer Lockbox is available to customers from all Azure public regions.
-
-### Guest VM memory crash dumps
-
-On each Azure node, there is a Hypervisor that runs directly over the hardware and divides the node into a variable number of Guest virtual machines (VMs), as described in [Compute isolation](../security/fundamentals/isolation-choices.md#compute-isolation). Each node also has one special Root VM, which runs the Host OS.
-
-When a Guest VM (also known as customer VM) crashes, customer data may be contained inside a memory dump file on the Guest VM. **By default, Microsoft engineers do not have access to Guest VMs and cannot review crash dumps on Guest VMs without customer's approval.** The same process involving explicit customer authorization is used to control access to Guest VM crash dumps should the customer request an investigation of their VM crash. As described previously, access is gated by the JIT privileged access management system and Customer Lockbox so that all actions are logged and audited. The primary forcing function for deleting the memory dumps from Guest VMs is the routine process of VM reimaging that typically occurs at least every two months.
-
-### Data deletion, retention, and destruction
-
-Customers are [always in control of their customer data](https://www.microsoft.com/trust-center/privacy/data-management) in Azure. They can access, extract, and delete their customer data stored in Azure at will. When a customer terminates their Azure subscription, Microsoft takes the necessary steps to ensure that the customer continues to own their customer data. A common customer concern upon data deletion or subscription termination is whether another customer or Azure administrator can access their deleted data. For more information on how data deletion, retention, and destruction are implemented in Azure, see our online documentation:
-- [Data deletion](./azure-secure-isolation-guidance.md#data-deletion)
-- [Data retention](./azure-secure-isolation-guidance.md#data-retention)
-- [Data destruction](./azure-secure-isolation-guidance.md#data-destruction)
+Insider threat is characterized as potential for providing back-door connections and cloud service provider (CSP) privileged administrator access to customer's systems and data. For more information on how Microsoft restricts insider access to customer data, see [Restrictions on insider access](./documentation-government-plan-security.md#restrictions-on-insider-access).
## Customer monitoring of Azure resources
-Listed below are essential Azure services that customers can use to gain in-depth insight into their provisioned Azure resources and get alerted about suspicious activity, including outside attacks aimed at their applications and data. For a complete list, see the Azure service directory sections for [Management + Governance](https://azure.microsoft.com/services/#management-tools), [Networking](https://azure.microsoft.com/services/#networking), and [Security](https://azure.microsoft.com/services/#security). Moreover, the [Azure Security Benchmark](../security/benchmarks/index.yml) provides security recommendations and implementation details to help customers improve the security posture with respect to Azure resources.
-
-**[Azure Security Center](../security-center/index.yml)** provides unified security management and advanced threat protection across hybrid cloud workloads. It is an essential service for customers to limit their exposure to threats, protect cloud resources, [respond to incidents](../security-center/security-center-alerts-overview.md), and improve their regulatory compliance posture.
-
-With Azure Security Center, customers can:
-
-- Monitor security across on-premises and cloud workloads.
-- Apply advanced analytics and threat intelligence to detect attacks.
-- Use access and application controls to block malicious activity.
-- Find and fix vulnerabilities before they can be exploited.
-- Simplify investigation when responding to threats.
-- Apply policy to ensure compliance with security standards.
-
-To assist customers with Azure Security Center usage, Microsoft has published extensive [online documentation](../security-center/index.yml) and numerous blog posts covering specific security topics:
-
-- [How Azure Security Center detects a Bitcoin mining attack](https://azure.microsoft.com/blog/how-azure-security-center-detects-a-bitcoin-mining-attack/)
-- [How Azure Security Center detects DDoS attack using cyber threat intelligence](https://azure.microsoft.com/blog/how-azure-security-center-detects-ddos-attack-using-cyber-threat-intelligence/)
-- [How Azure Security Center aids in detecting good applications being used maliciously](https://azure.microsoft.com/blog/how-azure-security-center-aids-in-detecting-good-applications-being-used-maliciously/)
-- [How Azure Security Center unveils suspicious PowerShell attack](https://azure.microsoft.com/blog/how-azure-security-center-unveils-suspicious-powershell-attack/)
-- [How Azure Security Center helps reveal a cyber attack](https://azure.microsoft.com/blog/how-azure-security-center-helps-reveal-a-cyberattack/)
-- [How Azure Security Center helps analyze attacks using Investigation and Log Search](https://azure.microsoft.com/blog/how-azure-security-center-helps-analyze-attacks-using-investigation-and-log-search/)
-- [Azure Security Center adds context alerts to aid threat investigation](https://azure.microsoft.com/blog/azure-security-center-adds-context-alerts-to-aid-threat-investigation/)
-- [How Azure Security Center automates the detection of cyber attack](https://azure.microsoft.com/blog/how-azure-security-center-automates-the-detection-of-cyber-attack/)
-- [Heuristic DNS detections in Azure Security Center](https://azure.microsoft.com/blog/heuristic-dns-detections-in-azure-security-center/)
-- [Detect the latest ransomware threat (Bad Rabbit) with Azure Security Center](https://azure.microsoft.com/blog/detect-the-latest-ransomware-threat-aka-bad-rabbit-with-azure-security-center/)
-- [Petya ransomware prevention & detection in Azure Security Center](https://azure.microsoft.com/blog/petya-ransomware-prevention-detection-in-azure-security-center/)
-- [Detecting in-memory attacks with Sysmon and Azure Security Center](https://azure.microsoft.com/blog/detecting-in-memory-attacks-with-sysmon-and-azure-security-center/)
-- [How Security Center and Log Analytics can be used for threat hunting](https://azure.microsoft.com/blog/ways-to-use-azure-security-center-log-analytics-for-threat-hunting/)
-- [How Azure Security Center helps detect attacks against your Linux machines](https://azure.microsoft.com/blog/how-azure-security-center-helps-detect-attacks-against-your-linux-machines/)
-- [Use Azure Security Center to detect when compromised Linux machines attack](https://azure.microsoft.com/blog/leverage-azure-security-center-to-detect-when-compromised-linux-machines-attack/)
-
-**[Azure Monitor](../azure-monitor/overview.md)** helps customers maximize the availability and performance of applications by delivering a comprehensive solution for collecting, analyzing, and acting on telemetry from both cloud and on-premises environments. It helps customers understand how their applications are performing and proactively identifies issues affecting deployed applications and resources they depend on. Azure Monitor integrates the capabilities of Log Analytics and [Application Insights](../azure-monitor/app/app-insights-overview.md) that were previously branded as standalone services.
-
-Azure Monitor collects data from each of the following tiers:
-
-- **Application monitoring data:** Data about the performance and functionality of the code customers have written, regardless of its platform.
-- **Guest OS monitoring data:** Data about the operating system on which customer application is running. The application could be running in Azure, another cloud, or on-premises.
-- **Azure resource monitoring data:** Data about the operation of an Azure resource.
-- **Azure subscription monitoring data:** Data about the operation and management of an Azure subscription and data about the health and operation of Azure itself.
-- **Azure tenant monitoring data:** Data about the operation of tenant-level Azure services, such as Azure Active Directory.
-
-With Azure Monitor, customers can get a 360-degree view of their applications, infrastructure, and network with advanced analytics, dashboards, and visualization maps. Azure Monitor provides intelligent insights and enables better decisions with AI. Customers can analyze, correlate, and monitor data from various sources using a powerful query language and built-in machine learning constructs. Moreover, Azure Monitor provides out-of-the-box integration with popular DevOps, IT Service Management (ITSM), and Security Information and Event Management (SIEM) tools.
-
-**[Azure Policy](../governance/policy/overview.md)** enables effective governance of Azure resources by creating, assigning, and managing policies. These policies enforce various rules over provisioned Azure resources to keep them compliant with specific customer corporate security and privacy standards. For example, one of the built-in policies for Allowed Locations can be used to restrict available locations for new resources to enforce customer's geo-compliance requirements. Azure Policy provides a comprehensive compliance view of all provisioned resources and enables cloud policy management and security at scale.
-
-**[Azure Firewall](../firewall/overview.md)** provides a managed, cloud-based network security service that protects customer Azure Virtual Network resources. It is a fully stateful firewall as a service with built-in high availability that integrates with Azure Monitor for logging and analytics.
-
-**[Network Watcher](../network-watcher/network-watcher-monitoring-overview.md)** allows customers to monitor, diagnose, and gain insights into their Azure virtual network performance and health. With Network Security Group flow logs, customers can gain deeper understanding of their network traffic patterns and collect data for compliance, auditing, and monitoring of their network security profile. Packet capture allows customers to capture traffic to and from their Virtual Machines to diagnose network anomalies and gather network statistics, including information on network intrusions.
-
-**[Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md)** provides extensive Distributed Denial of Service (DDoS) mitigation capability to help customers protect their Azure resources from attacks. Always-on traffic monitoring provides near real-time detection of a DDoS attack, with automatic mitigation of the attack as soon as it is detected. In combination with Web Application Firewall, DDoS Protection defends against a comprehensive set of network layer attacks, including SQL injection, cross-site scripting attacks, and session hijacks. Azure DDoS Protection is integrated with Azure Monitor for analytics and insight.
-
-**[Azure Sentinel](../sentinel/overview.md)** is a cloud-native SIEM platform that uses built-in AI to help customers quickly analyze large volumes of data across an enterprise. Azure Sentinel aggregates data from various sources, including users, applications, servers, and devices running on-premises or in any cloud, letting customers reason over millions of records in a few seconds. With Azure Sentinel, customers can:
-
-- **Collect** data at cloud scale across all users, devices, applications, and infrastructure, both on-premises and in multiple clouds.
-- **Detect** previously uncovered threats and minimize false positives using analytics and unparalleled threat intelligence from Microsoft.
-- **Investigate** threats with AI and hunt suspicious activities at scale, tapping into decades of cybersecurity work at Microsoft.
-- **Respond** to incidents rapidly with built-in orchestration and automation of common tasks.
-
-**[Azure Advisor](../advisor/advisor-overview.md)** helps customers follow best practices to optimize their Azure deployments. It analyzes resource configurations and usage telemetry and then recommends solutions that can help customers improve the cost effectiveness, performance, high availability, and security of Azure resources.
-
-**[Azure Blueprints](../governance/blueprints/overview.md)** is a service that helps customers deploy and update cloud environments in a repeatable manner using composable artifacts such as Azure Resource Manager templates to provision resources, role-based access controls, and policies that adhere to an organization's standards, patterns, and requirements. Customers can use pre-defined standard blueprints and customize these solutions to meet specific requirements, including data encryption, host and service configuration, network and connectivity configuration, identity, and other security aspects of deployed resources. The overarching goal of Azure Blueprints is to help automate compliance and cybersecurity risk management in cloud environments. For more information on Azure Blueprints, including production-ready blueprint solutions for ISO 27001, NIST SP 800-53, PCI DSS, HITRUST, and other standards, see the [Azure Blueprint samples](../governance/blueprints/samples/index.md).
+Azure provides essential services that customers can use to gain in-depth insight into their provisioned Azure resources and get alerted about suspicious activity, including outside attacks aimed at their applications and data. For more information about these services, see [Customer monitoring of Azure resources](./documentation-government-plan-security.md#customer-monitoring-of-azure-resources).
## Conclusion
Customers should carefully assess how their use of Azure may implicate US export
## Next steps
-To help Azure customers navigate export control rules, Microsoft has published the [Microsoft Azure Export Controls](https://aka.ms/Azure-Export-Paper) whitepaper, which describes U.S. export controls (particularly as they apply to software and technical data), reviews potential sources of export control risks, and offers specific guidance to help customers assess their obligations under these controls.
+To help Azure customers navigate export control rules, Microsoft has published the [Microsoft Azure Export Controls](https://aka.ms/Azure-Export-Paper) whitepaper, which describes US export controls (particularly as they apply to software and technical data), reviews potential sources of export control risks, and offers specific guidance to help customers assess their obligations under these controls.
Learn more about:
azure-government Documentation Government Overview Wwps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-overview-wwps.md
+
+ Title: Azure for secure worldwide public sector cloud adoption
+description: Customer guidance for Azure public sector cloud adoption
+Last updated: 03/02/2021
+# Azure for secure worldwide public sector cloud adoption
+
+Microsoft Azure is a multi-tenant cloud services platform that government agencies can use to deploy various solutions. A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent customers from accessing one another's data or applications. Azure is available globally in more than 60 regions and can be used by government entities worldwide to meet rigorous data protection requirements across a broad spectrum of data classifications, including unclassified and classified data.
+
+This article addresses common data residency, security, and isolation concerns pertinent to worldwide public sector customers. It also explores technologies available in Azure to safeguard both unclassified and classified workloads in the public multi-tenant cloud in combination with Azure Stack Hub and Azure Stack Edge deployed on-premises and at the edge.
+
+## Executive summary
+
+Microsoft Azure provides strong customer commitments regarding data residency and transfer policies. Most Azure services enable the customer to specify the deployment region. For those services, Microsoft will not store customer data outside the customer-specified geography. Customers can use extensive and robust data encryption options to help safeguard their data in Azure and control who can access it.
+
+Listed below are some of the options available to customers to safeguard their data in Azure:
+
+- Customers can choose to store their most sensitive customer content in services that store customer data at rest in Geo.
+- Customers can obtain further protection by encrypting data with their own key using Azure Key Vault.
+- While customers cannot control the precise network path for data in transit, data encryption in transit helps protect data from interception.
+- Azure is a 24x7 globally operated service; however, support and troubleshooting rarely require access to customer data.
+- Customers who want added control for support and troubleshooting can use Customer Lockbox for Azure to approve or deny access to their data.
+- Microsoft will notify customers of any breach of customer or personal data within 72 hours of incident declaration.
+- Customers can monitor potential threats and respond to incidents on their own using Azure Security Center.
+
+Using Azure data protection technologies and intelligent edge capabilities from the Azure Stack portfolio of products, customers can process confidential and secret data in secure isolated infrastructure within the public multi-tenant cloud or top secret data on premises and at the edge under the customer's full operational control.
+
+## Introduction
+
+Governments around the world are in the process of a digital transformation, actively investigating solutions and selecting architectures that will help them transition many of their workloads to the cloud. There are many drivers behind the digital transformation, including the need to engage citizens, empower employees, transform government services, and optimize government operations. Governments across the world are also looking to improve their cybersecurity posture to secure their assets and counter the evolving threat landscape.
+
+For governments and the public sector industry worldwide, Microsoft provides [Azure](https://azure.microsoft.com/) – a public multi-tenant cloud services platform – that government agencies can use to deploy various solutions. A multi-tenant cloud services platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses [logical isolation](./azure-secure-isolation-guidance.md) to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent customers from accessing one another's data or applications.
+
+A hyperscale public cloud provides resiliency in times of natural disaster and warfare. The cloud provides capacity for failover redundancy and empowers sovereign nations with flexibility regarding global resiliency planning. The hyperscale public cloud also offers a feature-rich environment incorporating the latest cloud innovations such as artificial intelligence, machine learning, IoT services, intelligent edge, and many more to help government customers increase efficiency and unlock insights into their operations and performance.
+
+Using Azure's public cloud capabilities, customers benefit from rapid feature growth, resiliency, and the cost-effective operation of the hyperscale cloud while still obtaining the levels of isolation, security, and confidence required to handle workloads across a broad spectrum of data classifications, including unclassified and classified data. Using Azure data protection technologies and intelligent edge capabilities from the [Azure Stack](https://azure.microsoft.com/overview/azure-stack/) portfolio of products, customers can process confidential and secret data in secure isolated infrastructure within the public multi-tenant cloud or top secret data on-premises and at the edge under the customer's full operational control.
+
+This article addresses common data residency, security, and isolation concerns pertinent to worldwide public sector customers. It also explores technologies available in Azure to help safeguard unclassified, confidential, and secret workloads in the public multi-tenant cloud in combination with Azure Stack products deployed on-premises and at the edge for fully disconnected scenarios involving top secret data. Given that unclassified workloads comprise most scenarios involved in worldwide public sector digital transformation, Microsoft recommends that customers start their cloud journey with unclassified workloads and then progress to classified workloads of increasing data sensitivity.
+
+## Data residency
+
+Established privacy regulations are silent on **data residency and data location**, and permit data transfers in accordance with approved mechanisms such as the EU Standard Contractual Clauses (also known as EU Model Clauses). Microsoft commits contractually in the Online Services Terms [Data Protection Addendum](https://aka.ms/DPA) (DPA) that all potential transfers of customer data out of the EU, European Economic Area (EEA), and Switzerland shall be governed by the EU Model Clauses. Microsoft will abide by the requirements of the EEA and Swiss data protection laws regarding the collection, use, transfer, retention, and other processing of personal data from the EEA and Switzerland. All transfers of personal data are subject to appropriate safeguards and documentation requirements. However, many customers considering cloud adoption are seeking assurances about customer and personal data being kept within the geographic boundaries corresponding to customer operations or location of customer's end users.
+
+**Data sovereignty** implies data residency; however, it also introduces rules and requirements that define who has control over and access to customer data stored in the cloud. In many cases, data sovereignty mandates that customer data be subject to the laws and legal jurisdiction of the country in which data resides. These laws can have direct implications on data access even for platform maintenance or customer-initiated support requests. Customers can use Azure public multi-tenant cloud in combination with Azure Stack products for on-premises and edge solutions to meet their data sovereignty requirements, as described later in this article. These products can be deployed to put customers solely in control of their data, including storage, processing, transmission, and remote access.
+
+Among several [data categories and definitions](https://www.microsoft.com/trust-center/privacy/customer-data-definitions) that Microsoft established for cloud services, the following four categories are discussed in this article:
+
+- **Customer data** is all data that customers provide to Microsoft to manage on customer's behalf through customer's use of Microsoft online services.
+- **Customer content** is a subset of customer data and includes, for example, the content stored in a customer's Azure Storage account.
+- **Personal data** means any information associated with a specific natural person, for example, names and contact information of customer's end users. However, personal data could also include data that is not customer data, such as user ID that Azure can generate and assign to each customer administrator – such personal data is considered pseudonymous because it cannot identify an individual on its own.
+- **Support and consulting data** mean all data provided by customer to Microsoft to obtain Support or Professional Services.
+
+The following sections address key cloud implications for data residency and the fundamental principles guiding Microsoft's safeguarding of customer data at rest, in transit, and as part of customer-initiated support requests.
+
+### Data at rest
+
+Microsoft provides transparent insight into data location for all online services available to customers from the "[Where your data is located](https://www.microsoft.com/trust-center/privacy/data-location)" page – expand the *Cloud service data residency and transfer policies* section to reveal links for individual online services. **Customers who want to ensure their customer data is stored only in Geo should select from the many regional services that make this commitment.**
+
+#### *Regional vs. non-regional services*
+
+Microsoft Azure provides [strong customer commitments](https://azure.microsoft.com/global-infrastructure/data-residency/) regarding data residency and transfer policies:
+
+- **Data storage for regional
+- **Data storage for non-regional
+
+Customer data in an Azure Storage account is [always replicated](../storage/common/storage-redundancy.md) to help ensure durability and high availability. Azure Storage copies customer data to protect it from transient hardware failures, network or power outages, and even massive natural disasters. Customers can typically choose to replicate their data within the same data center, across availability zones within the same region, or across geographically separated regions. Specifically, when creating a storage account, customers can select one of the following redundancy options:
+
+- [Locally redundant storage (LRS)](../storage/common/storage-redundancy.md#locally-redundant-storage)
+- [Zone-redundant storage (ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage)
+- [Geo-redundant storage (GRS)](../storage/common/storage-redundancy.md#geo-redundant-storage)
+- [Geo-zone-redundant storage (GZRS)](../storage/common/storage-redundancy.md#geo-zone-redundant-storage)
+
+Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage provides LRS and ZRS redundancy options for replicating data in the primary region. For applications requiring high availability, customers can choose geo-replication to a secondary region that is hundreds of kilometers away from the primary region. Azure Storage offers GRS and GZRS options for copying data to a secondary region. More options are available to customers for configuring read access (RA) to the secondary region (RA-GRS and RA-GZRS), as explained in [Read access to data in the secondary region](../storage/common/storage-redundancy.md#read-access-to-data-in-the-secondary-region).
+
+Azure Storage redundancy options can have implications on data residency as Azure relies on [paired regions](../best-practices-availability-paired-regions.md) to deliver [geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage) (GRS). For example, customers concerned about geo-replication across regions that span country boundaries, may want to choose LRS or ZRS to keep Azure Storage data at rest within the geographic boundaries of the country in which the primary region is located. Similarly, [geo replication for Azure SQL Database](../azure-sql/database/active-geo-replication-overview.md) can be obtained by configuring asynchronous replication of transactions to any region in the world, although it is recommended that paired regions be used for this purpose as well. If customers need to keep relational data inside the geographic boundaries of their country, they should not configure Azure SQL Database asynchronous replication to a region outside that country.
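As a hedged illustration of these choices, the Azure management SDK for Python (`azure-mgmt-storage`) can create an account pinned to a single in-country region with ZRS, so that data at rest never leaves that region's geography; the subscription ID, resource group, account name, and region below are placeholders.

```python
# Sketch: create a storage account with zone-redundant storage (ZRS) in one
# region so data at rest stays within that region's geography.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

poller = client.storage_accounts.begin_create(
    resource_group_name="contoso-rg",
    account_name="contosodata",
    parameters={
        "location": "switzerlandnorth",   # single in-country region
        "kind": "StorageV2",
        "sku": {"name": "Standard_ZRS"},  # no cross-region (GRS/GZRS) copy
    },
)
account = poller.result()
print(account.name, account.sku.name, account.location)
```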
+
+As described on the [data location page](https://azure.microsoft.com/global-infrastructure/data-residency/), most Azure **regional** services honor the data at rest commitment to ensure that customer data remains within the geographic boundary where the corresponding service is deployed. A handful of exceptions to this rule are noted on the data location page. Customers should review these exceptions to determine if the type of data stored outside their chosen deployment Geo meets their needs.
+
+**Non-regional** Azure services do not enable customers to specify the region where the services will be deployed. Some non-regional services do not store customer data at all but merely provide global routing functions such as Azure Traffic Manager or Azure DNS. Other non-regional services are intended for data caching at edge locations around the globe, such as the Content Delivery Network – such services are optional, and customers should not use them for sensitive customer content they wish to keep in Geo. One non-regional service that warrants extra discussion is **Azure Active Directory**, which is discussed in the next section.
+
+#### *Customer data in Azure Active Directory*
+
+Azure Active Directory (Azure AD) is a non-regional service that may store identity data globally, except for Azure AD deployments in:
+
+- The United States, where identity data is stored solely in the United States.
+- Europe, where Azure AD keeps most of the identity data within European datacenters except as noted in [Identity data storage for European customers in Azure Active Directory](../active-directory/fundamentals/active-directory-data-storage-eu.md).
+- Australia and New Zealand, where identity data is stored in Australia except as noted in [Customer data storage for Australian and New Zealand customers in Azure Active Directory](../active-directory/fundamentals/active-directory-data-storage-australia-newzealand.md).
+
+Azure AD provides a [dashboard](https://go.microsoft.com/fwlink/?linkid=2092972) with transparent insight into data location for every Azure AD component service. Among other features, Azure AD is an identity management service that stores directory data for a customer's Azure administrators, including user **personal data** categorized as **End User Identifiable Information (EUII)**, for example, names, email addresses, and so on. In Azure AD, customers can create User, Group, Device, Application, and other entities using various attribute types such as Integer, DateTime, Binary, String (limited to 256 characters), and so on. Azure AD is not intended to store customer content: it is not possible to store blobs, files, database records, and similar structures in Azure AD. Moreover, Azure AD is not intended to be an identity management service for a customer's external end users – [Azure AD B2C](../active-directory-b2c/overview.md) should be used for that purpose.
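+
+To illustrate the kind of directory data involved, the following hedged sketch (hypothetical names and tenant domain, calling the Microsoft Graph REST API from Python) creates a User object whose display name and user principal name are examples of EUII:
+
+```python
+import requests
+from azure.identity import DefaultAzureCredential
+
+# Acquire a token for Microsoft Graph (placeholder tenant/app configuration).
+token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default")
+
+user = {
+    "accountEnabled": True,
+    "displayName": "Adele Vance",                           # EUII: name
+    "mailNickname": "AdeleV",
+    "userPrincipalName": "AdeleV@contoso.onmicrosoft.com",  # EUII: sign-in identifier
+    "passwordProfile": {
+        "forceChangePasswordNextSignIn": True,
+        "password": "<initial-password>",  # placeholder
+    },
+}
+
+resp = requests.post(
+    "https://graph.microsoft.com/v1.0/users",
+    headers={"Authorization": f"Bearer {token.token}"},
+    json=user,
+    timeout=30,
+)
+resp.raise_for_status()
+print(resp.json()["id"])  # the new directory object's ID (a GUID)
+```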
+
+Azure AD implements extensive **data protection features**, including tenant isolation and access control, data encryption in transit, secrets encryption and management, disk level encryption, advanced cryptographic algorithms used by various Azure AD components, data operational considerations for insider access, and more. Detailed information is available in the whitepaper [Active Directory Data Security Considerations](https://aka.ms/AADDataWhitePaper).
+
+#### *Generating pseudonymous data for internal systems*
+
+Personal data is defined broadly. It includes not just customer data but also unique personal identifiers such as the Probably Unique Identifier (PUID) and the Globally Unique Identifier (GUID), the latter often labeled a Universally Unique Identifier (UUID). These unique personal identifiers are *pseudonymous identifiers*. This type of information is generated automatically to track users who interact directly with Azure services, such as a customer's administrators. For example, a PUID is a random string generated programmatically via a combination of characters and digits to provide a high probability of uniqueness. Pseudonymous identifiers are stored in centralized internal Azure systems.
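+
+PUID generation is internal to Microsoft, but the general idea can be illustrated with a standard version 4 UUID, which is random and carries no customer content:
+
+```python
+import uuid
+
+# A version 4 UUID: 122 random bits rendered as a 36-character string.
+# On its own it cannot identify an individual and contains no customer content.
+pseudonymous_id = str(uuid.uuid4())
+print(pseudonymous_id)
+```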
+
+Whereas EUII represents data that could be used on its own to identify a user (for example, user name, display name, user principal name, or even a user-specific IP address), pseudonymous identifiers are considered pseudonymous because they cannot identify an individual on their own. Pseudonymous identifiers do not contain any information uploaded or created by the customer.
+
+### Data in transit
+
+**While customers cannot control the precise network path for data in transit, data encryption in transit helps protect data from interception.**
+
+Data in transit applies to the following scenarios involving data traveling between:
+
+- A customer's end users and an Azure service
+- A customer's on-premises datacenter and an Azure region
+- Microsoft datacenters as part of expected Azure service operation
+
+While data in transit between two points within the Geo will typically remain in Geo, it is not possible to guarantee this 100% of the time because networks automatically reroute traffic to avoid congestion or bypass other interruptions. That said, data in transit can be protected through encryption, as detailed below and in the *[Data encryption in transit](#data-encryption-in-transit)* section.
+
+#### *Customer's end-user connection to Azure services*
+
+Most customers will connect to Azure over the Internet, and the precise routing of network traffic will depend on the many network providers that contribute to Internet infrastructure. As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/DPA) (DPA), Microsoft does not control or limit the regions from which customers or their end users may access or move customer data. Customers can increase security by enabling encryption in transit. For example, customers can use [Azure Application Gateway](../application-gateway/application-gateway-end-to-end-ssl-powershell.md) to configure end-to-end encryption of traffic. As described in the *[Data encryption in transit](#data-encryption-in-transit)* section, Azure uses the Transport Layer Security (TLS) protocol to help protect data when it is traveling between customers and Azure services. However, Microsoft cannot control network traffic paths corresponding to end-user interaction with Azure.
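+
+As one hedged example of enforcing encryption in transit on the service side (assuming hypothetical resource names and the `azure-mgmt-storage` Python SDK), a customer might require HTTPS and a minimum TLS version on a storage account:
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.storage import StorageManagementClient
+
+client = StorageManagementClient(
+    DefaultAzureCredential(),
+    "00000000-0000-0000-0000-000000000000",  # placeholder subscription ID
+)
+
+# Require HTTPS and reject TLS 1.0/1.1 handshakes for this account.
+client.storage_accounts.update(
+    "my-resource-group",   # placeholder
+    "mystorageaccount",    # placeholder
+    {
+        "enable_https_traffic_only": True,
+        "minimum_tls_version": "TLS1_2",
+    },
+)
+```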
+
+#### *Customer's datacenter connection to an Azure region*
+
+[Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet) provides a means for Azure virtual machines (VMs) to act as part of a customer's internal (on-premises) network. Customers have options to connect securely to a VNet from their on-premises infrastructure: they can choose an [IPsec-protected VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) (for example, point-to-site VPN or site-to-site VPN) or a private connection by using Azure [ExpressRoute](../expressroute/expressroute-introduction.md) with several [data encryption options](../expressroute/expressroute-about-encryption.md).
+
+- **IPsec-protected VPN** uses an encrypted tunnel established across the public Internet, which means that customers need to rely on their local Internet service providers for any network-related assurances.
+- **ExpressRoute** allows customers to create private connections between Microsoft datacenters and their on-premises infrastructure or colocation facility. ExpressRoute connections do not go over the public Internet and offer lower latency and higher reliability than IPsec-protected VPN connections. [ExpressRoute locations](../expressroute/expressroute-locations-providers.md) are the entry points to Microsoft's global network backbone and they may or may not match the location of Azure regions. For example, customers can connect to Microsoft in Amsterdam through ExpressRoute and have access to all Azure cloud services hosted in Northern and Western Europe. However, it's also possible to have access to the same Azure regions from ExpressRoute connections located elsewhere in the world. Once the network traffic enters the Microsoft backbone, it is guaranteed to traverse that private networking infrastructure instead of the public Internet.
+
+#### *Traffic across Microsoft global network backbone*
+
+As described in the *[Data at rest](#data-at-rest)* section, Azure services such as Storage and SQL Database can be configured for geo-replication to help ensure durability and high availability, especially for disaster recovery scenarios. Azure relies on [paired regions](../best-practices-availability-paired-regions.md) to deliver [geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage) (GRS), and paired regions are also recommended when configuring active [geo-replication](../azure-sql/database/active-geo-replication-overview.md) for Azure SQL Database. Paired regions are located within the same Geo.
+
+Inter-region traffic is encrypted using [Media Access Control Security](https://1.ieee802.org/security/802-1ae/) (MACsec), which protects network traffic at the data link layer (OSI Layer 2) and relies on the AES-128 block cipher for encryption. This traffic stays entirely within the Microsoft [global network backbone](../networking/microsoft-global-network.md) and never enters the public Internet. The backbone is one of the largest in the world with more than 160,000 km of lit fiber optic and undersea cable systems. However, network traffic is not guaranteed to always follow the same path from one Azure region to another. To provide the reliability needed for the Azure cloud, Microsoft has many physical networking paths with automatic routing around failures for optimal reliability. Therefore, Microsoft cannot guarantee that network traffic traversing between Azure regions will always be confined to the corresponding Geo. During networking infrastructure disruptions, Microsoft can reroute the encrypted network traffic across its private backbone to ensure service availability and best possible performance.
+
+### Data for customer support and troubleshooting
+
+**Azure is a 24x7 globally operated service; however, support and troubleshooting rarely require access to customer data. Customers who want added control for support and troubleshooting can use Customer Lockbox for Azure to approve or deny access to their data.**
+
+Microsoft [Azure support](https://azure.microsoft.com/support/options/) is available in markets where Azure is offered. It is staffed globally to accommodate 24x7 access to support engineers via email and phone for technical support. Customers can [create and manage support requests](../azure-portal/supportability/how-to-create-azure-support-request.md) in the Azure portal. As needed, frontline support engineers can escalate customer requests to Azure DevOps personnel responsible for Azure service development and operations. These Azure DevOps engineers are also staffed globally. The same production access controls and processes are imposed on all Microsoft engineers, including support staff composed of both Microsoft full-time employees and subprocessors/vendors.
+
+As explained in the *[Data encryption at rest](#data-encryption-at-rest)* section, **customer data is encrypted at rest** by default when stored in Azure, and customers can control their own encryption keys in Azure Key Vault. Moreover, access to customer data is not needed to resolve most customer support requests. Microsoft engineers rely heavily on logs to provide customer support. As described in the *[Insider data access](#insider-data-access)* section, Azure has controls in place to restrict access to customer data for support and troubleshooting scenarios should that access be necessary. For example, **Just-in-Time (JIT)** access provisions restrict access to production systems to Microsoft engineers who are authorized to be in that role and were granted temporary access credentials. As part of the support workflow, **Customer Lockbox** puts customers in charge of approving or denying access to customer data by Microsoft engineers. When combined, these Azure technologies and processes (data encryption, JIT, and Customer Lockbox) provide appropriate risk mitigation to safeguard the confidentiality and integrity of customer data.
+
+Government customers worldwide expect to be fully in control of protecting their data in the cloud. As described in the next section, Azure provides extensive options for data encryption through its entire lifecycle (at rest, in transit, and in use), including customer control of encryption keys.
+
+## Data encryption
+
+Azure has extensive support to safeguard customer data using [data encryption](../security/fundamentals/encryption-overview.md). **Customers who require extra security for their most sensitive customer data stored in Azure services can encrypt it using their own encryption keys they control in Azure Key Vault. While customers cannot control the precise network path for data in transit, data encryption in transit helps protect data from interception.** Azure supports the following data encryption models:
+
+- Server-side encryption that uses service-managed keys, customer-managed keys (CMK) in Azure, or CMK in customer-controlled hardware.
+- Client-side encryption that enables customers to manage and store keys on-premises or in another secure location.
+
+Data encryption provides isolation assurances that are tied directly to encryption key access. Because Azure uses strong ciphers for data encryption, only entities with access to the encryption keys can access the data. Deleting or revoking encryption keys renders the corresponding data inaccessible.
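+
+The client-side model can be illustrated with a minimal sketch (using the community `cryptography` package; the Azure SDKs also offer built-in client-side encryption features): data is encrypted locally with a key the customer holds, so the cloud service only ever stores ciphertext.
+
+```python
+from cryptography.fernet import Fernet
+
+key = Fernet.generate_key()  # generated and kept on-premises, under customer control
+cipher = Fernet(key)
+
+ciphertext = cipher.encrypt(b"sensitive customer content")
+# ...upload 'ciphertext' to the Azure service of choice; the service never sees the key...
+
+assert cipher.decrypt(ciphertext) == b"sensitive customer content"
+```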
+
+### Encryption key management
+
+Proper protection and management of encryption keys is essential for data security. **[Azure Key Vault](../key-vault/index.yml) is a cloud service for securely storing and managing secrets.** The Key Vault service supports two resource types:
+
+- **Vault** supports software-protected and hardware security module (HSM)-protected secrets, keys, and certificates.
+- **Managed HSM** supports only HSM-protected cryptographic keys.
+
+Key Vault enables customers to store their encryption keys in hardware security modules (HSMs) that are FIPS 140-2 validated. With Azure Key Vault, customers can import or generate encryption keys in HSMs, ensuring that keys never leave the HSM protection boundary to support *bring your own key* (BYOK) scenarios. **Keys generated inside the Azure Key Vault HSMs are not exportable – there can be no clear-text version of the key outside the HSMs.** This binding is enforced by the underlying HSM.
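+
+As a brief sketch (assuming a hypothetical vault URL and the `azure-keyvault-keys` Python SDK), a customer might request an HSM-protected key as follows; the private key material is generated inside the HSM and cannot be exported in clear text:
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.keyvault.keys import KeyClient
+
+client = KeyClient(
+    vault_url="https://<your-vault>.vault.azure.net",  # placeholder vault URL
+    credential=DefaultAzureCredential(),
+)
+
+# hardware_protected=True requests an RSA-HSM key: the private key material
+# is generated inside the HSM and cannot leave the HSM protection boundary.
+key = client.create_rsa_key("example-hsm-key", size=2048, hardware_protected=True)
+print(key.key_type)  # RSA-HSM
+```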
+
+**Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract customer cryptographic keys.**
+
+For more information, see [Azure Key Vault](./azure-secure-isolation-guidance.md#azure-key-vault).
+
+### Data encryption in transit
+
+Azure provides many options for [encrypting data in transit](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit). Data encryption in transit isolates customer network traffic from other traffic and helps protect data from interception. For more information, see [Data encryption in transit](./azure-secure-isolation-guidance.md#data-encryption-in-transit).
+
+### Data encryption at rest
+
+Azure provides extensive options for [encrypting data at rest](../security/fundamentals/encryption-atrest.md) to help customers safeguard their data and meet their compliance needs using both Microsoft-managed encryption keys and customer-managed encryption keys. This process relies on multiple encryption keys and services such as Azure Key Vault and Azure Active Directory to ensure secure key access and centralized key management. For more information about Azure Storage encryption and Azure Disk encryption, see [Data encryption at rest](./azure-secure-isolation-guidance.md#data-encryption-at-rest).
+
+Azure SQL Database provides [transparent data encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) at rest by [default](https://azure.microsoft.com/updates/newly-created-azure-sql-databases-encrypted-by-default/). TDE performs real-time encryption and decryption operations on the data and log files. The Database Encryption Key (DEK) is a symmetric key stored in the database boot record for availability during recovery. It is secured via a certificate stored in the master database of the server or via an asymmetric key called the TDE Protector stored under customer control in [Azure Key Vault](../key-vault/general/secure-your-key-vault.md), which is Azure's cloud-based external key management system. Key Vault supports [bring your own key](../azure-sql/database/transparent-data-encryption-byok-overview.md) (BYOK), which enables customers to store the TDE Protector in Key Vault and control key management tasks including key permissions, rotation, deletion, enabling auditing/reporting on all TDE Protectors, and so on. The key can be generated by Key Vault, imported, or [transferred to Key Vault from an on-premises HSM device](../key-vault/keys/hsm-protected-keys.md). Customers can also use the [Always Encrypted](../azure-sql/database/always-encrypted-azure-key-vault-configure.md) feature of Azure SQL Database, which is designed specifically to help protect sensitive data by allowing clients to encrypt data inside client applications and [never reveal the encryption keys to the database engine](/sql/relational-databases/security/encryption/always-encrypted-database-engine). In this manner, Always Encrypted provides separation between users who own the data (and can view it) and users who manage the data (but should have no access).
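+
+A hedged sketch of the Always Encrypted client experience (placeholder connection values, hypothetical table and column; requires ODBC Driver 17 or later for SQL Server): with `ColumnEncryption=Enabled`, the client driver encrypts and decrypts protected column values, so plaintext never reaches the database engine.
+
+```python
+import pyodbc
+
+# Placeholder server/database; 'ColumnEncryption=Enabled' turns on transparent
+# Always Encrypted support in the ODBC driver.
+conn = pyodbc.connect(
+    "Driver={ODBC Driver 17 for SQL Server};"
+    "Server=tcp:<your-server>.database.windows.net,1433;"
+    "Database=<your-database>;"
+    "Authentication=ActiveDirectoryInteractive;"
+    "ColumnEncryption=Enabled;"
+)
+
+cursor = conn.cursor()
+# 'dbo.Patients' and its encrypted 'SSN' column are hypothetical; the driver
+# decrypts protected values client-side, so the engine only ever sees ciphertext.
+cursor.execute("SELECT TOP 1 SSN FROM dbo.Patients")
+print(cursor.fetchone())
+```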
+
+### Data encryption in use
+
+Microsoft enables customers to protect their data throughout its entire lifecycle: at rest, in transit, and in use. Azure confidential computing and homomorphic encryption are two techniques that safeguard customer data while it is processed in the cloud.
+
+#### *Azure confidential computing*
+
+[Azure confidential computing](https://azure.microsoft.com/solutions/confidential-compute/) is a set of data security capabilities that offers encryption of data while in use. This approach means that data can be processed in the cloud with the assurance that it is always under customer control. Confidential computing ensures that when data is in the clear, which is needed for efficient data processing in memory, the data is protected inside a hardware-based [trusted execution environment](../confidential-computing/overview.md) (TEE, also known as an enclave), as depicted in Figure 1. A TEE helps ensure that there is no way to view data or operations from outside the enclave and that only the application designer has access to TEE data; access is denied to everyone else, including Azure administrators. Moreover, a TEE helps ensure that only authorized code is permitted to access data. If the code is altered or tampered with, the operations are denied, and the environment is disabled.
+
+**Figure 1.** Trusted execution environment protection
+
+Azure [DCsv2-series virtual machines](../virtual-machines/dcv2-series.md) have the latest generation of Intel Xeon processors with [Intel Software Guard Extensions](https://software.intel.com/sgx) (SGX) technology, which provides a hardware-based TEE. Intel SGX isolates a portion of physical memory to create an enclave where select code and data are protected from viewing or modification. The protection offered by Intel SGX, when used appropriately by application developers, can prevent compromise due to attacks from privileged software and many hardware-based attacks. An application using Intel SGX needs to be [refactored into trusted and untrusted components](https://software.intel.com/sites/default/files/managed/c3/8b/intel-sgx-product-brief-2019.pdf). The untrusted part of the application sets up the enclave, which then allows the trusted part to run inside the enclave. No other code, irrespective of the privilege level, has access to the code executing within the enclave or the data associated with enclave code. Design best practices call for the trusted partition to contain just the minimum amount of content required to protect a customer's secrets. For more information, see [Application development on Intel SGX](../confidential-computing/application-development.md).
+
+Based on customer feedback, Microsoft has started to invest in higher-level [scenarios for Azure confidential computing](../confidential-computing/use-cases-scenarios.md). Customers can review the scenario recommendations as a starting point for developing their own applications using confidential computing services and frameworks.
+
+#### *Homomorphic encryption*
+
+[Homomorphic encryption](https://www.microsoft.com/research/project/homomorphic-encryption/) refers to a special type of encryption technology that allows computations to be performed on encrypted data, without requiring access to the key needed to decrypt the data. The results of the computation are encrypted and can be revealed only by the owner of the encryption key. In this manner, only encrypted data is processed in the cloud and only the customer can reveal the results of the computation.
+
+To help customers adopt homomorphic encryption, [Microsoft SEAL](https://www.microsoft.com/research/project/microsoft-seal/) provides a set of encryption libraries that allow computations to be performed directly on encrypted data. This approach enables customers to build end-to-end encrypted data storage and compute services where the customer never needs to share their encryption keys with the cloud service. Microsoft SEAL aims to make homomorphic encryption easy to use and available to everyone. It provides a simple and convenient API and comes with several detailed examples demonstrating how the library can be used correctly and securely.
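+
+The homomorphic property itself can be illustrated with a minimal sketch using the community `phe` (Paillier) package; Microsoft SEAL is a C++ library with its own APIs, but the principle is the same: computation happens on ciphertexts, and only the private key holder can reveal the result.
+
+```python
+from phe import paillier  # community implementation of the Paillier cryptosystem
+
+public_key, private_key = paillier.generate_paillier_keypair()
+
+enc_a = public_key.encrypt(17)
+enc_b = public_key.encrypt(25)
+
+# Addition happens directly on the ciphertexts; nothing is decrypted here.
+enc_sum = enc_a + enc_b
+
+print(private_key.decrypt(enc_sum))  # 42
+```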
+
+Data encryption in the cloud is an important risk mitigation requirement expected by government customers worldwide. As described in this section, Azure helps customers protect their data through its entire lifecycle whether at rest, in transit, or even in use. Moreover, Azure offers comprehensive encryption key management to help customers control their keys in the cloud, including key permissions, rotation, deletion, and so on. End-to-end data encryption using advanced ciphers is fundamental to ensuring confidentiality and integrity of customer data in the cloud. However, customers also expect assurances regarding any potential customer data access by Microsoft engineers for service maintenance, customer support, or other scenarios. These controls are described in the next section.
+
+## Insider data access
+
+An insider threat is characterized as the potential for back-door connections and cloud service provider (CSP) privileged administrator access to a customer's systems and data. Microsoft provides strong [customer commitments](https://www.microsoft.com/trust-center/privacy/data-access) regarding who can access customer data and on what terms. Access to customer data by Microsoft operations and support personnel is **denied by default**. Access to customer data is not needed to operate Azure. Moreover, for most support scenarios involving customer troubleshooting tickets, access to customer data is not needed.
+
+The combination of no default access rights and Just-in-Time (JIT) access provisions greatly reduces the risks associated with traditional on-premises administrator elevated access rights that typically persist throughout the duration of employment. Microsoft makes it considerably more difficult for malicious insiders to tamper with customer applications and data. The same access control restrictions and processes are imposed on all Microsoft engineers, including both full-time employees and subprocessors/vendors.
+
+For more information on how Microsoft restricts insider access to customer data, see [Restrictions on insider access](./documentation-government-plan-security.md#restrictions-on-insider-access).
+
+## Government requests for customer data
+
+Government requests for customer data follow a strict procedure according to [Microsoft practices for responding to government requests](https://blogs.microsoft.com/datalaw/our-practices/). Microsoft takes strong measures to help protect customer data from inappropriate access or use by unauthorized persons. These measures include restricting access by Microsoft personnel and subcontractors and carefully defining requirements for responding to government requests for customer data. Microsoft ensures that there are no back-door channels and no direct or unfettered government access to customer data. Microsoft imposes special requirements for government and law enforcement requests for customer data.
+
+As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/DPA) (DPA), Microsoft will not disclose customer data to law enforcement unless required by law. If law enforcement contacts Microsoft with a demand for customer data, Microsoft will attempt to redirect the law enforcement agency to request that data directly from the customer. If compelled to disclose customer data to law enforcement, Microsoft will promptly notify the customer and provide a copy of the demand unless legally prohibited from doing so.
+
+Government requests for customer data must comply with applicable laws.
+
+- A subpoena or its local equivalent is required to request non-content data.
+- A warrant, court order, or its local equivalent is required for content data.
+
+Every year, Microsoft rejects many law enforcement requests for customer data. Challenges to government requests can take many forms. In many of these cases, Microsoft simply informs the requesting government that it is unable to disclose the requested information and explains the reason for rejecting the request. Where appropriate, Microsoft challenges requests in court.
+
+Our [Law Enforcement Request Report](https://www.microsoft.com/about/corporate-responsibility/lerr) and [US National Security Order Report](https://www.microsoft.com/corporate-responsibility/us-national-security-orders-report) are updated every six months and show that most of our customers are never impacted by government requests for data.
+
+### CLOUD Act provisions
+
+The [CLOUD Act](https://www.congress.gov/bill/115th-congress/house-bill/4943) is a United States law that was enacted in March 2018. For more information, see Microsoft's [blog post](https://blogs.microsoft.com/on-the-issues/2018/04/03/the-cloud-act-is-an-important-step-forward-but-now-more-steps-need-to-follow/) and the [follow-up blog post](https://blogs.microsoft.com/on-the-issues/2018/09/11/a-call-for-principle-based-international-agreements-to-govern-law-enforcement-access-to-data/) that describes Microsoft's call for principle-based international agreements governing law enforcement access to data. Key points of interest to government customers procuring Azure services are captured below.
+
+- The CLOUD Act enables governments to negotiate new government-to-government agreements that will result in greater transparency and certainty for how information is disclosed to law enforcement agencies across international borders.
+- The CLOUD Act is not a mechanism for greater government surveillance; it is a mechanism toward ensuring that customer data is ultimately protected by the laws of each customer's home country while continuing to facilitate lawful access to evidence for legitimate criminal investigations. Law enforcement in the US still needs to obtain a warrant demonstrating probable cause of a crime from an independent court before seeking the contents of communications. The CLOUD Act requires similar protections for other countries seeking bilateral agreements.
+- While the CLOUD Act creates new rights under new international agreements, it also preserves the common law right of cloud service providers to go to court to challenge search warrants when there is a conflict of laws – even without these new treaties in place.
+- Microsoft retains the legal right to object to a law enforcement order in the United States where the order clearly conflicts with the laws of the country where customer data is hosted. Microsoft will continue to carefully evaluate every law enforcement request and exercise its rights to protect customers where appropriate.
+- For legitimate enterprise customers, US law enforcement will, in most instances, now go directly to the customer rather than Microsoft for information requests.
+
+**Microsoft does not disclose extra data as a result of the CLOUD Act**. This law does not practically change any of the legal and privacy protections that previously applied to law enforcement requests for data – and those protections continue to apply. Microsoft adheres to the same principles and customer commitments related to government demands for user data.
+
+Most government customers have requirements in place for handling security incidents, including data breach notifications. Microsoft has a mature security and privacy incident management process in place that is described in the next section.
+
+## Breach notifications
+
+**Microsoft will notify customers of any breach of customer or personal data within 72 hours of incident declaration. Customers can monitor potential threats and respond to incidents on their own using Azure Security Center.**
+
+Microsoft is responsible for monitoring and remediating security and availability incidents affecting the Azure platform and notifying customers of any security breaches involving customer or personal data. Microsoft Azure has a mature security and privacy incident management process that is used for this purpose. Customers are responsible for monitoring their own resources provisioned in Azure, as described in the next section.
+
+### Shared responsibility
+
+The NIST [SP 800-145](https://csrc.nist.gov/publications/detail/sp/800-145/final) standard defines the following cloud service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). The [shared responsibility](../security/fundamentals/shared-responsibility.md) model for cloud computing is depicted in Figure 2. With on-premises deployment in their own datacenter, customers assume responsibility for all layers in the stack. As workloads get migrated to the cloud, Microsoft assumes progressively more responsibility depending on the cloud service model. For example, with the IaaS model, Microsoft's responsibility ends at the hypervisor layer, and customers are responsible for all layers above the virtualization layer, including maintaining the base operating system in guest virtual machines.
+
+**Figure 2.** Shared responsibility model in cloud computing
+
+In line with the shared responsibility model, Microsoft does not inspect, approve, or monitor individual customer applications deployed on Azure. For example, Microsoft does not know what firewall ports need to be open for a customer's application to function correctly, what the back-end database schema looks like, what constitutes normal network traffic for the application, and so on. Microsoft has extensive monitoring infrastructure in place for the cloud platform; however, customers are responsible for provisioning and monitoring their own resources in Azure. Customers can deploy a range of Azure services to monitor and safeguard their applications and data, as described in the next section.
+
+### Essential Azure services for extra protection
+
+Azure provides essential services that customers can use to gain in-depth insight into their provisioned Azure resources and get alerted about suspicious activity, including outside attacks aimed at their applications and data. The [Azure Security Benchmark](../security/benchmarks/index.yml) provides security recommendations and implementation details to help customers improve their security posture with respect to Azure resources.
+
+For more information about essential Azure services for extra protection, see [Customer monitoring of Azure resources](./documentation-government-plan-security.md#customer-monitoring-of-azure-resources).
+
+### Breach notification process
+
+Security incident response, including breach notification, is a subset of Microsoft's overall incident management plan for Azure. All Microsoft employees are trained to identify and escalate potential security incidents. A dedicated team of security engineers within the Microsoft Security Response Center (MSRC) is responsible for managing the security incident response for Azure at all times. Microsoft follows a five-step incident response process when managing both security and availability incidents for Azure services. The process includes the following stages:
+
+1. Detect
+2. Assess
+3. Diagnose
+4. Stabilize and recover
+5. Close
+
+The goal of this process is to restore normal service operations and security as quickly as possible after an issue is detected and an investigation is started. Moreover, Microsoft enables customers to investigate, manage, and respond to security incidents in their Azure subscriptions. For more information, see [Incident management implementation guidance: Azure and Office 365](https://servicetrust.microsoft.com/ViewPage/TrustDocumentsV3?command=Download&downloadType=Document&downloadId=a8a7cb87-9710-4d09-8748-0835b6754e95&tab=7f51cb60-3d6c-11e9-b2af-7bb9f5d2d913&docTab=7f51cb60-3d6c-11e9-b2af-7bb9f5d2d913_FAQ_and_White_Papers).
+
+If, during the investigation of a security or privacy event, Microsoft becomes aware that customer or personal data has been exposed or accessed by an unauthorized party, the security incident manager is required to trigger the incident notification subprocess in consultation with the Microsoft legal affairs division. This subprocess is designed to fulfill incident notification requirements stipulated in Azure customer contracts (see *Security Incident Notification* in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/DPA)). Customer notification and external reporting obligations (if any) are triggered by a security incident being declared. The customer notification subprocess begins in parallel with the security incident investigation and mitigation phases to help minimize any impact resulting from the security incident.
+
+Microsoft will notify customers, Data Protection Authorities, and data subjects (each as applicable) of any breach of customer or personal data within 72 hours of incident declaration. **The notification process upon a declared security or privacy incident will occur as expeditiously as possible while still considering the security risks of proceeding quickly.** In practice, this approach means that most notifications will take place well before the 72-hour deadline to which Microsoft commits contractually. Notification of a security or privacy incident will be delivered to one or more of the customer's administrators by any means Microsoft selects, including via email. Customers should [provide security contact details](../security-center/security-center-provide-security-contact-details.md) for their Azure subscription – this information will be used by Microsoft to contact the customer if the MSRC discovers that customer data has been exposed or accessed by an unlawful or unauthorized party. To ensure that notification can be delivered successfully, it is the customer's responsibility to maintain correct administrative contact information for each applicable subscription.
+
+Most Azure security and privacy investigations do not result in declared security incidents. Most external threats do not lead to breaches of customer or personal data because of extensive platform security measures that Microsoft has in place. Microsoft has deployed extensive monitoring and diagnostics infrastructure throughout Azure that relies on big-data analytics and machine learning to get insight into the platform health, including real-time threat intelligence. While Microsoft takes all platform attacks seriously, it would be impractical to notify customers of potential attacks at the platform level.
+
+Aside from controls implemented by Microsoft to safeguard customer data, government customers deployed on Azure derive considerable benefits from security research that Microsoft conducts to protect the cloud platform. Microsoft global threat intelligence is one of the largest in the industry, and it is derived from one of the most diverse sets of threat telemetry sources. It is both the volume and diversity of threat telemetry that makes Microsoft machine learning algorithms applied to that telemetry so powerful. All Azure customers benefit directly from these investments as described in the next section.
+
+## Threat detection and prevention
+
+The Microsoft [Graph Security API](https://www.microsoft.com/security/business/graph-security-api) uses advanced analytics to synthesize massive amounts of threat intelligence and security signals obtained across Microsoft products, services, and partners to combat cyberthreats. Millions of unique threat indicators across the most diverse set of sources are generated every day by Microsoft and its partners and shared across Microsoft products and services (Figure 3). Across its portfolio of global services, each month Microsoft scans more than 400 billion email messages for phishing and malware, processes 450 billion authentications, executes more than 18 billion page scans, and scans more than 1.2 billion devices for threats. Importantly, this data always goes through strict privacy and compliance boundaries before being used for security analysis.
+
+**Figure 3.** Microsoft global threat intelligence is one of the largest in the industry
+
+The Microsoft Graph Security API provides an unparalleled view into the evolving threat landscape and enables rapid innovation to detect and respond to threats. Machine learning models and artificial intelligence reason over vast security signals to identify vulnerabilities and threats. The Microsoft Graph Security API provides a common gateway to [share and act on security insights](/graph/security-concept-overview) across the Microsoft platform and partner solutions. Azure customers benefit directly from the Microsoft Graph Security API as Microsoft makes the vast threat telemetry and advanced analytics [available in Microsoft online services](/graph/api/resources/security-api-overview), including Azure Security Center. These services can help customers address their own security requirements in the cloud.
+
+Microsoft has implemented extensive protections for the Azure cloud platform and made available a wide range of Azure services to help customers monitor and protect their provisioned cloud resources from attacks. Nonetheless, for certain types of workloads and data classifications, government customers expect to have full operational control over their environment and even operate in a fully disconnected mode. The Azure Stack portfolio of products enables customers to provision private and hybrid cloud deployment models that can accommodate highly sensitive data, as described in the next section.
+
+## Private and hybrid cloud with Azure Stack
+
+The [Azure Stack](https://azure.microsoft.com/overview/azure-stack/) portfolio is an extension of Azure that enables customers to build and run hybrid applications across on-premises, edge locations, and the cloud. As shown in Figure 4, Azure Stack includes Azure Stack Hyperconverged Infrastructure (HCI), Azure Stack Hub (previously Azure Stack), and Azure Stack Edge (previously Azure Data Box Edge). The last two components (Azure Stack Hub and Azure Stack Edge) are discussed in this section. For more information, see [Differences between global Azure, Azure Stack Hub, and Azure Stack HCI](/azure-stack/operator/compare-azure-azure-stack).
+
+**Figure 4.** Azure Stack portfolio
+
+Azure Stack Hub and Azure Stack Edge represent key enabling technologies that allow customers to process highly sensitive data using a private or hybrid cloud and pursue digital transformation using the Microsoft [intelligent cloud and intelligent edge](https://azure.microsoft.com/overview/future-of-cloud/) approach. For many government customers, enforcing data sovereignty, addressing custom compliance requirements, and applying maximum available protection to highly sensitive data are the primary driving factors behind these efforts.
+
+### Azure Stack Hub
+
+[Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) (formerly Azure Stack) is an integrated system of software and validated hardware that customers can purchase from Microsoft hardware partners, deploy in their own data center, and then operate entirely on their own or with help from a managed service provider. With Azure Stack Hub, the customer is always fully in control of access to their data. Azure Stack Hub can accommodate up to [16 physical servers per Azure Stack Hub scale unit](/azure-stack/operator/azure-stack-overview). It represents an extension of Azure, enabling customers to provision various IaaS and PaaS services and effectively bring multi-tenant cloud technology to on-premises and edge environments. Customers can run many types of VM instances, App Services, Containers (including Cognitive Services containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes they use in Azure. Azure Stack Hub does not depend on connectivity to Azure to run deployed applications; operations are enabled via local connectivity.
+
+In addition to Azure Stack Hub, which is intended for on-premises deployment (for example, in a data center), a ruggedized and field-deployable version called [Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions. Azure Stack Hub can be [operated disconnected](/azure-stack/operator/azure-stack-disconnected-deployment) from Azure or the Internet. Customers can run the next generation of AI-enabled hybrid applications where their data lives. For example, government agencies can rely on Azure Stack Hub to bring a trained AI model to the edge and integrate it with their applications for low-latency intelligence, with no tool or process changes for local applications.
+
+Azure and Azure Stack Hub can help government customers unlock new hybrid use cases for customer-facing and internal line-of-business applications, including edge and disconnected scenarios, cloud applications intended to meet data sovereignty and compliance requirements, and cloud applications deployed on-premises in a customer data center. These use cases may include mobile scenarios or fixed deployments within highly secure data center facilities. Figure 5 shows Azure Stack Hub capabilities and key usage scenarios.
+
+**Figure 5.** Azure Stack Hub capabilities
+
+Azure Stack Hub brings the following [value proposition for key scenarios](/azure-stack/operator/azure-stack-overview) shown in Figure 5:
+
+- **Edge and disconnected solutions:** Address latency and connectivity requirements by processing data locally in Azure Stack Hub and then aggregating it in Azure for further analytics, with common application logic across both connected and disconnected scenarios. Whether delivered by aircraft, ship, or truck, Azure Stack Hub meets the tough demands of exploration, construction, agriculture, oil and gas, manufacturing, disaster response, government, and military efforts in the most extreme conditions and remote locations. Government customers can use Azure Stack Hub architecture for [edge and disconnected solutions](/azure/architecture/solution-ideas/articles/ai-at-the-edge-disconnected), for example, to bring the next generation of AI-enabled hybrid applications to the edge where the data lives and integrate them with existing applications for low-latency intelligence.
+- **Cloud applications to meet data sovereignty:** Deploy a single application differently depending on the country or region. Customers can develop and deploy applications in Azure, with full flexibility to deploy on-premises with Azure Stack Hub based on the need to meet data sovereignty or custom compliance requirements. Customers can use Azure Stack Hub architecture for [data sovereignty](/azure/architecture/solution-ideas/articles/data-sovereignty-and-gravity), for example, to transmit data from an Azure VNet to an Azure Stack Hub VNet over a private connection and ultimately store data in a SQL Server database running in a VM on Azure Stack Hub. Government customers can use Azure Stack Hub to accommodate even more restrictive requirements, such as the need to deploy solutions in a disconnected environment managed by security-cleared, in-country personnel. These disconnected environments may not be permitted to connect to the Internet for any purpose because of the security classification they operate at.
+- **Cloud application model on-premises:** Use Azure Stack Hub to update and extend legacy applications and make them cloud ready. With App Service on Azure Stack Hub, customers can create a web front end to consume modern APIs with modern clients while taking advantage of consistent programming models and skills. Customers can use Azure Stack Hub architecture for [legacy system modernization](/azure/architecture/solution-ideas/articles/unlock-legacy-data), for example, apply a consistent DevOps process, Azure Web Apps, containers, serverless computing, and microservices architectures to modernize legacy applications while integrating and preserving legacy data in mainframe and core line-of-business systems.
+
+Azure Stack Hub requires Azure Active Directory (Azure AD) or Active Directory Federation Services, backed by Active Directory as an [identity provider](/azure-stack/operator/azure-stack-identity-overview). Customers can use [role-based access control](/azure-stack/user/azure-stack-manage-permissions) (RBAC) to grant system access to authorized users, groups, and services by assigning them roles at a subscription, resource group, or individual resource level. Each role defines the access level a user, group, or service has over Azure Stack Hub resources.
+
+Azure Stack Hub protects customer data at the storage subsystem level using [encryption at rest](/azure-stack/operator/azure-stack-security-bitlocker). By default, Azure Stack Hub's storage subsystem is encrypted using BitLocker with 128-bit AES encryption. BitLocker keys are persisted in an internal secret store. At deployment time, it is also possible to configure BitLocker to use 256-bit AES encryption. Customers can store and manage their secrets including cryptographic keys using [Key Vault in Azure Stack Hub](/azure-stack/user/azure-stack-key-vault-intro).
+
+### Azure Stack Edge
+
+[Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) (formerly Azure Data Box Edge) is an AI-enabled edge computing device with network data transfer capabilities. It enables customers to pre-process data at the edge and move data to Azure efficiently. Azure Stack Edge uses advanced Field-Programmable Gate Array (FPGA) hardware natively integrated into the appliance to run machine learning algorithms at the edge efficiently. Its size and portability allow customers to run Azure Stack Edge as close to users, apps, and data as needed. Figure 6 shows Azure Stack Edge capabilities and key use cases.
+
+**Figure 6.** Azure Stack Edge capabilities
+
+Azure Stack Edge brings the following [value proposition for key use cases](../databox-online/azure-stack-edge-overview.md#use-cases) shown in Figure 6:
+
+- **Preprocess data:** Analyze data from on-premises or IoT devices to quickly obtain results while staying close to where data is generated. Azure Stack Edge transfers the full data set (or just the necessary subset of data when bandwidth is an issue) to the cloud to perform more advanced processing or deeper analytics. Preprocessing can be used to aggregate data, modify data (for example, remove personally identifiable information or other sensitive data), transfer data needed for deeper analytics in the cloud, and analyze and react to IoT events.
+- **Inference with Azure Machine Learning:** Inference is the part of deep learning that takes place after model training, such as the prediction stage resulting from applying learned capability to new data. For example, it's the part that recognizes a vehicle in a target image after the model has been trained by processing many tagged vehicle images, often augmented by computer-synthesized images (also known as synthetics). With Azure Stack Edge, customers can run Machine Learning (ML) models to get results quickly and act on them before the data is sent to the cloud. The necessary subset of data (in case of bandwidth constraints) or the full data set is transferred to the cloud to continue to retrain and improve the customer's ML models.
+- **Transfer data over network to Azure:** Use Azure Stack Edge to transfer data to Azure to enable further compute and analytics or for archival purposes.
+
+Being able to gather, discern, and distribute mission data is essential for making critical decisions. Tools that help process and transfer data directly at the edge make this capability possible. For example, Azure Stack Edge, with its light footprint and built-in hardware acceleration for ML inferencing, is useful to further the intelligence of forward-operating units or similar mission needs with AI solutions designed for the tactical edge. Data transfer from the field, which is traditionally complex and slow, is made seamless with the [Azure Data Box](https://azure.microsoft.com/services/databox/) family of products.
+
+These products unite the best of edge and cloud computing to unlock never-before-possible capabilities like synthetic mapping and ML model inferencing. From submarines to aircraft to remote bases, Azure Stack Hub and Azure Stack Edge allow customers to harness the power of cloud at the edge.
+
+Using Azure in combination with Azure Stack Hub and Azure Stack Edge, government customers can process confidential and sensitive data in a secure isolated infrastructure within the Azure public multi-tenant cloud or highly sensitive data at the edge under the customer's full operational control. The next section describes a conceptual architecture for classified workloads.
+
+## Conceptual architecture
+
+Figure 7 shows a conceptual architecture using products and services that support various data classifications. Azure public multi-tenant cloud is the underlying cloud platform that makes this solution possible. Customers can augment Azure with on-premises and edge products such as Azure Stack Hub and Azure Stack Edge to accommodate critical workloads over which customers seek increased or exclusive operational control. For example, Azure Stack Hub is intended for on-premises deployment in a customer-owned data center where the customer has full control over service connectivity. Moreover, Azure Stack Hub can be deployed to address tactical edge deployments for limited or no connectivity, including fully mobile scenarios.
+
+**Figure 7.** Conceptual architecture for classified workloads
+
+For classified workloads, customers can provision key enabling Azure services to secure target workloads while mitigating identified risks. Azure, in combination with [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) and [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/), can accommodate private and hybrid cloud deployment models, making them suitable for many government workloads involving both unclassified and classified data. The following data classification taxonomy is used in this article:
+
+- Confidential
+- Secret
+- Top secret
+
+Similar data classification schemes exist in many countries.
+
+For top secret data, customers can deploy Azure Stack Hub, which can operate fully disconnected from Azure and the Internet.
+[Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions. Figure 8 depicts key enabling services that customers can provision to accommodate various workloads on Azure.
+
+**Figure 8.** Azure support for various data classifications
+
+### Confidential data
+
+Listed below are key enabling technologies and services that customers may find helpful when deploying confidential data and workloads on Azure:
+
+- All recommended technologies used for Unclassified data, especially services such as [Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet), [Azure Security Center](../security-center/index.yml), and [Azure Monitor](../azure-monitor/index.yml).
+- Public IP addresses are disabled, allowing traffic only through private connections, including [ExpressRoute](../expressroute/index.yml) and [Virtual Private Network](../vpn-gateway/index.yml) (VPN) gateway.
+- Data encryption is recommended with customer-managed keys (CMK) in [Azure Key Vault](../key-vault/index.yml) backed by multi-tenant hardware security modules (HSMs) that have FIPS 140-2 Level 2 validation.
+- Only services that support [VNet integration](../virtual-network/virtual-network-for-azure-services.md) options are enabled. Azure VNet enables customers to place Azure resources in a non-internet routable network, which can then be connected to a customer's on-premises network using VPN technologies. VNet integration gives web apps access to resources in the virtual network.
+- Customers can use [Azure Private Link](../private-link/index.yml) to access Azure PaaS services over a private endpoint in their VNet, ensuring that traffic between their VNet and the service travels across the Microsoft global backbone network, which eliminates the need to expose the service to the public Internet.
+- [Customer Lockbox](../security/fundamentals/customer-lockbox-overview.md) for Azure enables customers to approve/deny elevated access requests for customer data in support scenarios. It's an extension of the Just-in-Time (JIT) workflow that comes with full audit logging enabled.
+
+Using Azure public multi-tenant cloud capabilities, customers can achieve the level of [isolation and security](./azure-secure-isolation-guidance.md) required to store confidential data. Customers should use Azure Security Center and Azure Monitor to gain visibility into their Azure environments including the security posture of their subscriptions.
+
+### Secret data
+
+Listed below are key enabling technologies and services that customers may find helpful when deploying secret data and workloads on Azure:
+
+- All recommended technologies used for confidential data.
+- Use Azure Key Vault [Managed HSM](../key-vault/managed-hsm/overview.md), which provides a fully managed, highly available, single-tenant HSM as a service that uses FIPS 140-2 Level 3 validated HSMs. Each Managed HSM instance is bound to a separate security domain controlled by the customer and isolated cryptographically from instances belonging to other customers.
+- [Azure Dedicated Host](https://azure.microsoft.com/services/virtual-machines/dedicated-host/) provides physical servers that can host one or more Azure VMs and are dedicated to one Azure subscription. Customers can provision dedicated hosts within a region, availability zone, and fault domain. They can then place VMs directly into provisioned hosts using whatever configuration best meets their needs. Dedicated Host provides hardware isolation at the physical server level, enabling customers to place their Azure VMs on an isolated and dedicated physical server that runs only their organization's workloads to meet corporate compliance requirements.
+- Accelerated FPGA networking based on [Azure SmartNICs](https://www.microsoft.com/research/publication/azure-accelerated-networking-smartnics-public-cloud/) enables customers to offload host networking to dedicated hardware, enabling tunneling for VNets, security, and load balancing. Offloading network traffic to a dedicated chip guards against side-channel attacks on the main CPU.
+- [Azure confidential computing](../confidential-computing/index.yml) offers encryption of data while in use, ensuring that data is always under customer control. Data is protected inside a hardware-based trusted execution environment (TEE, also known as enclave) and there is no way to view data or operations from outside the enclave.
+- [Just-in-time (JIT) virtual machine (VM) access](../security-center/security-center-just-in-time.md) can be used to lock down inbound traffic to Azure VMs by creating network security group (NSG) rules. Customers select the VM ports to which inbound traffic will be locked down, and when a user requests access to a VM, Azure Security Center checks that the user has proper role-based access control (RBAC) permissions (a sketch of the underlying NSG rule concept follows this list).
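+
+To make the mechanism concrete, the hypothetical sketch below uses the Azure SDK for Python to create the kind of deny rule that JIT manages on a network security group. The subscription ID and resource names are placeholders, and this illustrates the underlying NSG rule concept rather than the JIT feature's own API.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.network import NetworkManagementClient
+from azure.mgmt.network.models import SecurityRule
+
+credential = DefaultAzureCredential()
+network_client = NetworkManagementClient(credential, "<subscription-id>")
+
+# Deny inbound RDP by default; a JIT-style grant would temporarily add a
+# higher-priority allow rule scoped to the requester's IP address.
+deny_rdp = SecurityRule(
+    protocol="Tcp",
+    source_address_prefix="*",
+    source_port_range="*",
+    destination_address_prefix="*",
+    destination_port_range="3389",
+    access="Deny",
+    direction="Inbound",
+    priority=4000,
+)
+network_client.security_rules.begin_create_or_update(
+    "<resource-group>", "<nsg-name>", "deny-rdp-inbound", deny_rdp
+).result()
+```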
+
+To accommodate secret data in the Azure public multi-tenant cloud, customers can deploy extra technologies and services on top of those used for confidential data, and limit provisioned services to those that provide sufficient isolation. These services offer various isolation options at run time. They also support data encryption at rest using customer-managed keys in single-tenant HSMs controlled by the customer and isolated cryptographically from HSM instances belonging to other customers.
+
+### Top secret data
+
+Listed below are key enabling products that customers may find helpful when deploying top secret data and workloads on Azure:
+
+- All recommended technologies used for secret data.
+- [Azure Stack Hub](/azure-stack/operator/azure-stack-overview) (formerly Azure Stack) enables customers to run workloads using the same architecture and APIs as in Azure while having a physically isolated network for their highest classification data.
+- [Azure Stack Edge](../databox-online/azure-stack-edge-overview.md) (formerly Azure Data Box Edge) allows the storage and processing of the highest classification data and also enables customers to upload resulting information or models directly to Azure, creating an easier and more secure path for information sharing between domains.
+- In addition to Azure Stack Hub, which is intended for on-premises deployment (for example, in a data center), a ruggedized and field-deployable version called [Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions.
+- User-provided hardware security modules (HSMs) allow customers to store their encryption keys and other secrets in HSMs deployed on-premises and controlled solely by customers.
+
+Accommodating top secret data will likely require a disconnected environment, which is what Azure Stack Hub provides. Azure Stack Hub can be [operated disconnected](/azure-stack/operator/azure-stack-disconnected-deployment) from Azure or the Internet. Even though "air-gapped" networks do not necessarily increase security, many governments may be reluctant to store data with this classification in an Internet-connected environment.
+
+Azure offers an unmatched variety of public, private, and hybrid cloud deployment models to address each customer's concerns regarding the control of their data. The following section covers select use cases that might be of interest to worldwide government customers.
+
+## Select workloads and use cases
+
+This section provides an overview of select use cases that showcase Azure capabilities for workloads that might be of interest to worldwide governments. Azure is presented via a combination of the public multi-tenant cloud and the on-premises and edge capabilities provided by [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) and [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/).
+
+### Processing highly sensitive or regulated data on Azure Stack Hub
+
+Microsoft provides Azure Stack Hub as an on-premises, cloud-consistent experience for customers who do not have the ability to connect directly to the Internet, or where certain workload types must be hosted in-country due to law, compliance, or sentiment. Azure Stack Hub offers IaaS and PaaS services and shares the same APIs as the public Azure cloud. Azure Stack Hub is available in scale units of 4, 8, and 16 servers in a single server rack; 4 servers in a military-specification, ruggedized set of transit cases; or multiple racks in a modular data center configuration.
+
+Azure Stack Hub is a solution for customers who operate in scenarios where:
+
+- Microsoft does not have an in-country cloud presence and therefore cannot meet data sovereignty requirements.
+- For compliance reasons, the customer cannot connect their network to the public Internet.
+- For geo-political or security reasons, Microsoft cannot offer connectivity to other Microsoft clouds.
+- For geo-political or security reasons, the host organization may require cloud management by non-Microsoft entities, or in-country by security-cleared personnel.
+- Cloud management by Microsoft would pose a significant risk to the physical well-being of the Microsoft personnel operating the environment.
+
+For most of these customers, Microsoft and its partners offer a customer-managed, Azure Stack Hub-based private cloud appliance on field-deployable hardware from [major vendors](https://azure.microsoft.com/products/azure-stack/hub/#partners) such as Avanade, Cisco, Dell EMC, Hewlett Packard Enterprise, and Lenovo. Azure Stack Hub is manufactured, configured, and deployed by the hardware vendor, and can be ruggedized and security-hardened to meet a broad range of environmental and compliance standards, including the ability to withstand transport by aircraft, ship, or truck, and deployment into colocation, mobile, or modular data centers. Azure Stack Hub can be used in exploration, construction, agriculture, oil and gas, manufacturing, disaster response, government, and military efforts, in environments ranging from the hospitable to the most extreme conditions and remote locations. Azure Stack Hub gives customers full autonomy to monitor, manage, and provision their own private cloud resources while meeting their connectivity, compliance, and ruggedization requirements.
+
+### Machine learning model training
+
+[Artificial intelligence](/learn/modules/azure-artificial-intelligence/1-introduction-to-azure-artificial-intelligence) (AI) holds tremendous potential for governments. [Machine learning](/learn/modules/azure-artificial-intelligence/3-machine-learning) (ML) is a data science technique that allows computers to use existing data, without being explicitly programmed, to forecast future behaviors, outcomes, and trends. Moreover, [ML technologies](/azure/architecture/data-guide/technology-choices/data-science-and-machine-learning) can discover patterns, anomalies, and predictions that can help governments in their missions. As technical barriers continue to fall, decision-makers face the opportunity to develop and explore transformative AI applications. Five main vectors can make it easier, faster, and cheaper to adopt ML:
+
+- Unsupervised learning
+- Reducing need for training data
+- Accelerated learning
+- Transparency of outcome
+- Deploying closer to where data lives
+
+In the following sections, we expand on areas that can help government agencies with some of the above vectors.
+
+### IoT analytics
+
+In recent years, we have witnessed a massive proliferation of Internet of Things (IoT) devices and sensors. In almost all cases, these sensors gather signals and data from the environments and conditions they're designed for. The spectrum of IoT sensor capabilities extends from measuring the level of moisture in soil all the way to gathering intelligence at an altitude of 5,000 meters. This breadth of use cases makes data-analysis tools and procedures essential for realizing value from the huge volumes of data gathered by IoT devices.
+
+Governments are increasingly employing IoT devices for their missions, which can include maintenance prediction, border monitoring, weather stations, smart meters, and field operations. In many cases, the data must be analyzed at or near where it's gathered. The main challenges of IoT analytics are: (1) large amounts of data from independent sources, (2) analytics at the edge, often in disconnected scenarios, and (3) data and analysis aggregation.
+
+With innovative solutions such as [IoT Hub](https://azure.microsoft.com/services/iot-hub/) and [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/), Azure services are well positioned to help governments with these challenges.
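+
+As a hedged illustration of device-to-cloud ingestion with IoT Hub, the sketch below uses the Azure IoT device SDK for Python to send a single telemetry message; the connection string is a placeholder for a device identity registered in an IoT hub.
+
+```python
+import json
+
+from azure.iot.device import IoTHubDeviceClient, Message
+
+# Placeholder connection string for a device registered in IoT Hub.
+CONN_STR = "HostName=<hub-name>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>"
+
+client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
+client.connect()
+
+# Send one telemetry reading, for example from a soil-moisture sensor.
+reading = {"sensor": "soil-moisture", "value": 0.31}
+client.send_message(Message(json.dumps(reading)))
+
+client.shutdown()
+```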
+
+### Precision Agriculture with Farm Beats
+
+Agriculture plays a vital role in most economies worldwide. In many countries, a large share of rural households depends on agriculture, which can contribute a sizable portion of GDP and employment. In project [Farm Beats](https://www.microsoft.com/research/project/farmbeats-iot-agriculture/), we gather data from farms that we couldn't get before, and then, by applying AI and ML algorithms, we turn this data into actionable insights for farmers. We call this technique data-driven farming: the ability to map every farm and overlay it with data, for example, the soil moisture level and soil temperature 6 inches below the surface. These maps can then enable techniques such as precision agriculture, which has been shown to improve yield, reduce costs, and benefit the environment. Although precision agriculture was proposed as a technique more than 30 years ago, it hasn't taken off. The biggest reason is the inability to capture enough data from farms to accurately represent on-farm conditions. Our goal in the Farm Beats project is to accurately construct precision maps at a fraction of the cost.
+
+### Unleashing the power of analytics with synthetic data
+
+Synthetic data is data that is artificially created rather than generated by actual events. It is often created with the help of computer algorithms and is used for a wide range of activities, including as test data for new products and tools and for validating and improving ML models. Synthetic data can meet specific needs or conditions that are not available in existing real data. For governments, the nature of synthetic data removes many barriers and helps data scientists with privacy concerns, accelerated learning, and reducing the data volume needed for the same outcome. The main benefits of synthetic data are:
+
+- **Overcoming restrictions:** Real data may have usage constraints due to privacy rules or other regulations. Synthetic data can replicate all important statistical properties of real data without exposing real data.
+- **Scarcity:** Providing data where real data does not exist for a given event.
+- **Precision:** Synthetic data is perfectly labeled.
+- **Quality:** The quality of synthetic data can be precisely measured to fit the mission conditions.
+
+Synthetic data can exist in several forms, including text, audio, video, and hybrid.
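+
+To make the idea concrete, the hedged sketch below shows one very simple way to generate synthetic numeric data: fit a Gaussian to a real-valued column and sample new records from it. The input data here is invented for illustration, and real pipelines use far more sophisticated generators.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(seed=42)
+
+# Stand-in for a sensitive real-valued column (for example, ages).
+real = rng.normal(loc=42.0, scale=9.0, size=1_000)
+
+# Estimate the statistical properties we want to preserve...
+mu, sigma = real.mean(), real.std()
+
+# ...and sample a synthetic column that mimics them without
+# reproducing any individual real record.
+synthetic = rng.normal(loc=mu, scale=sigma, size=1_000)
+
+print(f"real:      mean={real.mean():.2f} std={real.std():.2f}")
+print(f"synthetic: mean={synthetic.mean():.2f} std={synthetic.std():.2f}")
+```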
+
+### Knowledge mining
+
+The exponential growth of unstructured data gathered in recent years has created many analytical problems for government agencies. The problem intensifies when data sets come from diverse sources such as text, audio, video, and imaging. [Knowledge mining](/learn/modules/azure-artificial-intelligence/2-knowledge-mining) is the process of discovering useful knowledge from a collection of diverse data sources. This widely used data mining technique includes data preparation and selection, data cleansing, incorporation of prior knowledge on data sets, and interpretation of accurate solutions from the observed results. It has proven useful for large volumes of data in different government agencies.
+
+For instance, captured data from the field often includes documents, pamphlets, letters, spreadsheets, propaganda, videos, and audio files across many disparate structured and unstructured formats. Buried within the data are [actionable insights](https://www.youtube.com/watch?v=JFdF-Z7ypQo) that can enhance effective and timely response to crisis and drive decisions. The objective of knowledge mining is to enable decisions that are better, faster, and more humane by implementing proven commercial algorithm-based technologies.
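+
+One hedged way to approach knowledge mining on Azure is with Azure Cognitive Search. The sketch below queries a hypothetical index of captured field documents using the Python SDK; the service endpoint, index name, field names, and API key are all placeholders.
+
+```python
+from azure.core.credentials import AzureKeyCredential
+from azure.search.documents import SearchClient
+
+# Placeholder endpoint, index name, and query key.
+client = SearchClient(
+    endpoint="https://<search-service>.search.windows.net",
+    index_name="field-documents",
+    credential=AzureKeyCredential("<query-key>"),
+)
+
+# Full-text query across ingested documents, pamphlets, transcripts, etc.
+results = client.search(search_text="supply routes", top=10)
+for doc in results:
+    # 'id' and 'title' are hypothetical fields in the example index.
+    print(doc["id"], doc.get("title"))
+```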
+
+### Scenarios for confidential computing
+
+Security is a key driver accelerating the adoption of cloud computing, but it's also a major concern when customers are moving sensitive IP and data to the cloud.
+
+Microsoft Azure provides broad capabilities to secure data at rest and in transit, but sometimes the requirement is also to protect data from threats as it's being processed. Microsoft [Azure confidential computing](../confidential-computing/index.yml) is designed to address this scenario by performing computations in a hardware-based trusted execution environment (TEE, also known as an enclave) based on [Intel Software Guard Extensions](https://software.intel.com/sgx) (SGX) technology. The hardware provides a protected container by securing a portion of the processor and memory. Only authorized code is permitted to run and to access data, so code and data are protected against viewing and modification from outside the TEE.
+
+TEEs can directly address scenarios involving data protection while in use. For example, consider the scenario where data coming from a public or unclassified source needs to be matched with data from a highly sensitive source. Azure confidential computing can enable that matching to occur in the public cloud while protecting the highly sensitive data from disclosure. This circumstance is common in highly sensitive national security and law enforcement scenarios.
+
+A second scenario involves data coming from multiple sources that needs to be analyzed together, even though none of the sources is authorized to see the others' data. Each individual provider encrypts the data they provide, and that data is decrypted only within the TEE. As such, neither an external party nor any of the providers can see the combined data set. This capability is valuable for secondary use of healthcare data.
+
+Customers deploying the types of workloads discussed in this section typically seek assurances from Microsoft that the underlying cloud platform security controls for which Microsoft is responsible are operating effectively. To address the needs of customers across regulated markets worldwide, Azure maintains a comprehensive compliance portfolio based on formal third-party certifications and other types of assurances to help customers meet their own compliance obligations.
+
+## Compliance and certifications
+
+**Azure** has the broadest [compliance coverage](../compliance/index.yml) in the industry, including key independent certifications and attestations such as ISO 27001, ISO 27017, ISO 27018, ISO 22301, ISO 9001, ISO 20000-1, SOC 1/2/3, PCI DSS Level 1, PCI 3DS, HITRUST, CSA STAR Certification, CSA STAR Attestation, US FedRAMP High, Australia IRAP, Germany C5, Japan CS Gold Mark, Singapore MTCS Level 3, Spain ENS High, UK G-Cloud and Cyber Essentials Plus, and many more. The Azure compliance portfolio includes more than 90 compliance offerings spanning globally applicable certifications, US Government-specific programs, industry assurances, and regional/country-specific offerings. Government customers can use these offerings when addressing their own compliance obligations across regulated industries and markets worldwide.
+
+When deploying applications that are subject to regulatory compliance obligations on Azure, customers seek assurances that all cloud services comprising the solution are included in the cloud service providerΓÇÖs audit scope. Azure offers industry-leading depth of compliance coverage judged by the number of cloud services in audit scope for each Azure certification. Customers can build and deploy realistic applications and benefit from extensive compliance coverage provided by Azure independent third-party audits.
+
+**Azure Stack Hub** also provides [compliance documentation](https://aka.ms/azurestackcompliance) to help customers integrate Azure Stack Hub into solutions that address regulated workloads. Customers can download the following Azure Stack Hub compliance documents:
+
+- PCI DSS assessment report produced by a third-party Qualified Security Assessor (QSA).
+- Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM) assessment report, including Azure Stack Hub control mapping to CCM domains and controls.
+- FedRAMP High System Security Plan (SSP) precompiled template to demonstrate how Azure Stack Hub addresses applicable controls, Customer Responsibility Matrix for the FedRAMP High baseline, and FedRAMP assessment report produced by an independent Third-Party Assessor Organization (3PAO).
+
+**[Azure Blueprints](https://azure.microsoft.com/services/blueprints/)** is a service that helps automate compliance and cybersecurity risk management in cloud environments. For more information on Azure Blueprints, including production-ready blueprint solutions for ISO 27001, NIST SP 800-53, PCI DSS, HITRUST, and other standards, see the [Azure Blueprint guidance](../governance/blueprints/overview.md).
+
+Azure compliance and certification resources are intended to help customers address their own compliance obligations with various regulations. Some governments across the world have already established cloud adoption mandates and the corresponding regulation to facilitate cloud onboarding. However, many government customers still operate traditional on-premises datacenters and are in the process of formulating their cloud adoption strategy. Azure's extensive compliance portfolio can assist customers irrespective of their cloud adoption maturity level.
+
+## Frequently asked questions
+
+This section addresses common customer questions related to Azure public, private, and hybrid cloud deployment models.
+
+### Data residency and data sovereignty
+
+- **Data location:** How does Microsoft keep data within a specific country's boundaries? In what cases does data leave? What data attributes leave? **Answer:** Microsoft provides [strong customer commitments](https://azure.microsoft.com/global-infrastructure/data-residency/) regarding cloud services data residency and transfer policies:
+ - **Data storage for regional services:** Most Azure services are deployed regionally and enable customers to specify the region into which the service will be deployed, keeping customer data at rest within that geography.
+ - **Data storage for non-regional services:** Certain Azure services do not enable customers to specify the region where the services will be deployed; see the data residency page linked above for details.
+- **Sovereign cloud deployment:** Why doesn't Microsoft deploy a sovereign, physically isolated cloud instance in every country that requests it? **Answer:** Microsoft is actively pursuing sovereign cloud deployments where a business case can be made with governments across the world. However, physical isolation or "air gapping", as a strategy, is diametrically opposed to the strategy of hyperscale cloud. The value proposition of the cloud (rapid feature growth, resiliency, and cost-effective operation) breaks down when the cloud is fragmented and physically isolated. These strategic challenges compound with each extra sovereign cloud or fragmentation within a sovereign cloud. Whereas a sovereign cloud might prove to be the right solution for certain customers, it is not the only option available to worldwide public sector customers.
+- **Sovereign cloud customer options:** How can Microsoft support governments that need to operate cloud services completely in-country by local security-cleared personnel? What options does Microsoft have for cloud services operated entirely on-premises within a customer-owned datacenter, where government employees exercise sole operational and data access control? **Answer:** Government customers can use [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) to deploy a private cloud on-premises managed by the customer's own security-cleared, in-country personnel. Customers can run many types of VM instances, App Services, Containers (including Cognitive Services containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes they use in Azure. With Azure Stack Hub, customers have sole control of their data, including storage, processing, transmission, and remote access.
+- **Local jurisdiction:** Is Microsoft subject to local country jurisdiction based on the availability of Azure public cloud service? **Answer:** Yes, Microsoft must comply with all applicable local laws; however, government requests for customer data must also comply with applicable laws. A subpoena or its local equivalent is required to request non-content data. A warrant, court order, or its local equivalent is required for content data. Government requests for customer data follow a strict procedure according to [Microsoft practices for responding to government requests](https://blogs.microsoft.com/datalaw/our-practices/). Every year, Microsoft rejects many law enforcement requests for customer data. Challenges to government requests can take many forms. In many of these cases, Microsoft simply informs the requesting government that it is unable to disclose the requested information and explains the reason for rejecting the request. Where appropriate, Microsoft challenges requests in court. Our [Law Enforcement Request Report](https://www.microsoft.com/about/corporate-responsibility/lerr) and [US National Security Order Report](https://www.microsoft.com/corporate-responsibility/us-national-security-orders-report) are updated every six months and show that most of our customers are never impacted by government requests for data. For example, in the second half of 2019, Microsoft received 39 requests from law enforcement for accounts associated with enterprise cloud customers. Of those requests, only one warrant resulted in disclosure of customer content related to a non-US enterprise customer whose data was stored outside the United States.
+- **Autarky:** Can Microsoft cloud operations be separated from the Internet or the rest of Microsoft cloud and connected solely to local government network? Are operations possible without external connections to a third party? **Answer:** Yes, depending on the cloud deployment model.
+ - **Public Cloud:** Azure regional datacenters can be connected to local government network through dedicated private connections such as ExpressRoute. Independent operation without any connectivity to a third party such as Microsoft is not possible in public cloud.
+ - **Private Cloud:** With Azure Stack Hub, customers have full control over network connectivity and can operate Azure Stack Hub in [fully disconnected mode](/azure-stack/operator/azure-stack-disconnected-deployment).
+- **Data flow restrictions:** What provisions exist for approval and documentation of all data exchange between customer and Microsoft for local, in-country deployed cloud services? **Answer:** Options vary based on the cloud deployment model.
+ - **Private cloud:** For private cloud deployment using Azure Stack Hub, customers can control which data is exchanged with third parties. Azure Stack Hub telemetry can be turned off based on customer preference and Azure Stack Hub can be operated fully disconnected. Moreover, Azure Stack Hub offers the [capacity-based billing model](https://azure.microsoft.com/pricing/details/azure-stack/hub/) in which no billing or consumption data leaves the customer's premises.
+ - **Public cloud:** In Azure public cloud, customers can use [Network Watcher](https://azure.microsoft.com/services/network-watcher/) to monitor network traffic associated with their workloads. For public cloud workloads, all billing data is generated through telemetry used exclusively for billing purposes and sent to Microsoft billing systems. Customers can [download and view](../cost-management-billing/manage/download-azure-invoice-daily-usage-date.md) their billing and usage data; however, they cannot prevent this information from being sent to Microsoft. Microsoft engineers [do not have default access](https://www.microsoft.com/trust-center/privacy/data-access) to customer data. For customer-initiated support requests, [Customer Lockbox](../security/fundamentals/customer-lockbox-overview.md) for Azure can be used to enable customers to approve/deny elevated requests for customer data access. Moreover, customers have control over data encryption at rest using customer-managed encryption keys.
+- **Patching and maintenance for private cloud:** How can Microsoft support patching and other maintenance for an Azure Stack Hub private cloud deployment? **Answer:** Microsoft has a regular cadence in place for releasing [update packages for Azure Stack Hub](/azure-stack/operator/azure-stack-updates). Government customers are the sole operators of Azure Stack Hub, and they can download and install these update packages. An update alert for Microsoft software updates and hotfixes will appear in the Update blade for Azure Stack Hub instances that are connected to the Internet. If an instance isn't connected, customers can subscribe to the RSS or ATOM feed to be notified about each update release, as explained in our online documentation.
+
+### Safeguarding of customer data
+
+- **Microsoft network security:** What network controls and security does Microsoft use? Can customer requirements be considered? **Answer:** For insight into Azure infrastructure protection, customers should review Azure [network architecture](../security/fundamentals/infrastructure-network.md), Azure [production network](../security/fundamentals/production-network.md), and Azure [infrastructure monitoring](../security/fundamentals/infrastructure-monitoring.md). Customers deploying Azure applications should review Azure [network security overview](../security/fundamentals/network-overview.md) and [network security best practices](../security/fundamentals/network-best-practices.md). To provide feedback or requirements, contact your Microsoft account representative.
+- **Customer separation:** How does Microsoft logically or physically separate customers within its cloud environment? Is there an option for select customers to ensure complete physical separation? **Answer:** Azure uses [logical isolation](./azure-secure-isolation-guidance.md) to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously enforcing controls designed to keep customers from accessing one another's data or applications. There is also an option to enforce physical compute isolation via [Azure Dedicated Host](https://azure.microsoft.com/services/virtual-machines/dedicated-host/), which provides physical servers that can host one or more Azure VMs and are dedicated to one Azure subscription. Customers can provision dedicated hosts within a region, availability zone, and fault domain. They can then place VMs directly into provisioned hosts using whatever configuration best meets their needs. Dedicated Host provides hardware isolation at the physical server level, enabling customers to place their Azure VMs on an isolated and dedicated physical server that runs only their organization's workloads to meet corporate compliance requirements.
+- **Data encryption at rest and in transit:** Does Microsoft enforce data encryption by default? Does Microsoft support customer-managed encryption keys? **Answer:** Yes, many Azure services, including Azure Storage and Azure SQL Database, encrypt data by default and support customer-managed keys. Azure [Storage encryption for data at rest](../storage/common/storage-service-encryption.md) ensures that data is automatically encrypted before persisting it to Azure Storage and decrypted before retrieval. Customers can [use their own encryption keys](../storage/common/customer-managed-keys-configure-key-vault.md) for Azure Storage encryption at rest and manage their keys in Azure Key Vault. Storage encryption is enabled by default for all new and existing storage accounts and cannot be disabled. When provisioning storage accounts, customers can enforce the "[secure transfer required](../storage/common/storage-require-secure-transfer.md)" option, which allows access only from secure connections; this option is enabled by default when creating a storage account in the Azure portal (see the sketch after this list). Azure SQL Database enforces [data encryption in transit](../azure-sql/database/security-overview.md#information-protection-and-encryption) by default and provides [transparent data encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) at rest [by default](https://azure.microsoft.com/updates/newly-created-azure-sql-databases-encrypted-by-default/), allowing customers to use Azure Key Vault and *[bring your own key](../azure-sql/database/transparent-data-encryption-byok-overview.md)* (BYOK) functionality to control key management tasks including key permissions, rotation, and deletion.
+- **Data encryption during processing:** Can Microsoft protect customer data while it is being processed in memory? **Answer:** Yes, Microsoft [Azure confidential computing](https://azure.microsoft.com/solutions/confidential-compute/) is designed to address this scenario by performing computations in a hardware-based trusted execution environment (TEE, also known as an enclave) based on [Intel Software Guard Extensions](https://software.intel.com/sgx) (SGX) technology. The hardware provides a protected container by securing a portion of the processor and memory. Only authorized code is permitted to run and to access data, so code and data are protected against viewing and modification from outside the TEE.
+- **FIPS 140-2 validation:** Does Microsoft offer FIPS 140-2 Level 3 validated hardware security modules (HSMs) in Azure? **Answer:** Yes, Azure Key Vault [Managed HSM](../key-vault/managed-hsm/overview.md) provides a fully managed, highly available, single-tenant HSM as a service that uses FIPS 140-2 Level 3 validated HSMs (certificate [#3718](https://csrc.nist.gov/projects/cryptographic-module-validation-program/Certificate/3718)). Each Managed HSM instance is bound to a separate security domain controlled by the customer and isolated cryptographically from instances belonging to other customers.
+- **Customer provided crypto:** Can customers bring their own cryptography or encryption hardware? **Answer:** Yes, customers can use their own HSMs deployed on-premises with their own crypto algorithms. However, if customers expect to use customer-managed keys for services integrated with [Azure Key Vault](https://azure.microsoft.com/services/key-vault/) (for example, Azure Storage, SQL Database, Disk encryption, and others), then they need to use hardware security modules (HSMs) and [cryptography supported by Azure Key Vault](../key-vault/keys/about-keys.md).
+- **Access to customer data by Microsoft personnel:** How does Microsoft restrict access to customer data by Microsoft engineers? **Answer:** Microsoft engineers [do not have default access](https://www.microsoft.com/trust-center/privacy/data-access) to customer data in the cloud. Instead, they are granted access, under management oversight, only when necessary using the [restricted access workflow](https://www.youtube.com/watch?v=lwjPGtGGe84&feature=youtu.be&t=25m). For customer-initiated support requests, [Customer Lockbox](../security/fundamentals/customer-lockbox-overview.md) for Azure provides customers with the capability to control how a Microsoft engineer accesses their data. As part of the support workflow, a Microsoft engineer may require elevated access to customer data. Customer Lockbox for Azure puts the customer in charge of that decision by enabling the customer to approve/deny such elevated requests.
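+
+As a hedged illustration of the "secure transfer required" control described above, the sketch below uses the Azure SDK for Python to create a storage account that accepts HTTPS connections only; the subscription ID, resource names, and location are placeholders.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.storage import StorageManagementClient
+from azure.mgmt.storage.models import Kind, Sku, StorageAccountCreateParameters
+
+credential = DefaultAzureCredential()
+storage_client = StorageManagementClient(credential, "<subscription-id>")
+
+# Create a StorageV2 account with secure transfer (HTTPS only) enforced.
+poller = storage_client.storage_accounts.begin_create(
+    "<resource-group>",
+    "<storage-account-name>",
+    StorageAccountCreateParameters(
+        sku=Sku(name="Standard_LRS"),
+        kind=Kind.STORAGE_V2,
+        location="eastus",
+        enable_https_traffic_only=True,
+    ),
+)
+account = poller.result()
+print(account.name, account.enable_https_traffic_only)
+```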
+
+### Operations
+
+- **Code review:** What can Microsoft do to help ensure that no malicious code has been inserted into the services that customers use? Can customers review Microsoft code deployments? **Answer:** Microsoft has full control over all source code that comprises Azure services. The procedure for patching guest VMs differs greatly from traditional on-premises patching, where patch verification is necessary following installation. In Azure, patches are not applied to guest VMs; instead, the VM is simply restarted, and when the VM boots, it is guaranteed to boot from a known good image that Microsoft controls. There is no way to insert malicious code into the image or interfere with the boot process. PaaS VMs offer more advanced protection against persistent malware infections than traditional physical server solutions, which, if compromised by an attacker, can be difficult to clean even after the vulnerability is corrected. With PaaS VMs, reimaging is a routine part of operations, and it can help clean out intrusions that have not even been detected. This approach makes it more difficult for a compromise to persist. Customers cannot review Azure source code; however, online access to view source code is available for key products through the Microsoft [Government Security Program](https://www.microsoft.com/securityengineering/gsp) (GSP).
+- **DevOps personnel (cleared nationals):** What controls or clearance levels does Microsoft have for the personnel that have DevOps access to cloud environments or physical access to data centers? **Answer:** Microsoft conducts [background screening](./documentation-government-plan-security.md#screening) on operations personnel with access to production systems and physical data center infrastructure. Microsoft cloud background check includes verification of education and employment history upon hire, and extra checks conducted every two years thereafter (where permissible by law), including criminal history check, OFAC list, BIS denied persons list, and DDTC debarred parties list.
+- **Data center site options:** Is Microsoft willing to deploy a data center to a specific physical location to meet more advanced security requirements? **Answer:** Customers should inquire with their Microsoft account team regarding options for data center locations.
+- **Service availability guarantee:** How do we ensure that Microsoft (or a particular government or other entity) can't turn off our cloud services? **Answer:** Customers should review the Microsoft [Online Services Terms](http://www.microsoftvolumelicensing.com/Downloader.aspx?documenttype=OST&lang=English) (OST) and the OST [Data Protection Addendum](https://aka.ms/DPA) (DPA) for contractual commitments Microsoft makes regarding service availability and use of online services.
+- **Non-traditional cloud service needs:** What is the recommended approach for managing scenarios where Azure services are required in periodically internet-free or disconnected environments? **Answer:** In addition to [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/), which is intended for on-premises deployment and disconnected scenarios, a ruggedized and field-deployable version called [Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments with limited or no connectivity, fully mobile requirements, and harsh conditions requiring military-specification solutions.
+
+### Transparency and audit
+
+- **Audit documentation:** Does Microsoft make all audit documentation readily available to customers to download and examine? **Answer:** Yes, Microsoft makes all independent third-party audit reports and other related documentation available to customers under a non-disclosure agreement from the Azure portal. An existing Azure subscription or [free trial subscription](https://azure.microsoft.com/free/) is required to access the Azure Security Center [audit reports blade](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/AuditReportsBlade).
+- **Process auditability:** Does Microsoft make its processes, data flow, and documentation available to customers or regulators for audit? **Answer:** Yes, Microsoft offers a Regulator Right to Examine, which is a program Microsoft implemented to provide regulators with direct right to examine Azure, including the ability to conduct an on-site examination, to meet with Microsoft personnel and Microsoft external auditors, and to access any related information, records, reports, and documents.
+- **Service documentation:** Can Microsoft provide in-depth documentation covering service architecture, software and hardware components, and data protocols? **Answer:** Yes, Microsoft provides extensive and in-depth Azure online documentation covering all these topics. For example, customers can review documentation on Azure [products](../index.yml), [global infrastructure](https://azure.microsoft.com/global-infrastructure/), and [API reference](/rest/api/azure/).
+
+## Next steps
+
+Learn more about:
+
+- [Azure Security](../security/index.yml)
+- [Azure Compliance](../compliance/index.yml)
+- [Azure guidance for secure isolation](./azure-secure-isolation-guidance.md)
+- [Azure for government - worldwide government](https://azure.microsoft.com/industries/government/)
azure-government Documentation Government Plan Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-plan-security.md
description: Customer guidance and best practices for securing their workloads.
Previously updated : 1/27/2021 Last updated : 02/26/2021
Mitigating risk and meeting regulatory obligations are driving the increasing focus on data encryption.
- Server-side encryption that uses service-managed keys, customer-managed keys (CMK) in Azure, or CMK in customer-controlled hardware.
- Client-side encryption that enables customers to manage and store keys on-premises or in another secure location. Client-side encryption is built into the Java and .NET storage client libraries, which can utilize Azure Key Vault APIs, making the implementation straightforward. Use Azure Key Vault to obtain access to the secrets in Azure Key Vault for specific individuals using Azure Active Directory.
-Data encryption provides isolation assurances that are tied directly to encryption key access. Since Azure uses strong ciphers for data encryption, only entities with access to encryption keys can have access to data. Deleting or revoking encryption keys renders the corresponding data inaccessible.
+Data encryption provides isolation assurances that are tied directly to encryption key access. Since Azure uses strong ciphers for data encryption, only entities with access to encryption keys can have access to data. Deleting or revoking encryption keys renders the corresponding data inaccessible.
### Encryption at rest
-Azure provides extensive options for [encrypting data at rest](../security/fundamentals/encryption-atrest.md) to help customers safeguard their data and meet their compliance needs using both Microsoft-managed encryption keys and customer-managed encryption keys. This process relies on multiple encryption keys and services such as Azure Key Vault and Azure Active Directory to ensure secure key access and centralized key management. For more information about Azure Storage Service Encryption and Azure Disk Encryption, see [Data encryption at rest](./azure-secure-isolation-guidance.md#data-encryption-at-rest).
+Azure provides extensive options for [encrypting data at rest](../security/fundamentals/encryption-atrest.md) to help customers safeguard their data and meet their compliance needs using both Microsoft-managed encryption keys and customer-managed encryption keys. This process relies on multiple encryption keys and services such as Azure Key Vault and Azure Active Directory to ensure secure key access and centralized key management. For more information about Azure Storage service encryption and Azure disk encryption, see [Data encryption at rest](./azure-secure-isolation-guidance.md#data-encryption-at-rest).
### Encryption in transit
-Azure provides many options for [encrypting data in transit](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit). Data encryption in transit isolates customer network traffic from other traffic and helps protect data from interception. For more information, see [Data encryption in transit](./azure-secure-isolation-guidance.md#data-encryption-in-transit).
+Azure provides many options for [encrypting data in transit](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit). Data encryption in transit isolates customer network traffic from other traffic and helps protect data from interception. For more information, see [Data encryption in transit](./azure-secure-isolation-guidance.md#data-encryption-in-transit).
The basic encryption available for connectivity to Azure Government supports Transport Layer Security (TLS) 1.2 protocol and X.509 certificates. Federal Information Processing Standard (FIPS) 140-2 validated cryptographic algorithms are also used for infrastructure network connections between Azure Government datacenters. Windows, Windows Server, and Azure File shares can use SMB 3.0 for encryption between the VM and the file share. Use client-side encryption to encrypt the data before it is transferred into storage in a client application, and to decrypt the data after it is transferred out of storage.

### Best practices for encryption

-- IaaS VMs: Use Azure Disk Encryption. Turn on Storage Service Encryption to encrypt the VHD files that are used to back up those disks in Azure Storage. This approach only encrypts newly written data, which means that, if you create a VM and then enable Storage Service Encryption on the storage account that holds the VHD file, only the changes will be encrypted, not the original VHD file.
-- Client-side encryption: Represents the most secure method for encrypting your data, because it encrypts it before transit, and encrypts the data at rest. However, it does require that you add code to your applications using storage, which you might not want to do. In those cases, you can use HTTPs for your data in transit, and Storage Service Encryption to encrypt the data at rest. Client-side encryption also involves more load on the client that you have to account for in your scalability plans, especially if you are encrypting and transferring much data.
+- **IaaS VMs:** Use Azure disk encryption. Turn on Storage service encryption to encrypt the VHD files that are used to back up those disks in Azure Storage. This approach only encrypts newly written data, which means that, if you create a VM and then enable Storage service encryption on the storage account that holds the VHD file, only the changes will be encrypted, not the original VHD file.
+- **Client-side encryption:** Represents the most secure method for encrypting your data, because it encrypts it before transit, and encrypts the data at rest. However, it does require that you add code to your applications using storage, which you might not want to do. In those cases, you can use HTTPS for your data in transit, and Storage service encryption to encrypt the data at rest. Client-side encryption also involves more load on the client that you have to account for in your scalability plans, especially if you are encrypting and transferring much data.
## Managing secrets
-Proper protection and management of encryption keys is essential for data security. Customers should strive to simplify key management and maintain control of keys used by cloud applications and services to encrypt data. [Azure Key Vault](../key-vault/general/overview.md) is a multi-tenant key management service that Microsoft recommends for managing and controlling access to encryption keys when seamless integration with Azure services is required. Azure Key Vault enables customers to store their encryption keys in Hardware Security Modules (HSMs) that are FIPS 140-2 validated. For more information, see [Data encryption key management with Azure Key Vault](./azure-secure-isolation-guidance.md#azure-key-vault)
+Proper protection and management of encryption keys is essential for data security. Customers should strive to simplify key management and maintain control of keys used by cloud applications and services to encrypt data. [Azure Key Vault](../key-vault/index.yml) is a cloud service for securely storing and managing secrets. Key Vault enables customers to store their encryption keys in hardware security modules (HSMs) that are FIPS 140-2 validated. For more information, see [Data encryption key management](./azure-secure-isolation-guidance.md#data-encryption-key-management).
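+
+To illustrate, the hedged sketch below stores and retrieves a secret with the Azure Key Vault secrets SDK for Python; the vault URL and secret values are placeholders, and the caller is assumed to have secret permissions on the vault.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.keyvault.secrets import SecretClient
+
+credential = DefaultAzureCredential()
+client = SecretClient(
+    vault_url="https://<vault-name>.vault.azure.net", credential=credential
+)
+
+# Store a secret, then read it back by name.
+client.set_secret("db-connection-string", "<connection-string>")
+retrieved = client.get_secret("db-connection-string")
+print(retrieved.name, retrieved.properties.version)
+```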
### Best practices for managing secrets
Proper protection and management of encryption keys is essential for data security.
## Understanding isolation
-Isolation in Azure Government is achieved through the implementation of trust boundaries, segmentation, and containers to limit data access to only authorized users, services, and applications. Azure Government supports environment and tenant isolation controls and capabilities.
+Isolation in Azure Government is achieved through the implementation of trust boundaries, segmentation, and containers to limit data access to only authorized users, services, and applications. Azure Government supports environment and tenant isolation controls and capabilities.
### Environment isolation
-The Azure Government multi-tenant cloud platform environment is an Internet standards-based Autonomous System (AS) that is physically isolated and separately administered from the rest of Azure public cloud. The AS as defined by [IETF RFC 4271](https://datatracker.ietf.org/doc/rfc4271/) is comprised of a set of switches and routers under a single technical administration, using an interior gateway protocol and common metrics to route packets within the AS, and using an exterior gateway protocol to route packets to other ASs though a single and clearly defined routing policy. In addition, Azure Government for DoD regions within Azure Government are geographically separated physical instances of compute, storage, SQL, and supporting services that store and/or process customer content in accordance with DoD Cloud Computing Security Requirements Guide (SRG) [Section 5.2.2.3](https://dl.dod.cyber.mil/wp-content/uploads/cloud/SRG/https://docsupdatetracker.net/index.html#5.2LegalConsiderations) requirements.
+The Azure Government multi-tenant cloud platform environment is an Internet standards-based Autonomous System (AS) that is physically isolated and separately administered from the rest of Azure public cloud. An AS, as defined by [IETF RFC 4271](https://datatracker.ietf.org/doc/rfc4271/), comprises a set of switches and routers under a single technical administration, using an interior gateway protocol and common metrics to route packets within the AS, and using an exterior gateway protocol to route packets to other ASs through a single and clearly defined routing policy. In addition, Azure Government for DoD regions within Azure Government are geographically separated physical instances of compute, storage, SQL, and supporting services that store and/or process customer content in accordance with DoD Cloud Computing Security Requirements Guide (SRG) [Section 5.2.2.3](https://dl.dod.cyber.mil/wp-content/uploads/cloud/SRG/https://docsupdatetracker.net/index.html#5.2LegalConsiderations) requirements.
The isolation of the Azure Government environment is achieved through a series of physical and logical controls, and associated capabilities that include:
The isolation of the Azure Government environment is achieved through a series of physical and logical controls, and associated capabilities that include:
- Specific credentials and multi-factor authentication for logical access
- Infrastructure for Azure Government is located within the United States
-Within the Azure Government network, internal network system components are isolated from other system components through implementation of separate subnets and access control policies on management interfaces. Azure Government does not directly peer with the public internet or with the Microsoft corporate network. Azure Government directly peers to the commercial Microsoft Azure network, which has routing and transport capabilities to the Internet and the Microsoft Corporate network. Azure Government limits its exposed surface area by applying extra protections and communications capabilities of our commercial Azure network. In addition, Azure Government ExpressRoute (ER) uses peering with our customer's networks over non-Internet private circuits to route ER customer "DMZ" networks using specific Border Gateway Protocol (BGP)/AS peering as a trust boundary for application routing and associated policy enforcement.
+Within the Azure Government network, internal network system components are isolated from other system components through implementation of separate subnets and access control policies on management interfaces. Azure Government does not directly peer with the public internet or with the Microsoft corporate network. Azure Government directly peers to the commercial Microsoft Azure network, which has routing and transport capabilities to the Internet and the Microsoft Corporate network. Azure Government limits its exposed surface area by applying extra protections and communications capabilities of our commercial Azure network. In addition, Azure Government ExpressRoute (ER) uses peering with our customer's networks over non-Internet private circuits to route ER customer "DMZ" networks using specific Border Gateway Protocol (BGP)/AS peering as a trust boundary for application routing and associated policy enforcement.
-Azure Government maintains a FedRAMP High P-ATO issued by the FedRAMP Joint Authorization Board (JAB), and DoD SRG IL4 and IL5 provisional authorizations.
+Azure Government maintains a FedRAMP High provisional authorization to operate (P-ATO) issued by the FedRAMP Joint Authorization Board (JAB), and DoD SRG IL4 and IL5 provisional authorizations.
### Tenant isolation
-Separation between customers/tenants is an essential security mechanism for the entire Azure Government multi-tenant cloud platform. Azure Government provides baseline per-customer or tenant isolation controls including isolation of Hypervisor, Root OS, and Guest VMs, isolation of Fabric Controllers, packet filtering, and VLAN isolation. For more information, see [compute isolation](./azure-secure-isolation-guidance.md#compute-isolation).
+Separation between customers/tenants is an essential security mechanism for the entire Azure Government multi-tenant cloud platform. Azure Government provides baseline per-customer or tenant isolation controls including isolation of Hypervisor, Root OS, and Guest VMs, isolation of Fabric Controllers, packet filtering, and VLAN isolation. For more information, see [Compute isolation](./azure-secure-isolation-guidance.md#compute-isolation).
-Customer/tenants can manage their isolation posture to meet individual requirements through network access control and segregation through virtual machines, virtual networks, VLAN isolation, ACLs, load balancers, and IP filters. Additionally, customers/tenants can further manage isolation levels for their resources across subscriptions, resource groups, virtual networks, and subnets. The customer/tenant logical isolation controls help prevent one tenant from interfering with the operations of any other customer/tenant.
+Customer/tenants can manage their isolation posture to meet individual requirements through network access control and segregation through virtual machines, virtual networks, VLAN isolation, ACLs, load balancers, and IP filters. Additionally, customers/tenants can further manage isolation levels for their resources across subscriptions, resource groups, virtual networks, and subnets. The customer/tenant logical isolation controls help prevent one tenant from interfering with the operations of any other customer/tenant.
## Screening
-All Azure and Azure Government employees in the United States are subject to Microsoft background checks, as outlined in the table below. Personnel with the ability to access customer data for troubleshooting purposes in Azure Government are additionally subject to the verification of U.S. citizenship and extra screening requirements where appropriate.
+All Azure and Azure Government employees in the United States are subject to Microsoft background checks, as outlined in the table below. Personnel with the ability to access customer data for troubleshooting purposes in Azure Government are additionally subject to the verification of U.S. citizenship and extra screening requirements where appropriate.
-We are now screening all our operators at a Tier 3 Investigation (formerly National Agency Check with Law and Credit, NACLC) as defined in DoD SRG [Section 5.6.2.2](https://dl.dod.cyber.mil/wp-content/uploads/cloud/SRG/https://docsupdatetracker.net/index.html#5.6PhysicalFacilitiesandPersonnelRequirements):
+We are now screening all our operators at a Tier 3 Investigation (formerly National Agency Check with Law and Credit, NACLC) as defined in the DoD SRG [Section 5.6.2.2](https://dl.dod.cyber.mil/wp-content/uploads/cloud/SRG/https://docsupdatetracker.net/index.html#5.6PhysicalFacilitiesandPersonnelRequirements):
> [!NOTE]
> The minimum background investigation required for CSP personnel having access to Level 4 and 5 information based on a "noncritical-sensitive" designation (e.g., DoD's ADP-2) is a Tier 3 Investigation (for "noncritical-sensitive" contractors), or a Moderate Risk Background Investigation (MBI) for a "moderate risk" position designation.
For Azure operations personnel, the following access principles apply:
- Access is just-in-time (JIT), and is granted on a per-incident basis or for a specific maintenance event, and for a limited duration.
- Access is rule-based, with defined roles that are only assigned the permissions required for troubleshooting.
-Screening standards include the validation of US citizenship of all Microsoft support and operational staff before access is granted to Azure Government-hosted systems. Support personnel who need to transfer data use the secure capabilities within Azure Government. Secure data transfer requires a separate set of authentication credentials to gain access.
+Screening standards include the validation of US citizenship of all Microsoft support and operational staff before access is granted to Azure Government-hosted systems. Support personnel who need to transfer data use the secure capabilities within Azure Government. Secure data transfer requires a separate set of authentication credentials to gain access.
+
+## Restrictions on insider access
+
+Insider threat is characterized as the potential for back-door connections and cloud service provider (CSP) privileged administrator access to a customer's systems and data. Microsoft provides strong [customer commitments](https://www.microsoft.com/trust-center/privacy/data-access) regarding who can access customer data and on what terms. Access to customer data by Microsoft operations and support personnel is **denied by default**. Access to customer data is not needed to operate Azure. Moreover, for most support scenarios involving customer troubleshooting tickets, access to customer data is not needed.
+
+No default access rights and Just-in-Time (JIT) access provisions greatly reduce the risks associated with traditional on-premises administrator elevated access rights that typically persist throughout the duration of employment. Microsoft makes it considerably more difficult for malicious insiders to tamper with customer applications and data. The same access control restrictions and processes are imposed on all Microsoft engineers, including both full-time employees and subprocessors/vendors. The following controls are in place to restrict insider access to customer data:
+
+- Internal Microsoft controls that prevent access to production systems unless it is authorized through the **Just-in-Time (JIT)** privileged access management system, as described in this section.
+- Enforcement of **Customer Lockbox** that puts customers in charge of approving insider access in support and troubleshooting scenarios, as described in this section. For most support scenarios, access to customer data is not required.
+- **Data encryption** with option for customer-managed encryption keys ΓÇô encrypted data is accessible only by entities who are in possession of the key, as described previously.
+- **Customer monitoring** of external access to their provisioned Azure resources, which includes security alerts as described in the next section.
+
+Moreover, all Azure and Azure Government employees in the United States are subject to Microsoft background checks, as described in the previous section.
+
+### Access control requirements
+
+Microsoft takes strong measures to protect customer data from inappropriate access or use by unauthorized persons. Microsoft engineers (including full-time employees and subprocessors/vendors) [do not have default access](https://www.microsoft.com/trust-center/privacy/data-access) to customer data in the cloud. Instead, they are granted access, under management oversight, only when necessary. Using the [restricted access workflow](https://www.youtube.com/watch?v=lwjPGtGGe84&feature=youtu.be&t=25m), access to customer data is carefully controlled, logged, and revoked when it is no longer needed. For example, access to customer data may be required to resolve customer-initiated troubleshooting requests. The access control requirements are [established by the following policy](../security/fundamentals/protection-customer-data.md):
+
+- No access to customer data, by default.
+- No user or administrator accounts on customer virtual machines (VMs).
+- Grant the least privilege that is required to complete the task; audit and log all access requests.
+
+Microsoft engineers can be granted access to customer data using temporary credentials via **Just-in-Time (JIT)** access. There must be an incident logged in the Azure Incident Management system that describes the reason for access, the approval record, and what data was accessed. This approach ensures that there is appropriate oversight for all access to customer data and that all JIT actions (consent and access) are logged for audit. Evidence that procedures have been established for granting temporary access for Azure personnel to customer data and applications upon appropriate approval for customer support or incident handling purposes is available from the Azure [SOC 2 Type 2 attestation report](https://aka.ms/azuresoc2auditreport) produced by an independent third-party auditing firm.
+
+JIT access works with multi-factor authentication that requires Microsoft engineers to use a smartcard to confirm their identity. All access to production systems is performed using Secure Admin Workstations (SAWs) that are consistent with published guidance on [securing privileged access](/security/compass/overview). Use of SAWs for access to production systems is required by Microsoft policy, and compliance with this policy is closely monitored. These workstations use a fixed image with all software fully managed: only select activities are allowed, and users cannot accidentally circumvent the SAW design because they do not have admin privileges on these machines. Access is permitted only with a smartcard, and access to each SAW is limited to a specific set of users.
+
+### Customer Lockbox
+
+[Customer Lockbox for Azure](../security/fundamentals/customer-lockbox-overview.md) is a service that provides customers with the capability to control how a Microsoft engineer accesses their data. As part of the support workflow, a Microsoft engineer may require elevated access to customer data. Customer Lockbox puts the customer in charge of that decision by enabling the customer to approve or deny such elevated requests. Customer Lockbox is an extension of the JIT workflow and comes with full audit logging enabled. Customer Lockbox capability is not required for support cases that do not involve access to customer data. For most support scenarios, access to customer data is not needed and the workflow should not require Customer Lockbox. Microsoft engineers rely heavily on logs to maintain Azure services and provide customer support.
+
+Customer Lockbox is available to all customers who have an Azure support plan with a minimum level of Developer. You can enable Customer Lockbox from the [Administration module](https://aka.ms/customerlockbox/administration) in the Customer Lockbox blade. A Microsoft engineer will initiate a Customer Lockbox request if this action is needed to progress a customer-initiated support ticket. Customer Lockbox is available to customers from all Azure public regions.
+
+### Guest VM memory crash dumps
+
+On each Azure node, there is a Hypervisor that runs directly over the hardware and divides the node into a variable number of Guest virtual machines (VMs), as described in [Compute isolation](../security/fundamentals/isolation-choices.md#compute-isolation). Each node also has one special Root VM, which runs the Host OS.
+
+When a Guest VM (also known as customer VM) crashes, customer data may be contained inside a memory dump file on the Guest VM. **By default, Microsoft engineers do not have access to Guest VMs and cannot review crash dumps on Guest VMs without the customer's approval.** The same process involving explicit customer authorization is used to control access to Guest VM crash dumps should the customer request an investigation of their VM crash. As described previously, access is gated by the JIT privileged access management system and Customer Lockbox so that all actions are logged and audited. The primary forcing function for deleting the memory dumps from Guest VMs is the routine process of VM reimaging that typically occurs at least every two months.
+
+### Data deletion, retention, and destruction
+
+Customers are [always in control of their customer data](https://www.microsoft.com/trust-center/privacy/data-management) in Azure. They can access, extract, and delete their customer data stored in Azure at will. When a customer terminates their Azure subscription, Microsoft takes the necessary steps to ensure that the customer continues to own their customer data. A common customer concern upon data deletion or subscription termination is whether another customer or Azure administrator can access their deleted data. For more information on how data deletion, retention, and destruction are implemented in Azure, see our online documentation:
+
+- [Data deletion](./azure-secure-isolation-guidance.md#data-deletion)
+- [Data retention](./azure-secure-isolation-guidance.md#data-retention)
+- [Data destruction](./azure-secure-isolation-guidance.md#data-destruction)
+
+## Customer monitoring of Azure resources
+
+Listed below are essential Azure services that customers can use to gain in-depth insight into their provisioned Azure resources and get alerted about suspicious activity, including outside attacks aimed at their applications and data. For a complete list, see the Azure service directory sections for [Management + Governance](https://azure.microsoft.com/services/#management-tools), [Networking](https://azure.microsoft.com/services/#networking), and [Security](https://azure.microsoft.com/services/#security). Moreover, the [Azure Security Benchmark](../security/benchmarks/index.yml) provides security recommendations and implementation details to help customers improve the security posture with respect to Azure resources.
+
+**[Azure Security Center](../security-center/index.yml)** provides unified security management and advanced threat protection across hybrid cloud workloads. It is an essential service for customers to limit their exposure to threats, protect cloud resources, [respond to incidents](../security-center/security-center-alerts-overview.md), and improve their regulatory compliance posture.
+
+With Azure Security Center, customers can:
+
+- Monitor security across on-premises and cloud workloads.
+- Apply advanced analytics and threat intelligence to detect attacks.
+- Use access and application controls to block malicious activity.
+- Find and fix vulnerabilities before they can be exploited.
+- Simplify investigation when responding to threats.
+- Apply policy to ensure compliance with security standards.
+
+To assist customers with Azure Security Center usage, Microsoft has published extensive [online documentation](../security-center/index.yml) and numerous blog posts covering specific security topics:
+
+- [How Azure Security Center detects a Bitcoin mining attack](https://azure.microsoft.com/blog/how-azure-security-center-detects-a-bitcoin-mining-attack/)
+- [How Azure Security Center detects DDoS attack using cyber threat intelligence](https://azure.microsoft.com/blog/how-azure-security-center-detects-ddos-attack-using-cyber-threat-intelligence/)
+- [How Azure Security Center aids in detecting good applications being used maliciously](https://azure.microsoft.com/blog/how-azure-security-center-aids-in-detecting-good-applications-being-used-maliciously/)
+- [How Azure Security Center unveils suspicious PowerShell attack](https://azure.microsoft.com/blog/how-azure-security-center-unveils-suspicious-powershell-attack/)
+- [How Azure Security Center helps reveal a cyber attack](https://azure.microsoft.com/blog/how-azure-security-center-helps-reveal-a-cyberattack/)
+- [How Azure Security Center helps analyze attacks using Investigation and Log Search](https://azure.microsoft.com/blog/how-azure-security-center-helps-analyze-attacks-using-investigation-and-log-search/)
+- [Azure Security Center adds context alerts to aid threat investigation](https://azure.microsoft.com/blog/azure-security-center-adds-context-alerts-to-aid-threat-investigation/)
+- [How Azure Security Center automates the detection of cyber attack](https://azure.microsoft.com/blog/how-azure-security-center-automates-the-detection-of-cyber-attack/)
+- [Heuristic DNS detections in Azure Security Center](https://azure.microsoft.com/blog/heuristic-dns-detections-in-azure-security-center/)
+- [Detect the latest ransomware threat (Bad Rabbit) with Azure Security Center](https://azure.microsoft.com/blog/detect-the-latest-ransomware-threat-aka-bad-rabbit-with-azure-security-center/)
+- [Petya ransomware prevention & detection in Azure Security Center](https://azure.microsoft.com/blog/petya-ransomware-prevention-detection-in-azure-security-center/)
+- [Detecting in-memory attacks with Sysmon and Azure Security Center](https://azure.microsoft.com/blog/detecting-in-memory-attacks-with-sysmon-and-azure-security-center/)
+- [How Security Center and Log Analytics can be used for threat hunting](https://azure.microsoft.com/blog/ways-to-use-azure-security-center-log-analytics-for-threat-hunting/)
+- [How Azure Security Center helps detect attacks against your Linux machines](https://azure.microsoft.com/blog/how-azure-security-center-helps-detect-attacks-against-your-linux-machines/)
+- [Use Azure Security Center to detect when compromised Linux machines attack](https://azure.microsoft.com/blog/leverage-azure-security-center-to-detect-when-compromised-linux-machines-attack/)
+
+**[Azure Monitor](../azure-monitor/overview.md)** helps customers maximize the availability and performance of applications by delivering a comprehensive solution for collecting, analyzing, and acting on telemetry from both cloud and on-premises environments. It helps customers understand how their applications are performing and proactively identifies issues affecting deployed applications and resources they depend on. Azure Monitor integrates the capabilities of Log Analytics and [Application Insights](../azure-monitor/app/app-insights-overview.md) that were previously branded as standalone services.
+
+Azure Monitor collects data from each of the following tiers:
+
+- **Application monitoring data:** Data about the performance and functionality of the code customers have written, regardless of its platform.
+- **Guest OS monitoring data:** Data about the operating system on which the customer application is running. The application could be running in Azure, another cloud, or on-premises.
+- **Azure resource monitoring data:** Data about the operation of an Azure resource.
+- **Azure subscription monitoring data:** Data about the operation and management of an Azure subscription and data about the health and operation of Azure itself.
+- **Azure tenant monitoring data:** Data about the operation of tenant-level Azure services, such as Azure Active Directory.
+
+With Azure Monitor, customers can get a 360-degree view of their applications, infrastructure, and network with advanced analytics, dashboards, and visualization maps. Azure Monitor provides intelligent insights and enables better decisions with AI. Customers can analyze, correlate, and monitor data from various sources using a powerful query language and built-in machine learning constructs. Moreover, Azure Monitor provides out-of-the-box integration with popular DevOps, IT Service Management (ITSM), and Security Information and Event Management (SIEM) tools.
+
+**[Azure Policy](../governance/policy/overview.md)** enables effective governance of Azure resources by creating, assigning, and managing policies. These policies enforce various rules over provisioned Azure resources to keep them compliant with specific customer corporate security and privacy standards. For example, one of the built-in policies for Allowed Locations can be used to restrict available locations for new resources to enforce a customer's geo-compliance requirements, as sketched below. Azure Policy provides a comprehensive compliance view of all provisioned resources and enables cloud policy management and security at scale.
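+
+As a rough sketch of the concept (not the exact built-in definition, and the location list is a placeholder), an allowed-locations policy rule takes the following shape:
+
+```json
+{
+  "if": {
+    "not": {
+      "field": "location",
+      "in": ["usgovvirginia", "usgovtexas"]
+    }
+  },
+  "then": {
+    "effect": "deny"
+  }
+}
+```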
+
+**[Azure Firewall](../firewall/overview.md)** provides a managed, cloud-based network security service that protects customer Azure Virtual Network resources. It is a fully stateful firewall as a service with built-in high availability that integrates with Azure Monitor for logging and analytics.
+
+**[Network Watcher](../network-watcher/network-watcher-monitoring-overview.md)** allows customers to monitor, diagnose, and gain insights into their Azure virtual network performance and health. With Network Security Group flow logs, customers can gain a deeper understanding of their network traffic patterns and collect data for compliance, auditing, and monitoring of their network security profile. Packet capture allows customers to capture traffic to and from their Virtual Machines to diagnose network anomalies and gather network statistics, including information on network intrusions, as sketched below.
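+
+As an illustrative sketch (resource names are placeholders), a packet capture can be started from the Azure CLI:
+
+```azurecli
+# Start a packet capture on a VM, storing results in a storage account.
+az network watcher packet-capture create \
+  --resource-group myResourceGroup \
+  --vm myVM \
+  --name myPacketCapture \
+  --storage-account mystorageaccount
+```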
+
+**[Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md)** provides extensive Distributed Denial of Service (DDoS) mitigation capability to help customers protect their Azure resources from attacks. Always-on traffic monitoring provides near real-time detection of a DDoS attack, with automatic mitigation of the attack as soon as it is detected. In combination with Web Application Firewall, DDoS Protection defends against a comprehensive set of network and application layer attacks, including SQL injection, cross-site scripting, and session hijacking. Azure DDoS Protection is integrated with Azure Monitor for analytics and insight.
+
+**[Azure Sentinel](../sentinel/overview.md)** is a cloud-native SIEM platform that uses built-in AI to help customers quickly analyze large volumes of data across an enterprise. Azure Sentinel aggregates data from various sources, including users, applications, servers, and devices running on-premises or in any cloud, letting customers reason over millions of records in a few seconds. With Azure Sentinel, customers can:
+
+- **Collect** data at cloud scale across all users, devices, applications, and infrastructure, both on-premises and in multiple clouds.
+- **Detect** previously undetected threats and minimize false positives using analytics and unparalleled threat intelligence from Microsoft.
+- **Investigate** threats with AI and hunt suspicious activities at scale, tapping into decades of cybersecurity work at Microsoft.
+- **Respond** to incidents rapidly with built-in orchestration and automation of common tasks.
+
+**[Azure Advisor](../advisor/advisor-overview.md)** helps customers follow best practices to optimize their Azure deployments. It analyzes resource configurations and usage telemetry and then recommends solutions that can help customers improve the cost effectiveness, performance, high availability, and security of Azure resources.
+
+**[Azure Blueprints](../governance/blueprints/overview.md)** is a service that helps customers deploy and update cloud environments in a repeatable manner using composable artifacts such as Azure Resource Manager templates to provision resources, role-based access controls, and policies that adhere to an organization's standards, patterns, and requirements. Customers can use pre-defined standard blueprints and customize these solutions to meet specific requirements, including data encryption, host and service configuration, network and connectivity configuration, identity, and other security aspects of deployed resources. The overarching goal of Azure Blueprints is to help automate compliance and cybersecurity risk management in cloud environments. For more information on Azure Blueprints, including production-ready blueprint solutions for ISO 27001, NIST SP 800-53, PCI DSS, HITRUST, and other standards, see the [Azure Blueprint samples](../governance/blueprints/samples/index.md).
## Next steps For supplemental information and updates, subscribe to the
-<a href="https://devblogs.microsoft.com/azuregov/">Microsoft Azure Government Blog. </a>
+<a href="https://devblogs.microsoft.com/azuregov/">Microsoft Azure Government Blog</a>.
azure-maps Tutorial Prioritized Routes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-prioritized-routes.md
The following steps show you how to create and display the Map control in a web
}); ```
- In the map `ready` event handler, the traffic flow setting on the map is set to `relative`, which is the speed of the road relative to free-flow. For more traffic options, see [TrafficOptions interface](/javascript/api/azure-maps-control/atlas.trafficoptions?preserve-view=false&view=azure-maps-typescript-latest).
+ In the map `ready` event handler, the traffic flow setting on the map is set to `relative`, which is the speed of the road relative to free-flow. For more traffic options, see [TrafficOptions interface](/javascript/api/azure-maps-control/atlas.trafficoptions).
2. Save the **MapTruckRoute.html** file and refresh the page in your browser. If you zoom into any city, like Los Angeles, you will see that the streets display with current traffic flow data.
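   As a minimal sketch of that traffic setting (the `map` variable is assumed from the tutorial, and the `incidents` value shown is an illustrative choice):

```javascript
// In the map 'ready' event handler, show traffic flow relative to free-flow speed.
map.events.add('ready', function () {
    map.setTraffic({
        flow: 'relative',  // road speed relative to free-flow
        incidents: false   // illustrative: omit traffic incident icons
    });
});
```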
In this tutorial, two routes will be calculated and rendered on the map. The fir
This code creates two [GeoJSON Point objects](https://en.wikipedia.org/wiki/GeoJSON) to represent start and end points, which are then added to the data source.
- The last block of code sets the camera view using the latitude and longitude of the start and end points. The start and end points are added to the data source. The bounding box for the start and end points is calculated using the `atlas.data.BoundingBox.fromData` function. This bounding box is used to set the map cameras view over the entire route using the `map.setCamera` function. Padding is added to compensate for the pixel dimensions of the symbol icons. For more information about the Map control's setCamera property, see [setCamera(CameraOptions | CameraBoundsOptions & AnimationOptions)](/javascript/api/azure-maps-control/atlas.map?view=azure-maps-typescript-latest#setcamera-cameraoptionscameraboundsoptionsanimationoptions-&preserve-view=false) property.
+ The last block of code sets the camera view using the latitude and longitude of the start and end points. The start and end points are added to the data source. The bounding box for the start and end points is calculated using the `atlas.data.BoundingBox.fromData` function. This bounding box is used to set the map cameras view over the entire route using the `map.setCamera` function. Padding is added to compensate for the pixel dimensions of the symbol icons. For more information about the Map control's setCamera property, see [setCamera(CameraOptions | CameraBoundsOptions & AnimationOptions)](/javascript/api/azure-maps-control/atlas.map#setcamera-cameraoptionscameraboundsoptionsanimationoptions-) property.
3. Save **TruckRoute.html** and refresh your browser. The map is now centered over Seattle. The teardrop blue pin marks the start point. The round blue pin marks the end point.
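   The camera logic described above can be sketched as follows (coordinates are illustrative, and `datasource` and `map` are assumed from the tutorial):

```javascript
// Create GeoJSON points for the start and end of the route and add them to the data source.
var startPoint = new atlas.data.Point([-122.33028, 47.60323]); // illustrative coordinates
var endPoint = new atlas.data.Point([-122.124, 47.67491]);
datasource.add([startPoint, endPoint]);

// Fit the camera to a bounding box around both points, padding for the icon pixel size.
map.setCamera({
    bounds: atlas.data.BoundingBox.fromData([startPoint, endPoint]),
    padding: 80
});
```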
azure-maps Tutorial Route Location https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-route-location.md
In this tutorial, we'll render the route using a line layer. The start and end p
This code creates two [GeoJSON Point objects](https://en.wikipedia.org/wiki/GeoJSON) to represent start and end points, which are then added to the data source.
- The last block of code sets the camera view using the latitude and longitude of the start and end points. The start and end points are added to the data source. The bounding box for the start and end points is calculated using the `atlas.data.BoundingBox.fromData` function. This bounding box is used to set the map cameras view over the entire route using the `map.setCamera` function. Padding is added to compensate for the pixel dimensions of the symbol icons. For more information about the Map control's setCamera property, see [setCamera(CameraOptions | CameraBoundsOptions & AnimationOptions)](/javascript/api/azure-maps-control/atlas.map?view=azure-maps-typescript-latest#setcamera-cameraoptionscameraboundsoptionsanimationoptions-&preserve-view=false) property.
+ The last block of code sets the camera view using the latitude and longitude of the start and end points. The start and end points are added to the data source. The bounding box for the start and end points is calculated using the `atlas.data.BoundingBox.fromData` function. This bounding box is used to set the map cameras view over the entire route using the `map.setCamera` function. Padding is added to compensate for the pixel dimensions of the symbol icons. For more information about the Map control's setCamera property, see [setCamera(CameraOptions | CameraBoundsOptions & AnimationOptions)](/javascript/api/azure-maps-control/atlas.map#setcamera-cameraoptionscameraboundsoptionsanimationoptions-) property.
3. Save **MapRoute.html** and refresh your browser. The map is now centered over Seattle. The teardrop blue pin marks the start point. The round blue pin marks the end point.
azure-monitor Diagnostics Extension Schema Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/diagnostics-extension-schema-windows.md
The *PublicConfig* and *PrivateConfig* are separated because in most JSON usage
<WadCfg> <DiagnosticMonitorConfiguration overallQuotaInMB="10000">
- <PerformanceCounters scheduledTransferPeriod="PT1M", sinks="AzureMonitorSink">
+ <PerformanceCounters scheduledTransferPeriod="PT1M" sinks="AzureMonitorSink">
<PerformanceCounterConfiguration counterSpecifier="\Processor(_Total)\% Processor Time" sampleRate="PT1M" unit="percent" /> </PerformanceCounters>
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-config.md
These are the valid `level` values that you can specify in the `applicationinsig
| ALL | ALL | ALL | ALL | > [!NOTE]
-> If an exception is passed to the logger, then the log message (and exception)
+> If an exception object is passed to the logger, then the log message (and exception object details)
> will show up in the Azure portal under the `exceptions` table instead of the `traces` table. ## Auto-collected Micrometer metrics (including Spring Boot Actuator metrics)
azure-monitor Java Standalone Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-troubleshoot.md
and second also meets the Application Insights configured threshold.
The best way to know if a particular logging statement meets the logging frameworks' configured threshold is to confirm that it is showing up in your normal application log (e.g. file or console).
-Also note that if an exception is passed to the logger, then the log message (and exception)
+Also note that if an exception object is passed to the logger, then the log message (and exception object details)
will show up in the Azure portal under the `exceptions` table instead of the `traces` table. See the [auto-collected logging configuration](./java-standalone-config.md#auto-collected-logging) for more details.
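As an illustrative sketch of this behavior (the class and messages are hypothetical), compare passing the exception object with passing only its message:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {
    private static final Logger logger = LoggerFactory.getLogger(OrderService.class);

    void process() {
        try {
            throw new IllegalStateException("simulated failure");
        } catch (Exception ex) {
            // Exception object passed: the record lands in the 'exceptions' table.
            logger.error("Order processing failed", ex);

            // Message only: the record lands in the 'traces' table.
            logger.error("Order processing failed: " + ex.getMessage());
        }
    }
}
```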
azure-monitor Javascript Click Analytics Plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/javascript-click-analytics-plugin.md
appInsights.loadAppInsights();
- Check out the [GitHub Repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [NPM Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Auto-Collection Plugin. - Use [Events Analysis in Usage Experience](usage-segmentation.md) to analyze top clicks and slice by available dimensions. - Find click data under content field within customDimensions attribute in CustomEvents table in [Log Analytics](../logs/log-analytics-tutorial.md#write-a-query). See [Sample App](https://go.microsoft.com/fwlink/?linkid=2152871) for additional guidance.-- Build a [Workbook](../visualize/workbooks-overview.md) to create custom visualizations of click data.
+- Build a [Workbook](../visualize/workbooks-overview.md) or [export to Power BI](../logs/log-powerbi.md#integrating-queries) to create custom visualizations of click data.
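+
+A hedged sketch of such a query against the CustomEvents table (table and field names as described above) might be:
+
+```kusto
+customEvents
+| extend content = tostring(customDimensions.content)
+| where isnotempty(content)
+| summarize clicks = count() by content
+| top 10 by clicks
+```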
azure-monitor Javascript React Plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/javascript-react-plugin.md
It will operate like the higher-order component, but respond to Hooks life-cycle
### `useTrackEvent`
-The `useTrackEvent` Hook is used to track any custom event that an application may need to track, such as a button click or other API call. It takes two arguments, the first is the Application Insights instance (which can be obtained from the `useAppInsightsContext` Hook), and a name for the event.
+The `useTrackEvent` Hook is used to track any custom event that an application may need to track, such as a button click or other API call. It takes four arguments:
+- The Application Insights instance (which can be obtained from the `useAppInsightsContext` Hook).
+- A name for the event.
+- An event data object that encapsulates the changes that have to be tracked.
+- An optional `skipFirstRun` flag to skip calling `trackEvent` on initialization. The default value is `true`.
```javascript import React, { useState, useEffect } from "react";
azure-monitor Profiler Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/profiler-troubleshooting.md
## <a id="troubleshooting"></a>General troubleshooting
+### Make sure you're using the appropriate Profiler Endpoint
+
+Currently the only regions that require endpoint modifications are [Azure Government](https://docs.microsoft.com/azure/azure-government/compare-azure-government-global-azure#application-insights) and [Azure China](https://docs.microsoft.com/azure/china/resources-developer-guide).
+
+|App Setting | US Government Cloud | China Cloud |
+|||-|
+|ApplicationInsightsProfilerEndpoint | `https://profiler.monitor.azure.us` | `https://profiler.monitor.azure.cn` |
+|ApplicationInsightsEndpoint | `https://dc.applicationinsights.us` | `https://dc.applicationinsights.azure.cn` |
+ ### Profiles are uploaded only if there are requests to your application while Profiler is running Azure Application Insights Profiler collects data for two minutes each hour. It can also collect data when you select the **Profile Now** button in the **Configure Application Insights Profiler** pane.
azure-monitor Profiler https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/profiler.md
Here are the settings needed to enable the profiler:
You can set these values using [Azure Resource Manager Templates](./azure-web-apps.md#app-service-application-settings-with-azure-resource-manager), [Azure PowerShell](/powershell/module/az.websites/set-azwebapp), or the [Azure CLI](/cli/azure/webapp/config/appsettings).
-### Enabling Profiler for other clouds manually
+## Enable Profiler for other clouds
-If you want to enable the profiler for other clouds, you can use the below app settings.
+Currently the only regions that require endpoint modifications are [Azure Government](https://docs.microsoft.com/azure/azure-government/compare-azure-government-global-azure#application-insights) and [Azure China](https://docs.microsoft.com/azure/china/resources-developer-guide).
-|App Setting | US Government Values| China Cloud |
+|App Setting | US Government Cloud | China Cloud |
|||-|
|ApplicationInsightsProfilerEndpoint | `https://profiler.monitor.azure.us` | `https://profiler.monitor.azure.cn` |
|ApplicationInsightsEndpoint | `https://dc.applicationinsights.us` | `https://dc.applicationinsights.azure.cn` |
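For example, a hedged sketch of applying the US Government Cloud values with the Azure CLI (resource names are placeholders):

```azurecli
az webapp config appsettings set \
  --resource-group myResourceGroup \
  --name myWebApp \
  --settings ApplicationInsightsProfilerEndpoint=https://profiler.monitor.azure.us \
             ApplicationInsightsEndpoint=https://dc.applicationinsights.us
```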
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/sdk-connection-string.md
Connection string consists of a list of settings represented as key-value pairs
Any service can be explicitly overridden in the connection string. - `IngestionEndpoint` (ex: `https://dc.applicationinsights.azure.com`) - `LiveEndpoint` (ex: `https://live.applicationinsights.azure.com`)
- - `ProfilerEndpoint` (ex: `https://profiler.applicationinsights.azure.com`)
- - `SnapshotEndpoint` (ex: `https://snapshot.applicationinsights.azure.com`)
+ - `ProfilerEndpoint` (ex: `https://profiler.monitor.azure.com`)
+ - `SnapshotEndpoint` (ex: `https://snapshot.monitor.azure.com`)
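+
+For illustration only, a connection string that overrides every service endpoint could look like the following (the instrumentation key is a placeholder):
+
+```
+InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://dc.applicationinsights.azure.com;LiveEndpoint=https://live.applicationinsights.azure.com;ProfilerEndpoint=https://profiler.monitor.azure.com;SnapshotEndpoint=https://snapshot.monitor.azure.com
+```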
#### Endpoint schema
azure-monitor Separate Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/separate-resources.md
Each Application Insights resource comes with metrics that are available out-of-
- If it is okay to have an API key have the same access to data from all components. And 10 API keys are sufficient for the needs across all of them. - If it is okay to have the same smart detection and work item integration settings across all roles.
+> [!NOTE]
+> If you want to consolidate multiple Application Insights resources, you may point your existing application components to a new, consolidated Application Insights resource. The telemetry stored in your old resource will not be transferred to the new resource, so only delete the old resource when you have enough telemetry in the new resource for business continuity.
+ ### Other things to keep in mind - You may need to add custom code to ensure that meaningful values are set into the [Cloud_RoleName](./app-map.md?tabs=net#set-or-override-cloud-role-name) attribute. Without meaningful values set for this attribute, *NONE* of the portal experiences will work.
If you use Azure DevOps, you can [get an annotation marker](../../azure-monitor/
## Next steps * [Shared resources for multiple roles](../../azure-monitor/app/app-map.md)
-* [Create a Telemetry Initializer to distinguish A|B variants](../../azure-monitor/app/api-filtering-sampling.md#add-properties)
+* [Create a Telemetry Initializer to distinguish A|B variants](../../azure-monitor/app/api-filtering-sampling.md#add-properties)
azure-monitor Snapshot Debugger Appservice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/snapshot-debugger-appservice.md
Snapshot Debugger currently supports ASP.NET and ASP.NET Core apps that are running on Azure App Service on Windows service plans.
-We recommend you run your application on the Basic service tier or higher when using snapshot debugger.
+We recommend you run your application on the Basic service tier, or higher, when using snapshot debugger.
+ For most applications, the Free and Shared service tiers don't have enough memory or disk space to save snapshots. ## <a id="installation"></a> Enable Snapshot Debugger
Once you've deployed an app, follow the steps below to enable the snapshot debug
![App Setting for Snapshot Debugger][snapshot-debugger-app-setting]
+## Enable Snapshot Debugger for other clouds
+
+Currently the only regions that require endpoint modifications are [Azure Government](https://docs.microsoft.com/azure/azure-government/compare-azure-government-global-azure#application-insights) and [Azure China](https://docs.microsoft.com/azure/china/resources-developer-guide). The endpoints are configured through the Application Insights connection string.
+
+|Connection String Property | US Government Cloud | China Cloud |
+|||-|
+|SnapshotEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
+
+For more information about other connection overrides, see [Application Insights documentation](https://docs.microsoft.com/azure/azure-monitor/app/sdk-connection-string?tabs=net#connection-string-with-explicit-endpoint-overrides).
+ ## Disable Snapshot Debugger Follow the same steps as for **Enable Snapshot Debugger**, but switch both switches for Snapshot Debugger to **Off**.
azure-monitor Snapshot Debugger Function App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/snapshot-debugger-function-app.md
Host file
} ```
+## Enable Snapshot Debugger for other clouds
+
+Currently the only regions that require endpoint modifications are [Azure Government](https://docs.microsoft.com/azure/azure-government/compare-azure-government-global-azure#application-insights) and [Azure China](https://docs.microsoft.com/azure/china/resources-developer-guide).
+
+Below is an example of the `host.json` updated with the US Government Cloud agent endpoint:
+```json
+{
+ "version": "2.0",
+ "logging": {
+ "applicationInsights": {
+ "samplingExcludedTypes": "Request",
+ "samplingSettings": {
+ "isEnabled": true
+ },
+ "snapshotConfiguration": {
+ "isEnabled": true,
+ "agentEndpoint": "https://snapshot.monitor.azure.us"
+ }
+ }
+ }
+}
+```
+
+Below are the supported overrides of the Snapshot Debugger agent endpoint:
+
+|Property | US Government Cloud | China Cloud |
+|||-|
+|AgentEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
## Disable Snapshot Debugger To disable Snapshot Debugger in your Function app, update your `host.json` file by setting the `snapshotConfiguration.isEnabled` property to `false`.
azure-monitor Snapshot Debugger Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/snapshot-debugger-troubleshoot.md
If you enabled Application Insights Snapshot Debugger for your application, but
There can be many different reasons why snapshots aren't generated. You can start by running the snapshot health check to identify some of the possible common causes.
+## Make sure you're using the appropriate Snapshot Debugger Endpoint
+
+Currently the only regions that require endpoint modifications are [Azure Government](https://docs.microsoft.com/azure/azure-government/compare-azure-government-global-azure#application-insights) and [Azure China](https://docs.microsoft.com/azure/china/resources-developer-guide).
+
+For App Service and applications using the Application Insights SDK, you have to update the connection string using the supported overrides for Snapshot Debugger as defined below:
+
+|Connection String Property | US Government Cloud | China Cloud |
+|||-|
+|SnapshotEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
+
+For more information about other connection overrides, see [Application Insights documentation](https://docs.microsoft.com/azure/azure-monitor/app/sdk-connection-string?tabs=net#connection-string-with-explicit-endpoint-overrides).
+
+For Function App, you have to update the `host.json` using the supported overrides below:
+
+|Property | US Government Cloud | China Cloud |
+|||-|
+|AgentEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
+
+Below is an example of the `host.json` updated with the US Government Cloud agent endpoint:
+```json
+{
+ "version": "2.0",
+ "logging": {
+ "applicationInsights": {
+ "samplingExcludedTypes": "Request",
+ "samplingSettings": {
+ "isEnabled": true
+ },
+ "snapshotConfiguration": {
+ "isEnabled": true,
+ "agentEndpoint": "https://snapshot.monitor.azure.us"
+ }
+ }
+ }
+}
+```
+ ## Use the snapshot health check Several common problems result in the Open Debug Snapshot not showing up. Using an outdated Snapshot Collector, for example; reaching the daily upload limit; or perhaps the snapshot is just taking a long time to upload. Use the Snapshot Health Check to troubleshoot common problems.
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/snapshot-debugger.md
Snapshot collection is available for:
The following environments are supported: * [Azure App Service](snapshot-debugger-appservice.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Functions](snapshot-debugger-function-app.md?toc=/azure/azure-monitor/toc.json)
* [Azure Cloud Services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running OS family 4 or later * [Azure Service Fabric services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running on Windows Server 2012 R2 or later * [Azure Virtual Machines and virtual machine scale sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running Windows Server 2012 R2 or later
However, in Azure App Services, the Snapshot Collector can deoptimize throwing m
Enable Application Insights Snapshot Debugger for your application: * [Azure App Service](snapshot-debugger-appservice.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Functions](snapshot-debugger-function-app.md?toc=/azure/azure-monitor/toc.json)
* [Azure Cloud Services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) * [Azure Service Fabric services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) * [Azure Virtual Machines and virtual machine scale sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
azure-monitor Container Insights Livedata Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-livedata-overview.md
Title: View Live Data (preview) with Container insights | Microsoft Docs
+ Title: View Live Data with Container insights | Microsoft Docs
description: This article describes the real-time view of Kubernetes logs, events, and pod metrics without using kubectl in Container insights. Previously updated : 12/17/2020 Last updated : 03/04/2021 # How to view Kubernetes logs, events, and pod metrics in real-time
-Container insights includes the Live Data (preview) feature, which is an advanced diagnostic feature allowing you direct access to your Azure Kubernetes Service (AKS) container logs (stdout/stderror), events, and pod metrics. It exposes direct access to `kubectl logs -c`, `kubectl get` events, and `kubectl top pods`. A console pane shows the logs, events, and metrics generated by the container engine to further assist in troubleshooting issues in real-time.
+Container insights includes the Live Data feature, an advanced diagnostic feature that allows you direct access to your Azure Kubernetes Service (AKS) container logs (stdout/stderr), events, and pod metrics. It exposes direct access to `kubectl logs -c`, `kubectl get events`, and `kubectl top pods`. A console pane shows the logs, events, and metrics generated by the container engine to further assist in troubleshooting issues in real-time.
This article provides a detailed overview and helps you understand how to use this feature.
-For help setting up or troubleshooting the Live Data (preview) feature, review our [setup guide](container-insights-livedata-setup.md). This feature directly access the Kubernetes API, and additional information about the authentication model can be found [here](https://kubernetes.io/docs/concepts/overview/kubernetes-api/).
+For help setting up or troubleshooting the Live Data feature, review our [setup guide](container-insights-livedata-setup.md). This feature directly accesses the Kubernetes API; additional information about the authentication model can be found [here](https://kubernetes.io/docs/concepts/overview/kubernetes-api/).
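+
+For orientation, the `kubectl` commands that Live Data mirrors look like the following (pod and container names are placeholders):
+
+```console
+# Stream logs for a specific container in a pod
+kubectl logs mypod -c mycontainer
+
+# List cluster events and show pod metrics
+kubectl get events --all-namespaces
+kubectl top pods
+```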
-## View deployment live logs (preview)
-Use the following procedure to view the live logs for deployments that are part of of AKS clusters that are not monitored by Container insights. If your cluster uses Container insights then use the process below to view the live data for nodes, controllers, containers, and deployments.
+## View AKS resource live logs
+Use the following procedure to view the live logs for pods, deployments, and replica sets with or without Container insights from the AKS resource view.
1. In the Azure portal, browse to the AKS cluster resource group and select your AKS resource. 2. Select **Workloads** in the **Kubernetes resources** section of the menu.
-3. Select a deployment from the **Deployments** tab.
+3. Select a pod, deployment, or replica set from the respective tab.
-4. Select **Live Logs (preview)** from the deployment's menu.
+4. Select **Live Logs** from the resource's menu.
5. Select a pod to start collection of the live data.
You can view real-time log data as they are generated by the container engine fr
3. Select either the **Nodes**, **Controllers**, or **Containers** tab.
-4. Select an object from the performance grid, and on the properties pane found on the right side, select **View live data (preview)** option. If the AKS cluster is configured with single sign-on using Azure AD, you are prompted to authenticate on first use during that browser session. Select your account and complete authentication with Azure.
+4. Select an object from the performance grid, and on the properties pane found on the right side, select the **View live data** option. If the AKS cluster is configured with single sign-on using Azure AD, you are prompted to authenticate on first use during that browser session. Select your account and complete authentication with Azure.
>[!NOTE] >When viewing the data from your Log Analytics workspace by selecting the **View in analytics** option from the properties pane, the log search results will potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers** which may no longer exist. Attempting to search logs for a container which isn't available in `kubectl` will also fail here. Review the [View in analytics](container-insights-log-search.md#search-logs-to-analyze-data) feature to learn more about viewing historical logs, events and metrics.
-After successfully authenticating, the Live Data (preview) console pane will appear below the performance data grid where you can view log data in a continuous stream. If the fetch status indicator shows a green check mark, which is on the far right of the pane, it means data can be retrieved and it begins streaming to your console.
+After successfully authenticating, the Live Data console pane will appear below the performance data grid where you can view log data in a continuous stream. If the fetch status indicator shows a green check mark, which is on the far right of the pane, it means data can be retrieved and it begins streaming to your console.
![Node properties pane view data option](./media/container-insights-livedata-overview/node-properties-pane.png)
The pane title shows the name of the pod the container is grouped with.
## View events
-You can view real-time event data as they are generated by the container engine from the **Nodes**, **Controllers**, **Containers**, and **Deployments (preview)** view when a container, pod, node, ReplicaSet, DaemonSet, job, CronJob or Deployment is selected. To view events, perform the following steps.
+You can view real-time event data as it is generated by the container engine from the **Nodes**, **Controllers**, **Containers**, and **Deployments** view when a container, pod, node, ReplicaSet, DaemonSet, job, CronJob, or Deployment is selected. To view events, perform the following steps.
1. In the Azure portal, browse to the AKS cluster resource group and select your AKS resource. 2. On the AKS cluster dashboard, under **Monitoring** on the left-hand side, choose **Insights**.
-3. Select either the **Nodes**, **Controllers**, **Containers**, or **Deployments (preview)** tab.
+3. Select either the **Nodes**, **Controllers**, **Containers**, or **Deployments** tab.
-4. Select an object from the performance grid, and on the properties pane found on the right side, select **View live data (preview)** option. If the AKS cluster is configured with single sign-on using Azure AD, you are prompted to authenticate on first use during that browser session. Select your account and complete authentication with Azure.
+4. Select an object from the performance grid, and on the properties pane found on the right side, select the **View live data** option. If the AKS cluster is configured with single sign-on using Azure AD, you are prompted to authenticate on first use during that browser session. Select your account and complete authentication with Azure.
>[!NOTE] >When viewing the data from your Log Analytics workspace by selecting the **View in analytics** option from the properties pane, the log search results will potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers** which may no longer exist. Attempting to search logs for a container which isn't available in `kubectl` will also fail here. Review the [View in analytics](container-insights-log-search.md#search-logs-to-analyze-data) feature to learn more about viewing historical logs, events and metrics.
-After successfully authenticating, the Live Data (preview) console pane will appear below the performance data grid. If the fetch status indicator shows a green check mark, which is on the far right of the pane, it means data can be retrieved and it begins streaming to your console.
+After successfully authenticating, the Live Data console pane will appear below the performance data grid. If the fetch status indicator shows a green check mark, which is on the far right of the pane, it means data can be retrieved and it begins streaming to your console.
If the object you selected was a container, select the **Events** option in the pane. If you selected a Node, Pod, or controller, viewing events is automatically selected.
You can view real-time metric data as they are generated by the container engine
3. Select either the **Nodes** or **Controllers** tab.
-4. Select a **Pod** object from the performance grid, and on the properties pane found on the right side, select **View live data (preview)** option. If the AKS cluster is configured with single sign-on using Azure AD, you are prompted to authenticate on first use during that browser session. Select your account and complete authentication with Azure.
+4. Select a **Pod** object from the performance grid, and on the properties pane found on the right side, select the **View live data** option. If the AKS cluster is configured with single sign-on using Azure AD, you are prompted to authenticate on first use during that browser session. Select your account and complete authentication with Azure.
>[!NOTE] >When viewing the data from your Log Analytics workspace by selecting the **View in analytics** option from the properties pane, the log search results will potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers** which may no longer exist. Attempting to search logs for a container which isn't available in `kubectl` will also fail here. Review the [View in analytics](container-insights-log-search.md#search-logs-to-analyze-data) feature to learn more about viewing historical logs, events and metrics.
-After successfully authenticating, the Live Data (preview) console pane will appear below the performance data grid. Metric data is retrieved and begins streaming to your console for presentation in the two charts. The pane title shows the name of the pod the container is grouped with.
+After successfully authenticating, the Live Data console pane will appear below the performance data grid. Metric data is retrieved and begins streaming to your console for presentation in the two charts. The pane title shows the name of the pod the container is grouped with.
![View Pod metrics example](./media/container-insights-livedata-overview/pod-properties-live-metrics.png)
After successfully authenticating, the Live Data (preview) console pane will app
The following sections describe functionality that you can use in the different live data views. ### Search
-The Live Data (preview) feature includes search functionality. In the **Search** field, you can filter results by typing a key word or term and any matching results are highlighted to allow quick review. While viewing events, you can additionally limit the results using the **Filter** pill found to the right of the search bar. Depending on what resource you have selected, the pill lists a Pod, Namespace, or cluster to chose from.
+The Live Data feature includes search functionality. In the **Search** field, you can filter results by typing a keyword or term, and any matching results are highlighted to allow quick review. While viewing events, you can additionally limit the results using the **Filter** pill found to the right of the search bar. Depending on what resource you have selected, the pill lists a Pod, Namespace, or cluster to choose from.
![Live Data console pane filter example](./media/container-insights-livedata-overview/livedata-pane-filter-example.png)
To suspend autoscroll and control the behavior of the pane, allowing you to manu
>We recommend only suspending or pausing autoscroll for a short period of time while troubleshooting an issue. These requests may impact the availability and throttling of the Kubernetes API on your cluster. >[!IMPORTANT]
->No data is stored permanently during operation of this feature. All information captured during the session is deleted when you close your browser or navigate away from it. Data only remains present for visualization inside the five minute window of the metrics feature; any metrics older than five minutes are also deleted. The Live Data (preview) buffer queries within reasonable memory usage limits.
+>No data is stored permanently during operation of this feature. All information captured during the session is deleted when you close your browser or navigate away from it. Data only remains present for visualization inside the five minute window of the metrics feature; any metrics older than five minutes are also deleted. The Live Data feature buffers queries within reasonable memory usage limits.
## Next steps
azure-monitor Container Insights Log Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-log-search.md
Title: How to Query Logs from Container insights | Microsoft Docs description: Container insights collects metrics and log data and this article describes the records and includes sample queries. Previously updated : 06/01/2020 Last updated : 03/03/2021
In the following table, details of records collected by Container insights are p
| Container node inventory | Kube API | `ContainerNodeInventory`| TimeGenerated, Computer, ClassName_s, DockerVersion_s, OperatingSystem_s, Volume_s, Network_s, NodeRole_s, OrchestratorType_s, InstanceID_g, SourceSystem| | Inventory of pods in a Kubernetes cluster | Kube API | `KubePodInventory` | TimeGenerated, Computer, ClusterId, ContainerCreationTimeStamp, PodUid, PodCreationTimeStamp, ContainerRestartCount, PodRestartCount, PodStartTime, ContainerStartTime, ServiceName, ControllerKind, ControllerName, ContainerStatus, ContainerStatusReason, ContainerID, ContainerName, Name, PodLabel, Namespace, PodStatus, ClusterName, PodIp, SourceSystem | | Inventory of nodes part of a Kubernetes cluster | Kube API | `KubeNodeInventory` | TimeGenerated, Computer, ClusterName, ClusterId, LastTransitionTimeReady, Labels, Status, KubeletVersion, KubeProxyVersion, CreationTimeStamp, SourceSystem |
+|Inventory of persistent volumes in a Kubernetes cluster |Kube API |`KubePVInventory` | TimeGenerated, PVName, PVCapacityBytes, PVCName, PVCNamespace, PVStatus, PVAccessModes, PVType, PVTypeInfo, PVStorageClassName, PVCreationTimestamp, ClusterId, ClusterName, _ResourceId, SourceSystem |
| Kubernetes Events | Kube API | `KubeEvents` | TimeGenerated, Computer, ClusterId_s, FirstSeen_t, LastSeen_t, Count_d, ObjectKind_s, Namespace_s, Name_s, Reason_s, Type_s, TimeGenerated_s, SourceComponent_s, ClusterName_s, Message, SourceSystem | | Services in the Kubernetes cluster | Kube API | `KubeServices` | TimeGenerated, ServiceName_s, Namespace_s, SelectorLabels_s, ClusterId_s, ClusterName_s, ClusterIP_s, ServiceType_s, SourceSystem |
-| Performance metrics for nodes part of the Kubernetes cluster | Usage metrics are obtained from cAdvisor and limits from Kube api | Perf &#124; where ObjectName == "K8SNode" | Computer, ObjectName, CounterName &#40;cpuAllocatableNanoCores, memoryAllocatableBytes, cpuCapacityNanoCores, memoryCapacityBytes, memoryRssBytes, cpuUsageNanoCores, memoryWorkingsetBytes, restartTimeEpoch&#41;, CounterValue, TimeGenerated, CounterPath, SourceSystem |
-| Performance metrics for containers part of the Kubernetes cluster | Usage metrics are obtained from cAdvisor and limits from Kube api | Perf &#124; where ObjectName == "K8SContainer" | CounterName &#40; cpuRequestNanoCores, memoryRequestBytes, cpuLimitNanoCores, memoryWorkingSetBytes, restartTimeEpoch, cpuUsageNanoCores, memoryRssBytes&#41;, CounterValue, TimeGenerated, CounterPath, SourceSystem |
+| Performance metrics for nodes part of the Kubernetes cluster | Usage metrics are obtained from cAdvisor and limits from Kube api | `Perf \| where ObjectName == "K8SNode"` | Computer, ObjectName, CounterName &#40;cpuAllocatableNanoCores, memoryAllocatableBytes, cpuCapacityNanoCores, memoryCapacityBytes, memoryRssBytes, cpuUsageNanoCores, memoryWorkingsetBytes, restartTimeEpoch&#41;, CounterValue, TimeGenerated, CounterPath, SourceSystem |
+| Performance metrics for containers part of the Kubernetes cluster | Usage metrics are obtained from cAdvisor and limits from Kube api | `Perf \| where ObjectName == "K8SContainer"` | CounterName &#40;cpuRequestNanoCores, memoryRequestBytes, cpuLimitNanoCores, memoryWorkingSetBytes, restartTimeEpoch, cpuUsageNanoCores, memoryRssBytes&#41;, CounterValue, TimeGenerated, CounterPath, SourceSystem |
| Custom Metrics ||`InsightsMetrics` | Computer, Name, Namespace, Origin, SourceSystem, Tags<sup>1</sup>, TimeGenerated, Type, Val, _ResourceId |

<sup>1</sup> The *Tags* property represents [multiple dimensions](../essentials/data-platform-metrics.md#multi-dimensional-metrics) for the corresponding metric. For more information about the metrics collected and stored in the `InsightsMetrics` table and a description of the record properties, see [InsightsMetrics overview](https://github.com/microsoft/OMS-docker/blob/vishw).
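As a hedged example against the new `KubePVInventory` table (columns as listed above), the following query summarizes persistent volumes by status and storage class:

```kusto
KubePVInventory
| where TimeGenerated > ago(1h)
| summarize count() by PVStatus, PVStorageClassName
```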
In the following table, details of records collected by Container insights are p
Azure Monitor Logs can help you look for trends, diagnose bottlenecks, forecast, or correlate data that can help you determine whether the current cluster configuration is performing optimally. Pre-defined log searches are provided for you to immediately start using or to customize to return the information the way you want.
-You can perform interactive analysis of data in the workspace by selecting the **View Kubernetes event logs** or **View container logs** option in the preview pane from the **View in analytics** drop-down list. The **Log Search** page appears to the right of the Azure portal page that you were on.
+You can interactively analyze data in the workspace by selecting the **View Kubernetes event logs** or **View container logs** option in the preview pane from the **View in analytics** drop-down list. The **Log Search** page appears to the right of the Azure portal page that you were on.
![Analyze data in Log Analytics](./media/container-insights-analyze/container-health-log-search-example.png)
-The container logs output that's forwarded to your workspace are STDOUT and STDERR. Because Azure Monitor is monitoring Azure-managed Kubernetes (AKS), Kube-system is not collected today because of the large volume of generated data.
+The container log output forwarded to your workspace is STDOUT and STDERR. Because Azure Monitor is monitoring Azure-managed Kubernetes (AKS), the kube-system namespace isn't collected today because of the large volume of generated data.
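As a sketch, assuming log lines land in the standard `ContainerLog` table with a `LogEntrySource` field, STDERR output could be isolated like this:

```kusto
// Sketch: most recent STDERR lines, assuming Container insights writes
// container output to the ContainerLog table with a LogEntrySource field.
ContainerLog
| where LogEntrySource == "stderr"
| project TimeGenerated, Computer, ContainerID, LogEntry
| order by TimeGenerated desc
| take 50
```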
### Example log search queries

It's often useful to build queries that start with an example or two and then modify them to fit your requirements. To help build more advanced queries, you can experiment with the following sample queries:
-| Query | Description |
-|-|-|
-| ContainerInventory<br> &#124; project Computer, Name, Image, ImageTag, ContainerState, CreatedTime, StartedTime, FinishedTime<br> &#124; render table | List all of a container's lifecycle information|
-| KubeEvents_CL<br> &#124; where not(isempty(Namespace_s))<br> &#124; sort by TimeGenerated desc<br> &#124; render table | Kubernetes events|
-| ContainerImageInventory<br> &#124; summarize AggregatedValue = count() by Image, ImageTag, Running | Image inventory |
-| **Select the Line chart display option**:<br> Perf<br> &#124; where ObjectName == "K8SContainer" and CounterName == "cpuUsageNanoCores" &#124; summarize AvgCPUUsageNanoCores = avg(CounterValue) by bin(TimeGenerated, 30m), InstanceName | Container CPU |
-| **Select the Line chart display option**:<br> Perf<br> &#124; where ObjectName == "K8SContainer" and CounterName == "memoryRssBytes" &#124; summarize AvgUsedRssMemoryBytes = avg(CounterValue) by bin(TimeGenerated, 30m), InstanceName | Container memory |
-| InsightsMetrics<br> &#124; where Name == "requests_count"<br> &#124; summarize Val=any(Val) by TimeGenerated=bin(TimeGenerated, 1m)<br> &#124; sort by TimeGenerated asc<br> &#124; project RequestsPerMinute = Val - prev(Val), TimeGenerated <br> &#124; render barchart | Requests Per Minute with Custom Metrics |
+### List all of a container's lifecycle information
+
+```kusto
+ContainerInventory
+| project Computer, Name, Image, ImageTag, ContainerState, CreatedTime, StartedTime, FinishedTime
+| render table
+```
+
+### Kubernetes events
+
+```kusto
+KubeEvents_CL
+| where not(isempty(Namespace_s))
+| sort by TimeGenerated desc
+| render table
+```
+
+### Image inventory
+
+```kusto
+ContainerImageInventory
+| summarize AggregatedValue = count() by Image, ImageTag, Running
+```
+
+### Container CPU
+
+**Select the Line chart display option**
+
+```kusto
+Perf
+| where ObjectName == "K8SContainer" and CounterName == "cpuUsageNanoCores"
+| summarize AvgCPUUsageNanoCores = avg(CounterValue) by bin(TimeGenerated, 30m), InstanceName
+```
+
+### Container memory
+
+**Select the Line chart display option**
+
+```kusto
+Perf
+| where ObjectName == "K8SContainer" and CounterName == "memoryRssBytes"
+| summarize AvgUsedRssMemoryBytes = avg(CounterValue) by bin(TimeGenerated, 30m), InstanceName
+```
+
+### Requests per minute with custom metrics
+
+```kusto
+InsightsMetrics
+| where Name == "requests_count"
+| summarize Val=any(Val) by TimeGenerated=bin(TimeGenerated, 1m)
+| sort by TimeGenerated asc
+| project RequestsPerMinute = Val - prev(Val), TimeGenerated
+| render barchart
+```
## Query Prometheus metrics data
```
InsightsMetrics
```
-To view Prometheus metrics scraped by Azure Monitor filtered by Namespace, specify "prometheus". Here is a sample query to view Prometheus metrics from the `default` kubernetes namespace.
+To view Prometheus metrics scraped by Azure Monitor filtered by Namespace, specify "prometheus". Here's a sample query to view Prometheus metrics from the `default` Kubernetes namespace.
```
InsightsMetrics
```
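A minimal sketch of such a query, assuming the Kubernetes namespace is surfaced as a `namespace` key inside *Tags*:

```kusto
// Sketch: Prometheus metrics scoped to one Kubernetes namespace.
// The "namespace" tag key is an assumption for illustration.
InsightsMetrics
| where Namespace contains "prometheus"
| extend tags = todynamic(Tags)
| where tostring(tags.namespace) == "default"
| project TimeGenerated, Name, Val, Tags
```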
azure-monitor Container Insights Persistent Volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-persistent-volumes.md
Title: Configure PV monitoring with Container insights | Microsoft Docs description: This article describes how you can configure monitoring Kubernetes clusters with persistent volumes with Container insights. Previously updated : 10/20/2020 Last updated : 03/03/2021 # Configure PV monitoring with Container insights
-Starting with agent version *ciprod10052020*, Container insights integrated agent now supports monitoring PV (persistent volume) usage.
-
+Starting with agent version *ciprod10052020*, the Azure Monitor for containers integrated agent supports monitoring PV (persistent volume) usage. With agent version *ciprod01112021*, the agent also supports monitoring PV inventory, including information about the status, storage class, type, access modes, and other details.
## PV metrics
-Container insights automatically starts monitoring PV by collecting the following metrics at 60sec intervals and storing them in the **InsightMetrics** table.
+Container insights automatically starts monitoring PV usage by collecting the following metrics at 60-second intervals and storing them in the **InsightsMetrics** table.
+
+|Metric name |Metric dimensions (tags) |Metric description |
+|-|-|-|
+| `pvUsedBytes`|podUID, podName, pvcName, pvcNamespace, capacityBytes, clusterId, clusterName|Used space in bytes for a specific persistent volume with a claim used by a specific pod. `capacityBytes` is folded in as a dimension in the Tags field to reduce data ingestion cost and to simplify queries.|
+
+Learn more about configuring collected PV metrics [here](https://aka.ms/ci/pvconfig).
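For instance, a sketch that turns the usage metric into a percentage of capacity, assuming the dimensions above appear as top-level keys in *Tags*:

```kusto
// Sketch: PV usage as a percentage of capacity per pod and claim.
// Tag key names are assumed from the dimension list above.
InsightsMetrics
| where Name == "pvUsedBytes"
| extend tags = todynamic(Tags)
| project TimeGenerated,
          PodName = tostring(tags.podName),
          PVCName = tostring(tags.pvcName),
          UsedPercent = Val * 100.0 / toreal(tags.capacityBytes)
```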
-|Metric name |Metric dimension (tags) |Description |
-||||
-| `pvUsedBytes`|`container.azm.ms/pv`|Used space in bytes for a specific persistent volume with a claim used by a specific pod. `pvCapacityBytes` is folded in as a dimension in the Tags field to reduce data ingestion cost and to simplify queries.|
+## PV inventory
+
+Azure Monitor for containers automatically starts monitoring PVs by collecting the following information at 60-second intervals and storing it in the **KubePVInventory** table.
+
+|Data |Data Source| Data Type| Fields|
+|--|--|-|-|
+|Inventory of persistent volumes in a Kubernetes cluster |Kube API |`KubePVInventory` | PVName, PVCapacityBytes, PVCName, PVCNamespace, PVStatus, PVAccessModes, PVType, PVTypeInfo, PVStorageClassName, PVCreationTimestamp, TimeGenerated, ClusterId, ClusterName, _ResourceId |
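As a sketch, the latest inventory record per volume can be pulled with the fields documented above:

```kusto
// Sketch: most recent inventory record for each persistent volume.
KubePVInventory
| summarize arg_max(TimeGenerated, *) by PVName
| project TimeGenerated, PVName, PVStatus, PVType, PVStorageClassName,
          PVCapacityBytes, PVCName, PVCNamespace
```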
## Monitor Persistent Volumes
-Container insights includes pre-configured charts for this metric in a workbook for every cluster. You can find the charts in the Persistent Volume tab of the **Workload Details** workbook directly from an AKS cluster by selecting Workbooks from the left-hand pane, and from the **View Workbooks** drop-down list in the Insight. You can also enable a recommended alert for PV usage, as well as query these metrics in Log Analytics.
+Azure Monitor for containers includes pre-configured charts for this usage metric and inventory information in workbook templates for every cluster. You can also enable a recommended alert for PV usage, and query these metrics in Log Analytics.
+
+### Workload Details Workbook
+
+You can find usage charts for specific workloads on the Persistent Volume tab of the **Workload Details** workbook, directly from an AKS cluster by selecting Workbooks from the left-hand pane, from the **View Workbooks** drop-down list in the Insights pane, or from the **Reports (preview)** tab in the Insights pane.
+### Persistent Volume Details Workbook
+
+You can find an overview of persistent volume inventory in the **Persistent Volume Details** workbook directly from an AKS cluster by selecting Workbooks from the left-hand pane, from the **View Workbooks** drop-down list in the Insights pane, or from the **Reports (preview)** tab in the Insights pane.
+
-![Azure Monitor PV workload workbook example](./media/container-insights-persistent-volumes/pv-workload-example.PNG)
+### Persistent Volume Usage Recommended Alert
+You can enable a recommended alert that notifies you when average PV usage for a pod is above 80%. Learn more about alerting [here](https://docs.microsoft.com/azure/azure-monitor/insights/container-insights-metric-alerts) and about overriding the default threshold [here](https://docs.microsoft.com/azure/azure-monitor/insights/container-insights-metric-alerts#configure-alertable-metrics-in-configmaps).
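To approximate that alert condition ad hoc in Log Analytics, a sketch along these lines could work (same assumed tag keys as the usage query above):

```kusto
// Sketch: pods whose average PV usage over the last hour exceeds 80%.
// Tag key names are assumptions based on the documented dimensions.
InsightsMetrics
| where TimeGenerated > ago(1h) and Name == "pvUsedBytes"
| extend tags = todynamic(Tags)
| summarize AvgUsedPercent = avg(Val * 100.0 / toreal(tags.capacityBytes))
    by PodName = tostring(tags.podName)
| where AvgUsedPercent > 80
```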
## Next steps

- Learn more about collected PV metrics [here](./container-insights-agent-config.md).
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
azure-monitor Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/security-baseline.md
For the underlying platform which is managed by Microsoft, Microsoft treats all
**Guidance**: Use Azure CLI to query and discover Azure Monitor resources within your subscriptions. Ensure appropriate (read) permissions in your tenant and enumerate all Azure subscriptions as well as resources within your subscriptions.

-- [Azure Monitor CLI](https://docs.microsoft.com/cli/azure/monitor?view=azure-cli-latest&amp;preserve-view=true)
+- [Azure Monitor CLI](https://docs.microsoft.com/cli/azure/monitor)
-- [How to view your Azure Subscriptions](https://docs.microsoft.com/powershell/module/az.accounts/get-azsubscription?view=azps-4.8.0&amp;preserve-view=true)
+- [How to view your Azure Subscriptions](https://docs.microsoft.com/powershell/module/az.accounts/get-azsubscription?view=azps-4.8.0&preserve-view=true)
- [Understand Azure RBAC](../role-based-access-control/overview.md)
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/whats-new.md
Title: What's new in Azure Monitor documentation
-description: Significant updates to Azure Monitor documentation updated each month.
--- Previously updated : 02/10/2021
+ Title: "Azure Monitor docs: What's new for February 1, 2021 - February 28, 2021"
+description: "What's new in the Azure Monitor docs for February 1, 2021 - February 28, 2021."
+ Last updated : 03/04/2021
-# What's new in Azure Monitor documentation?
-This article provides lists Azure Monitor articles that are either new or have been significantly updated. It will be refreshed the first week of each month to include article updates from the previous month.
+# Azure Monitor docs: What's new for February 1, 2021 - February 28, 2021
-## January 2021
+Welcome to what's new in the Azure Monitor docs from February 1, 2021 through February 28, 2021. This article lists some of the significant changes to docs during this period.
-### General
-- [Azure Monitor FAQ](faq.md) - Added entry on device information for Application Insights.
-### Agents
-- [Collecting Event Tracing for Windows (ETW) Events for analysis Azure Monitor Logs](./agents/data-sources-event-tracing-windows.md) - New article.
-- [Data Collection Rules in Azure Monitor (preview)](./agents/data-collection-rule-overview.md) - Added links to PowerShell and CLI samples.
+## Alerts
-### Alerts
-- [Configure Azure to connect ITSM tools using Secure Export](./alerts/its