Updates from: 01/14/2021 04:07:13
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/manage-user-accounts-graph-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/manage-user-accounts-graph-api.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory
ms.workload: identity
ms.topic: how-to
-ms.date: 08/03/2020
+ms.date: 01/13/2021
ms.custom: project-no-code
ms.author: mimart
ms.subservice: B2C
@@ -38,85 +38,6 @@ The following user management operations are available in the [Microsoft Graph A
- [Update a user](/graph/api/user-update)
- [Delete a user](/graph/api/user-delete)
-## User properties
-
-### Display name property
-
-The `displayName` is the name to display in Azure portal user management for the user, and in the access token Azure AD B2C returns to the application. This property is required.
-
-### Identities property
-
-A customer account, which could be a consumer, partner, or citizen, can be associated with these identity types:
-- **Local** identity - The username and password are stored locally in the Azure AD B2C directory. We often refer to these identities as "local accounts."
-- **Federated** identity - Also known as a *social* or *enterprise* accounts, the identity of the user is managed by a federated identity provider like Facebook, Microsoft, ADFS, or Salesforce.
-
-A user with a customer account can sign in with multiple identities. For example, username, email, employee ID, government ID, and others. A single account can have multiple identities, both local and social, with the same password.
-
-In the Microsoft Graph API, both local and federated identities are stored in the user `identities` attribute, which is of type [objectIdentity][graph-objectIdentity]. The `identities` collection represents a set of identities used to sign in to a user account. This collection enables the user to sign in to the user account with any of its associated identities.
-
-| Property | Type |Description|
-|:---------------|:--------|:----------|
-|signInType|string| Specifies the user sign-in types in your directory. For local account: `emailAddress`, `emailAddress1`, `emailAddress2`, `emailAddress3`, `userName`, or any other type you like. Social account must be set to `federated`.|
-|issuer|string|Specifies the issuer of the identity. For local accounts (where **signInType** is not `federated`), this property is the local B2C tenant default domain name, for example `contoso.onmicrosoft.com`. For social identity (where **signInType** is `federated`) the value is the name of the issuer, for example `facebook.com`|
-|issuerAssignedId|string|Specifies the unique identifier assigned to the user by the issuer. The combination of **issuer** and **issuerAssignedId** must be unique within your tenant. For local account, when **signInType** is set to `emailAddress` or `userName`, it represents the sign-in name for the user.<br>When **signInType** is set to: <ul><li>`emailAddress` (or starts with `emailAddress` like `emailAddress1`) **issuerAssignedId** must be a valid email address</li><li>`userName` (or any other value), **issuerAssignedId** must be a valid [local part of an email address](https://tools.ietf.org/html/rfc3696#section-3)</li><li>`federated`, **issuerAssignedId** represents the federated account unique identifier</li></ul>|
-
-The following **Identities** property, with a local account identity with a sign-in name, an email address as sign-in, and with a social identity.
-
- ```json
- "identities": [
- {
- "signInType": "userName",
- "issuer": "contoso.onmicrosoft.com",
- "issuerAssignedId": "johnsmith"
- },
- {
- "signInType": "emailAddress",
- "issuer": "contoso.onmicrosoft.com",
- "issuerAssignedId": "jsmith@yahoo.com"
- },
- {
- "signInType": "federated",
- "issuer": "facebook.com",
- "issuerAssignedId": "5eecb0cd"
- }
- ]
- ```
-
-For federated identities, depending on the identity provider, the **issuerAssignedId** is a unique value for a given user per application or development account. Configure the Azure AD B2C policy with the same application ID that was previously assigned by the social provider or another application within the same development account.
-
-### Password profile property
-
-For a local identity, the **passwordProfile** property is required, and contains the user's password. The `forceChangePasswordNextSignIn` property must set to `false`.
-
-For a federated (social) identity, the **passwordProfile** property is not required.
-
-```json
-"passwordProfile" : {
- "password": "password-value",
- "forceChangePasswordNextSignIn": false
- }
-```
-
-### Password policy property
-
-The Azure AD B2C password policy (for local accounts) is based on the Azure Active Directory [strong password strength](../active-directory/authentication/concept-sspr-policy.md) policy. The Azure AD B2C sign-up or sign-in and password reset policies require this strong password strength, and don't expire passwords.
-
-In user migration scenarios, if the accounts you want to migrate have weaker password strength than the [strong password strength](../active-directory/authentication/concept-sspr-policy.md) enforced by Azure AD B2C, you can disable the strong password requirement. To change the default password policy, set the `passwordPolicies` property to `DisableStrongPassword`. For example, you can modify the create user request as follows:
-
-```json
-"passwordPolicies": "DisablePasswordExpiration, DisableStrongPassword"
-```
-
-### Extension properties
-
-Every customer-facing application has unique requirements for the information to be collected. Your Azure AD B2C tenant comes with a built-in set of information stored in properties, such as Given Name, Surname, City, and Postal Code. With Azure AD B2C, you can extend the set of properties stored in each customer account. For more information on defining custom attributes, see [custom attributes](user-flow-custom-attributes.md).
-
-Microsoft Graph API supports creating and updating a user with extension attributes. Extension attributes in the Graph API are named by using the convention `extension_ApplicationClientID_attributename`, where the `ApplicationClientID` is the **Application (client) ID** of the `b2c-extensions-app` application (found in **App registrations** > **All Applications** in the Azure portal). Note that the **Application (client) ID** as it's represented in the extension attribute name includes no hyphens. For example:
-
-```json
-"extension_831374b3bd5041bfaa54263ec9e050fc_loyaltyNumber": "212342"
-```
## Code sample: How to programmatically manage user accounts
@@ -207,4 +128,4 @@ For a full index of the Microsoft Graph API operations supported for Azure AD B2
<!-- LINK -->
[graph-objectIdentity]: /graph/api/resources/objectidentity
-[graph-user]: (https://docs.microsoft.com/graph/api/resources/user)
\ No newline at end of file
+[graph-user]: (https://docs.microsoft.com/graph/api/resources/user)
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/phone-authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/phone-authentication.md
@@ -35,7 +35,7 @@ With phone sign-up and sign-in, the user can sign up for the app using a phone n
> > *&lt;insert: a link to your Privacy Statement&gt;*<br/>*&lt;insert: a link to your Terms of Service&gt;*
-To add your own consent information, customize the following sample and include it in the LocalizedResources for the ContentDefinition used by the self-asserted page with the display control (the Phone-Email-Base.xml file in the phone sign-up & sign-in starter pack):
+To add your own consent information, customize the following sample and include it in the LocalizedResources for the ContentDefinition used by the self-asserted page with the display control (the *Phone_Email_Base.xml* file in the [phone sign-up and sign-in starter pack][starter-pack-phone]):
```xml
<LocalizedResources Id="phoneSignUp.en">
@@ -156,4 +156,4 @@ You can find the phone sign-up and sign-in custom policy starter pack (and other
<!-- LINKS - External -->
[starter-pack]: https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack
-[starter-pack-phone]: https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/scenarios/phone-number-passwordless
\ No newline at end of file
+[starter-pack-phone]: https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/scenarios/phone-number-passwordless
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/trustframeworkpolicy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/trustframeworkpolicy.md
@@ -42,7 +42,7 @@ The **TrustFrameworkPolicy** element contains the following attributes:
| PolicyId | Yes | The unique identifier for the policy. This identifier must be prefixed by *B2C_1A_*. |
| PublicPolicyUri | Yes | The URI for the policy, which is a combination of the tenant ID and the policy ID. |
| DeploymentMode | No | Possible values: `Production` or `Development`. `Production` is the default. Use this property to debug your policy. For more information, see [Collecting Logs](troubleshoot-with-application-insights.md). |
-| UserJourneyRecorderEndpoint | No | The endpoint that is used when **DeploymentMode** is set to `Development`. The value must be `urn:journeyrecorder:applicationinsights`. For more information, see [Collecting Logs](troubleshoot-with-application-insights.md). |
+| UserJourneyRecorderEndpoint | No | The endpoint that is used for logging. The value must be set to `urn:journeyrecorder:applicationinsights` if the attribute exists. For more information, see [Collecting Logs](troubleshoot-with-application-insights.md). |
The following example shows how to specify the **TrustFrameworkPolicy** element:
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/user-migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/user-migration.md
@@ -92,7 +92,7 @@ Before you start the migration process, take the opportunity to clean up your di
### Password policy
-If the accounts you're migrating have weaker password strength than the [strong password strength](../active-directory/authentication/concept-sspr-policy.md) enforced by Azure AD B2C, you can disable the strong password requirement. For more information, see [Password policy property](manage-user-accounts-graph-api.md#password-policy-property).
+If the accounts you're migrating have weaker password strength than the [strong password strength](../active-directory/authentication/concept-sspr-policy.md) enforced by Azure AD B2C, you can disable the strong password requirement. For more information, see [Password policy property](user-profile-attributes.md#password-policy-attribute).
## Next steps
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/user-profile-attributes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/user-profile-attributes.md
@@ -8,20 +8,24 @@ manager: celestedg
ms.service: active-directory
ms.workload: identity
ms.topic: conceptual
-ms.date: 12/07/2020
+ms.date: 01/13/2021
ms.author: mimart
ms.subservice: B2C
---

# User profile attributes
-Your Azure Active Directory (Azure AD) B2C directory user profile comes with a built-in set of attributes, such as given name, surname, city, postal code, and phone number. You can extend the user profile with your own application data without requiring an external data store. Most of the attributes that can be used with Azure AD B2C user profiles are also supported by Microsoft Graph. This article describes supported Azure AD B2C user profile attributes. It also notes those attributes that are not supported by Microsoft Graph, as well as Microsoft Graph attributes that should not be used with Azure AD B2C.
+Your Azure Active Directory (Azure AD) B2C directory user profile comes with a built-in set of attributes, such as given name, surname, city, postal code, and phone number. You can extend the user profile with your own application data without requiring an external data store.
+
+Most of the attributes that can be used with Azure AD B2C user profiles are also supported by Microsoft Graph. This article describes supported Azure AD B2C user profile attributes. It also notes those attributes that are not supported by Microsoft Graph, as well as Microsoft Graph attributes that should not be used with Azure AD B2C.
> [!IMPORTANT]
> You should not use built-in or extension attributes to store sensitive personal data, such as account credentials, government identification numbers, cardholder data, financial account data, healthcare information, or sensitive background information.

You can also integrate with external systems. For example, you can use Azure AD B2C for authentication, but delegate to an external customer relationship management (CRM) or customer loyalty database as the authoritative source of customer data. For more information, see the [remote profile](https://github.com/azure-ad-b2c/samples/tree/master/policies/remote-profile) solution.
+## Azure AD user resource type
+
The table below lists the [user resource type](/graph/api/resources/user) attributes that are supported by the Azure AD B2C directory user profile. It gives the following information about each attribute:

- Attribute name used by Azure AD B2C (followed by the Microsoft Graph name in parentheses, if different)
@@ -35,8 +39,8 @@ The table below lists the [user resource type](/graph/api/resources/user) attrib
|---------|---------|----------|------------|----------|-------------|
|accountEnabled |Boolean|Whether the user account is enabled or disabled: **true** if the account is enabled, otherwise **false**.|Yes|No|Persisted, Output|
|ageGroup |String|The user's age group. Possible values: null, Undefined, Minor, Adult, NotAdult.|Yes|No|Persisted, Output|
-|alternativeSecurityId ([Identities](manage-user-accounts-graph-api.md#identities-property))|String|A single user identity from the external identity provider.|No|No|Input, Persisted, Output|
-|alternativeSecurityIds ([Identities](manage-user-accounts-graph-api.md#identities-property))|alternative securityId collection|A collection of user identities from external identity providers.|No|No|Persisted, Output|
+|alternativeSecurityId ([Identities](#identities-attribute))|String|A single user identity from the external identity provider.|No|No|Input, Persisted, Output|
+|alternativeSecurityIds ([Identities](#identities-attribute))|alternative securityId collection|A collection of user identities from external identity providers.|No|No|Persisted, Output|
|city |String|The city in which the user is located. Max length 128.|Yes|Yes|Persisted, Output|
|consentProvidedForMinor|String|Whether the consent has been provided for a minor. Allowed values: null, granted, denied, or notRequired.|Yes|No|Persisted, Output|
|country |String|The country/region in which the user is located. Example: "US" or "UK". Max length 128.|Yes|Yes|Persisted, Output|
@@ -56,17 +60,17 @@ The table below lists the [user resource type](/graph/api/resources/user) attrib
|mobile (mobilePhone) |String|The primary cellular telephone number for the user. Max length 64.|Yes|No|Persisted, Output|
|netId |String|Net ID.|No|No|Persisted, Output|
|objectId |String|A globally unique identifier (GUID) that is the unique identifier for the user. Example: 12345678-9abc-def0-1234-56789abcde. Read only, Immutable.|Read only|Yes|Input, Persisted, Output|
-|otherMails |String collection|A list of additional email addresses for the user. Example: ["bob@contoso.com", "Robert@fabrikam.com"].|Yes (Alternate email)|No|Persisted, Output|
+|otherMails |String collection|A list of other email addresses for the user. Example: ["bob@contoso.com", "Robert@fabrikam.com"].|Yes (Alternate email)|No|Persisted, Output|
|password |String|The password for the local account during user creation.|No|No|Persisted|
|passwordPolicies |String|Policy of the password. It's a string consisting of different policy names separated by commas. For example, "DisablePasswordExpiration, DisableStrongPassword".|No|No|Persisted, Output|
|physicalDeliveryOfficeName (officeLocation)|String|The office location in the user's place of business. Max length 128.|Yes|No|Persisted, Output|
|postalCode |String|The postal code for the user's postal address. The postal code is specific to the user's country/region. In the United States of America, this attribute contains the ZIP code. Max length 40.|Yes|No|Persisted, Output|
|preferredLanguage |String|The preferred language for the user. Should follow ISO 639-1 Code. Example: "en-US".|No|No|Persisted, Output|
|refreshTokensValidFromDateTime|DateTime|Any refresh tokens issued before this time are invalid, and applications will get an error when using an invalid refresh token to acquire a new access token. If this happens, the application will need to acquire a new refresh token by making a request to the authorize endpoint. Read-only.|No|No|Output|
-|signInNames ([Identities](manage-user-accounts-graph-api.md#identities-property)) |String|The unique sign-in name of the local account user of any type in the directory. Use this attribute to get a user with sign-in value without specifying the local account type.|No|No|Input|
-|signInNames.userName ([Identities](manage-user-accounts-graph-api.md#identities-property)) |String|The unique username of the local account user in the directory. Use this attribute to create or get a user with a specific sign-in username. Specifying this in PersistedClaims alone during Patch operation will remove other types of signInNames. If you would like to add a new type of signInNames, you also need to persist existing signInNames.|No|No|Input, Persisted, Output|
-|signInNames.phoneNumber ([Identities](manage-user-accounts-graph-api.md#identities-property)) |String|The unique phone number of the local account user in the directory. Use this attribute to create or get a user with a specific sign-in phone number. Specifying this attribute in PersistedClaims alone during Patch operation will remove other types of signInNames. If you would like to add a new type of signInNames, you also need to persist existing signInNames.|No|No|Input, Persisted, Output|
-|signInNames.emailAddress ([Identities](manage-user-accounts-graph-api.md#identities-property))|String|The unique email address of the local account user in the directory. Use this to create or get a user with a specific sign-in email address. Specifying this attribute in PersistedClaims alone during Patch operation will remove other types of signInNames. If you would like to add a new type of signInNames, you also need to persist existing signInNames.|No|No|Input, Persisted, Output|
+|signInNames ([Identities](#identities-attribute)) |String|The unique sign-in name of the local account user of any type in the directory. Use this attribute to get a user with sign-in value without specifying the local account type.|No|No|Input|
+|signInNames.userName ([Identities](#identities-attribute)) |String|The unique username of the local account user in the directory. Use this attribute to create or get a user with a specific sign-in username. Specifying this in PersistedClaims alone during Patch operation will remove other types of signInNames. If you would like to add a new type of signInNames, you also need to persist existing signInNames.|No|No|Input, Persisted, Output|
+|signInNames.phoneNumber ([Identities](#identities-attribute)) |String|The unique phone number of the local account user in the directory. Use this attribute to create or get a user with a specific sign-in phone number. Specifying this attribute in PersistedClaims alone during Patch operation will remove other types of signInNames. If you would like to add a new type of signInNames, you also need to persist existing signInNames.|No|No|Input, Persisted, Output|
+|signInNames.emailAddress ([Identities](#identities-attribute))|String|The unique email address of the local account user in the directory. Use this to create or get a user with a specific sign-in email address. Specifying this attribute in PersistedClaims alone during Patch operation will remove other types of signInNames. If you would like to add a new type of signInNames, you also need to persist existing signInNames.|No|No|Input, Persisted, Output|
|state |String|The state or province in the user's address. Max length 128.|Yes|Yes|Persisted, Output|
|streetAddress |String|The street address of the user's place of business. Max length 1024.|Yes|Yes|Persisted, Output|
|strongAuthentication AlternativePhoneNumber<sup>1</sup>|String|The secondary telephone number of the user, used for multi-factor authentication.|Yes|No|Persisted, Output|
@@ -82,6 +86,74 @@ The table below lists the [user resource type](/graph/api/resources/user) attrib
<sup>1 </sup>Not supported by Microsoft Graph<br><sup>2 </sup>For more information, see [MFA phone number attribute](#mfa-phone-number-attribute)<br><sup>3 </sup>Should not be used with Azure AD B2C
+## Display name attribute
+
+The `displayName` is the name to display in Azure portal user management for the user, and in the access token Azure AD B2C returns to the application. This property is required.
+
+## Identities attribute
+
+A customer account, which could be a consumer, partner, or citizen, can be associated with these identity types:
+
+- **Local** identity - The username and password are stored locally in the Azure AD B2C directory. We often refer to these identities as "local accounts."
+- **Federated** identity - Also known as a *social* or *enterprise* account. The identity of the user is managed by a federated identity provider like Facebook, Microsoft, ADFS, or Salesforce.
+
+A user with a customer account can sign in with multiple identities. For example, username, email, employee ID, government ID, and others. A single account can have multiple identities, both local and social, with the same password.
+
+In the Microsoft Graph API, both local and federated identities are stored in the user `identities` attribute, which is of type [objectIdentity][graph-objectIdentity]. The `identities` collection represents a set of identities used to sign in to a user account. This collection enables the user to sign in to the user account with any of its associated identities.
+
+| Name | Type |Description|
+|:---------------|:--------|:----------|
+|signInType|string| Specifies the sign-in type in your directory. For a local account: `emailAddress`, `emailAddress1`, `emailAddress2`, `emailAddress3`, `userName`, or any other custom type. For a social account, the value must be set to `federated`.|
+|issuer|string|Specifies the issuer of the identity. For local accounts (where **signInType** is not `federated`), this property is the local B2C tenant default domain name, for example `contoso.onmicrosoft.com`. For a social identity (where **signInType** is `federated`), the value is the name of the issuer, for example `facebook.com`.|
+|issuerAssignedId|string|Specifies the unique identifier assigned to the user by the issuer. The combination of **issuer** and **issuerAssignedId** must be unique within your tenant. For a local account, when **signInType** is set to `emailAddress` or `userName`, it represents the sign-in name for the user.<br>When **signInType** is set to: <ul><li>`emailAddress` (or starts with `emailAddress`, like `emailAddress1`), **issuerAssignedId** must be a valid email address</li><li>`userName` (or any other value), **issuerAssignedId** must be a valid [local part of an email address](https://tools.ietf.org/html/rfc3696#section-3)</li><li>`federated`, **issuerAssignedId** represents the federated account unique identifier</li></ul>|
+
+The following example shows the **Identities** attribute with a local account identity that uses a sign-in name, a local account identity that uses an email address for sign-in, and a social identity.
+
+ ```json
+ "identities": [
+ {
+ "signInType": "userName",
+ "issuer": "contoso.onmicrosoft.com",
+ "issuerAssignedId": "johnsmith"
+ },
+ {
+ "signInType": "emailAddress",
+ "issuer": "contoso.onmicrosoft.com",
+ "issuerAssignedId": "jsmith@yahoo.com"
+ },
+ {
+ "signInType": "federated",
+ "issuer": "facebook.com",
+ "issuerAssignedId": "5eecb0cd"
+ }
+ ]
+ ```
+
+For federated identities, depending on the identity provider, the **issuerAssignedId** is a unique value for a given user per application or development account. Configure the Azure AD B2C policy with the same application ID that was previously assigned by the social provider or another application within the same development account.
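
The constraints in the table above can be sketched in code. The following is a minimal, hedged illustration, not a Microsoft SDK call: the helper name and the simplified validation patterns are invented for this sketch. It assembles an `identities` collection like the JSON example and checks the basic **signInType** rules.

```python
import re

# Hypothetical helper: build one entry for the Graph API `identities`
# collection. For local identities, the issuer is assumed to be the B2C
# tenant default domain (e.g. contoso.onmicrosoft.com).
def make_identity(sign_in_type, issuer, issuer_assigned_id):
    if sign_in_type == "federated":
        # issuer is the provider name (e.g. facebook.com) and
        # issuerAssignedId is the provider's unique user identifier.
        pass
    elif sign_in_type.startswith("emailAddress"):
        # issuerAssignedId must be a valid email address (simplified check).
        if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", issuer_assigned_id):
            raise ValueError("issuerAssignedId must be a valid email address")
    else:
        # userName (or any other value): issuerAssignedId must be a valid
        # local part of an email address (simplified check).
        if not re.fullmatch(r"[A-Za-z0-9.!#$%&'*+/=?^_`{|}~-]+", issuer_assigned_id):
            raise ValueError("issuerAssignedId must be a valid email local part")
    return {
        "signInType": sign_in_type,
        "issuer": issuer,
        "issuerAssignedId": issuer_assigned_id,
    }

# Mirrors the three identities shown in the JSON example above.
identities = [
    make_identity("userName", "contoso.onmicrosoft.com", "johnsmith"),
    make_identity("emailAddress", "contoso.onmicrosoft.com", "jsmith@yahoo.com"),
    make_identity("federated", "facebook.com", "5eecb0cd"),
]
```

The resulting list can be placed in the `identities` property of a Graph API create-user request body.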
+
+## Password profile attribute
+
+For a local identity, the **passwordProfile** attribute is required and contains the user's password. The `forceChangePasswordNextSignIn` attribute must be set to `false`.
+
+For a federated (social) identity, the **passwordProfile** attribute is not required.
+
+```json
+"passwordProfile" : {
+ "password": "password-value",
+ "forceChangePasswordNextSignIn": false
+ }
+```
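
As a hedged sketch of the rule above (the helper name is hypothetical, not a Graph SDK API), the **passwordProfile** portion of a create-user request can be derived from the sign-in type: required for a local identity, omitted for a federated one.

```python
# Hypothetical helper: build the passwordProfile portion of a Graph API
# create-user request. A local identity requires the profile, with
# forceChangePasswordNextSignIn set to false; a federated (social)
# identity needs no passwordProfile, so None is returned for it.
def make_password_profile(sign_in_type, password=None):
    if sign_in_type == "federated":
        return None
    if not password:
        raise ValueError("a password is required for a local identity")
    return {
        "password": password,
        "forceChangePasswordNextSignIn": False,
    }

local_profile = make_password_profile("emailAddress", "password-value")
federated_profile = make_password_profile("federated")
```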
+
+## Password policy attribute
+
+The Azure AD B2C password policy (for local accounts) is based on the Azure Active Directory [strong password strength](../active-directory/authentication/concept-sspr-policy.md) policy. The Azure AD B2C sign-up or sign-in and password reset policies require this strong password strength, and don't expire passwords.
+
+In user migration scenarios, if the accounts you want to migrate have weaker password strength than the [strong password strength](../active-directory/authentication/concept-sspr-policy.md) enforced by Azure AD B2C, you can disable the strong password requirement. To change the default password policy, set the `passwordPolicies` attribute to `DisableStrongPassword`. For example, you can modify the create user request as follows:
+
+```json
+"passwordPolicies": "DisablePasswordExpiration, DisableStrongPassword"
+```
+ ## MFA phone number attribute When using a phone for multi-factor authentication (MFA), the mobile phone is used to verify the user identity. To [add](https://docs.microsoft.com/graph/api/authentication-post-phonemethods) a new phone number programatically, [update](https://docs.microsoft.com/graph/api/b2cauthenticationmethodspolicy-update), [get](https://docs.microsoft.com/graph/api/b2cauthenticationmethodspolicy-get), or [delete](https://docs.microsoft.com/graph/api/phoneauthenticationmethod-delete) the phone number, use MS Graph API [phone authentication method](https://docs.microsoft.com/graph/api/resources/phoneauthenticationmethod).
@@ -90,23 +162,24 @@ In Azure AD B2C [custom policies](custom-policy-overview.md), the phone number i
## Extension attributes
-You'll often need to create your own attributes, as in the following cases:
-
-- A customer-facing application needs to persist for an attribute like **LoyaltyNumber**.
-- An identity provider has a unique user identifier like **uniqueUserGUID** that must be saved.
-- A custom user journey needs to persist for a state of a user, like **migrationStatus**.
+Every customer-facing application has unique requirements for the information to be collected. Your Azure AD B2C tenant comes with a built-in set of information stored in properties, such as Given Name, Surname, and Postal Code. With Azure AD B2C, you can extend the set of properties stored in each customer account. For more information, see [Add user attributes and customize user input in Azure Active Directory B2C](configure-user-input.md).
-Azure AD B2C extends the set of attributes stored on each user account. Extension attributes [extend the schema](/graph/extensibility-overview#schema-extensions) of the user objects in the directory. The extension attributes can only be registered on an application object, even though they might contain data for a user. The extension attribute is attached to the application called b2c-extensions-app. Do not modify this application, as it's used by Azure AD B2C for storing user data. You can find this application under Azure Active Directory App registrations.
+Extension attributes [extend the schema](/graph/extensibility-overview#schema-extensions) of the user objects in the directory. The extension attributes can only be registered on an application object, even though they might contain data for a user. The extension attribute is attached to the application called `b2c-extensions-app`. Do not modify this application, as it's used by Azure AD B2C for storing user data. You can find this application under Azure Active Directory App registrations.
> [!NOTE]
> - Up to 100 extension attributes can be written to any user account.
> - If the b2c-extensions-app application is deleted, those extension attributes are removed from all users along with any data they contain.
> - If an extension attribute is deleted by the application, it's removed from all user accounts and the values are deleted.
-> - The underlying name of the extension attribute is generated in the format "Extension_" + Application ID + "_" + Attribute name. For example, if you create an extension attribute LoyaltyNumber, and the b2c-extensions-app Application ID is 831374b3-bd50-41bf-aa54-263ec9e050fc, the underlying extension attribute name will be: extension_831374b3bd5041bfaa54263ec9e050fc_LoyaltyNumber. You use the underlying name when you run Graph API queries to create or update user accounts.
-The following data types are supported when defining a property in a schema extension:
+Extension attributes in the Graph API are named by using the convention `extension_ApplicationClientID_AttributeName`, where the `ApplicationClientID` is the **Application (client) ID** of the `b2c-extensions-app` application (found in **App registrations** > **All Applications** in the Azure portal). Note that the **Application (client) ID** as it's represented in the extension attribute name includes no hyphens. For example:
+
+```json
+"extension_831374b3bd5041bfaa54263ec9e050fc_loyaltyNumber": "212342"
+```
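
The naming convention described above is mechanical enough to compute. Here is a small sketch (the function name is invented for illustration): strip the hyphens from the **Application (client) ID** and join the pieces with underscores.

```python
# Hypothetical helper: compute the underlying extension attribute name used
# in Graph API requests. The hyphens in the b2c-extensions-app
# Application (client) ID are removed before it is embedded in the name.
def extension_attribute_name(app_client_id, attribute_name):
    return "extension_{}_{}".format(app_client_id.replace("-", ""), attribute_name)

name = extension_attribute_name("831374b3-bd50-41bf-aa54-263ec9e050fc", "loyaltyNumber")
```

Given the Application (client) ID `831374b3-bd50-41bf-aa54-263ec9e050fc`, this yields the attribute name shown in the JSON example above.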
-|Property type |Remarks |
+The following data types are supported when defining an attribute in a schema extension:
+
+|Type |Remarks |
|--------------|---------|
|Boolean | Possible values: **true** or **false**. |
|DateTime | Must be specified in ISO 8601 format. Will be stored in UTC. |
@@ -114,6 +187,8 @@ The following data types are supported when defining a property in a schema exte
|String | 256 characters maximum. |

## Next steps
+
Learn more about extension attributes:
+
- [Schema extensions](/graph/extensibility-overview#schema-extensions)
- [Define custom attributes](user-flow-custom-attributes.md)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/use-scim-to-provision-users-and-groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
@@ -8,7 +8,7 @@ ms.service: active-directory
ms.subservice: app-provisioning
ms.workload: identity
ms.topic: tutorial
-ms.date: 09/15/2020
+ms.date: 01/12/2021
ms.author: kenwith
ms.reviewer: arvinh
ms.custom: contperf-fy21q2
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-mfa-userdevicesettings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-userdevicesettings.md
@@ -70,7 +70,7 @@ Get-MgUserAuthenticationPhoneMethod -UserId balas@contoso.com
Create a mobile phone authentication method for a specific user.

```powershell
-New-MgUserAuthenticationPhoneMethod -UserId balas@contoso.com -phoneType ΓÇ£mobileΓÇ¥ -phoneNumber "+1 7748933135"
+New-MgUserAuthenticationPhoneMethod -UserId balas@contoso.com -phoneType "mobile" -phoneNumber "+1 7748933135"
``` Remove a specific phone method for a user
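The `-phoneNumber` value in the snippet above uses a `+<countryCode> <digits>` shape. A hedged helper for normalizing user input into that shape (the function is an illustration, not part of the Microsoft.Graph PowerShell module):

```python
import re

def format_auth_phone(country_code: str, number: str) -> str:
    """Normalize a phone number into the "+<countryCode> <digits>" form used
    in the example above (e.g. "+1 7748933135")."""
    digits = re.sub(r"\D", "", number)  # keep digits only
    return f"+{country_code} {digits}"

print(format_auth_phone("1", "774-893-3135"))
# +1 7748933135
```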
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-restrict-your-app-to-a-set-of-users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md
@@ -30,8 +30,8 @@ Tenant administrators and developers can restrict an app to a specific set of us
The option to restrict an app to a specific set of users or security groups in a tenant works with the following types of applications: -- Applications configured for federated single sign-on with SAML-based authentication-- Application proxy applications that use Azure AD pre-authentication
+- Applications configured for federated single sign-on with SAML-based authentication.
+- Application proxy applications that use Azure AD pre-authentication.
- Applications built directly on the Azure AD application platform that use OAuth 2.0/OpenID Connect authentication after a user or admin has consented to that application. > [!NOTE]
@@ -43,50 +43,40 @@ There are two ways to create an application with enabled user assignment. One re
### Enterprise applications (requires the Global Administrator role)
-1. Go to the [**Azure portal**](https://portal.azure.com/) and sign in as a **Global Administrator**.
-1. On the top bar, select the signed-in account.
-1. Under **Directory**, select the Azure AD tenant where the app will be registered.
-1. In the navigation on the left, select **Azure Active Directory**. If Azure Active Directory is not available in the navigation pane, follow these steps:
-
- 1. Select **All services** at the top of the main left-hand navigation menu.
- 1. Type in **Azure Active Directory** in the filter search box, and then select the **Azure Active Directory** item from the result.
-
-1. In the **Azure Active Directory** pane, select **Enterprise Applications** from the **Azure Active Directory** left-hand navigation menu.
-1. Select **All Applications** to view a list of all your applications.
-
- If you do not see the application you want show up here, use the various filters at the top of the **All applications** list to restrict the list or scroll down the list to locate your application.
-
-1. Select the application you want to assign a user or security group to from the list.
-1. On the application's **Overview** page, select **Properties** from the application’s left-hand navigation menu.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a> as a **Global Administrator**.
+1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **Enterprise Applications** > **All applications**.
+1. Select the application you want to assign a user or a security group to from the list.
+ Use the filters at the top of the window to search for a specific application.
+1. On the application's **Overview** page, under **Manage**, select **Properties**.
1. Locate the setting **User assignment required?** and set it to **Yes**. When this option is set to **Yes**, users in the tenant must first be assigned to this application or they won't be able to sign in to this application.
-1. Select **Save** to save this configuration change.
+1. Select **Save**.
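The **User assignment required?** toggle corresponds to the `appRoleAssignmentRequired` property on the app's service principal in Microsoft Graph, so it can also be set with a PATCH request. A minimal sketch that only builds the request (the helper name and the placeholder object ID are illustrative):

```python
def user_assignment_patch(sp_object_id: str) -> tuple[str, dict]:
    """Build the Graph PATCH request that enables "User assignment required?"
    on a service principal."""
    url = f"https://graph.microsoft.com/v1.0/servicePrincipals/{sp_object_id}"
    body = {"appRoleAssignmentRequired": True}
    return url, body

url, body = user_assignment_patch("00000000-0000-0000-0000-000000000000")
```

Send `body` as the JSON payload of a PATCH to `url` with an appropriately scoped bearer token.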
### App registration
-1. Go to the [**Azure portal**](https://portal.azure.com/).
-1. On the top bar, select the signed-in account.
-1. Under **Directory**, select the Azure AD tenant where the app will be registered.
-1. In the navigation on the left, select **Azure Active Directory**.
-1. In the **Azure Active Directory** pane, select **App Registrations** from the **Azure Active Directory** left-hand navigation menu.
-1. Create or select the app you want to manage. You need to be **Owner** of this app registration.
-1. On the application's **Overview** page, follow the **Managed application in local directory** link under the essentials in the top of the page. This will take you to the _managed Enterprise Application_ of your app registration.
-1. From the navigation blade on the left, select **Properties**.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations**.
+1. Create or select the app you want to manage. You need to be the **Owner** of this application.
+1. On the application's **Overview** page, select the **Managed application in local directory** link in the **Essentials** section.
+1. Under **Manage**, select **Properties**.
1. Locate the setting **User assignment required?** and set it to **Yes**. When this option is set to **Yes**, users in the tenant must first be assigned to this application or they won't be able to sign in to this application.
-1. Select **Save** to save this configuration change.
+1. Select **Save**.
## Assign users and groups to the app Once you've configured your app to enable user assignment, you can go ahead and assign users and groups to the app.
-1. Select the **Users and groups** pane in the enterprise application’s left-hand navigation menu.
-1. At the top of the **Users and groups** list, select the **Add user** button to open the **Add Assignment** pane.
-1. Select the **Users** selector from the **Add Assignment** pane.
+1. Under **Manage**, select **Users and groups** > **Add user/group**.
+1. Select the **Users** selector.
A list of users and security groups will be shown along with a textbox to search and locate a certain user or group. This screen allows you to select multiple users and groups in one go.
-1. Once you are done selecting the users and groups, press the **Select** button on bottom to move to the next part.
+1. Once you are done selecting the users and groups, select **Select**.
1. (Optional) If you have defined App roles in your application, you can use the **Select role** option to assign the selected users and groups to one of the application's roles.
-1. Press the **Assign** button on the bottom to finish the assignments of users and groups to the app.
+1. Select **Assign** to complete the assignments of users and groups to the app.
1. Confirm that the users and groups you added are showing up in the updated **Users and groups** list. ## More information
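The portal assignment steps above have a Microsoft Graph counterpart: POST an `appRoleAssignment` to `/servicePrincipals/{id}/appRoleAssignedTo`. A hedged sketch of the payload (the helper is illustrative; the all-zero `appRoleId` denotes the default access role when the app defines no roles):

```python
def app_role_assignment(sp_object_id: str, principal_id: str,
                        app_role_id: str = "00000000-0000-0000-0000-000000000000") -> dict:
    """Build the body for POST /servicePrincipals/{sp_object_id}/appRoleAssignedTo."""
    return {
        "principalId": principal_id,  # object ID of the user or group being assigned
        "resourceId": sp_object_id,   # object ID of the app's service principal
        "appRoleId": app_role_id,     # a role defined by the app, or all zeros for default access
    }
```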
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/mobile-sso-support-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/mobile-sso-support-overview.md
@@ -19,6 +19,8 @@ ms.author: nichola
Single sign-on (SSO) is a key offering of the Microsoft identity platform and Azure Active Directory, providing easy and secure logins for users of your app. In addition, app protection policies (APP) enable support of the key security policies that keep your user's data safe. Together, these features enable secure user logins and management of your app's data.
+> [!VIDEO https://www.youtube.com/embed/JpeMeTjQJ04]
+ This article explains why SSO and APP are important and provides the high-level guidance for building mobile applications that support these features. This applies for both phone and tablet apps. If you're an IT administrator that wants to deploy SSO across your organization's Azure Active Directory tenant, check out our [guidance for planning a single sign-on deployment](../manage-apps/plan-sso-deployment.md) ## About single sign-on and app protection policies
@@ -74,4 +76,4 @@ Finally, [add the Intune SDK](/mem/intune/developer/app-sdk-get-started) to your
- [Authorization agents and how to enable them](./msal-android-single-sign-on.md) - [Get started with the Microsoft Intune App SDK](/mem/intune/developer/app-sdk-get-started) - [Configure settings for the Intune App SDK](/mem/intune/developer/app-sdk-ios#configure-settings-for-the-intune-app-sdk)-- [Microsoft Intune protected apps](/mem/intune/apps/apps-supported-intune-apps)\ No newline at end of file
+- [Microsoft Intune protected apps](/mem/intune/apps/apps-supported-intune-apps)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-national-cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-national-cloud.md
@@ -67,19 +67,21 @@ To enable your MSAL.js application for sovereign clouds:
### Step 1: Register your application
-1. Sign in to the [Azure portal](https://portal.azure.us/).
+1. Sign in to the <a href="https://portal.azure.us/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
To find Azure portal endpoints for other national clouds, see [App registration endpoints](authentication-national-cloud.md#app-registration-endpoints).
-1. If your account gives you access to more than one tenant, select your account in the upper-right corner, and set your portal session to the desired Azure AD tenant.
-1. Go to the [App registrations](https://aka.ms/ra/ff) page on the Microsoft identity platform for developers.
-1. When the **Register an application** page appears, enter a name for your application.
+1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations** > **New registration**.
+1. Enter a **Name** for your application. Users of your app might see this name, and you can change it later.
1. Under **Supported account types**, select **Accounts in any organizational directory**. 1. In the **Redirect URI** section, select the **Web** platform and set the value to the application's URL based on your web server. See the next sections for instructions on how to set and obtain the redirect URL in Visual Studio and Node. 1. Select **Register**.
-1. On the app **Overview** page, note down the **Application (client) ID** value.
-1. This tutorial requires you to enable the [implicit grant flow](v2-oauth2-implicit-grant-flow.md). In the left pane of the registered application, select **Authentication**.
-1. In **Advanced settings**, under **Implicit grant**, select the **ID tokens** and **Access tokens** check boxes. ID tokens and access tokens are required because this app needs to sign in users and call an API.
+1. On the **Overview** page, note down the **Application (client) ID** value for later use.
+ This tutorial requires you to enable the [implicit grant flow](v2-oauth2-implicit-grant-flow.md).
+1. Under **Manage**, select **Authentication**.
+1. Under **Implicit grant**, select **ID tokens** and **Access tokens**. ID tokens and access tokens are required because this app needs to sign in users and call an API.
1. Select **Save**. ### Step 2: Set up your web server or project
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-aspnet-core-webapp-calls-graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp-calls-graph.md
@@ -35,7 +35,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> > ### Option 1: Register and auto configure your app and then download your code sample >
-> 1. Go to the [Azure portal - App registrations](https://aka.ms/aspnetcore-webapp-calls-graph-quickstart-v2).
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/AspNetCoreWebAppQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
> 1. Enter a name for your application and select **Register**. > 1. Follow the instructions to download and automatically configure your new application for you in one click. >
@@ -49,10 +49,10 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> 1. Search for and select **Azure Active Directory**. > 1. Under **Manage**, select **App registrations** > **New registration**. > 1. Enter a **Name** for your application, for example `AspNetCoreWebAppCallsGraph-Quickstart`. Users of your app might see this name, and you can change it later.
-> 1. Enter a **Redirect URI** of `https://localhost:44321/signin-oidc`
+> 1. Enter a **Redirect URI** of `https://localhost:44321/signin-oidc`.
> 1. Select **Register**. > 1. Under **Manage**, select **Authentication**.
-> 1. Enter a **Logout URL** of `https://localhost:44321/signout-oidc`
+> 1. Enter a **Logout URL** of `https://localhost:44321/signout-oidc`.
> 1. Select **Save**. > 1. Under **Manage**, select **Certificates & secrets** > **New client secret**. > 1. Enter a **Description**, for example `clientsecret1`.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-aspnet-core-webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp.md
@@ -35,7 +35,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> > ### Option 1: Register and auto configure your app and then download your code sample >
-> 1. Go to the [Azure portal - App registrations](https://aka.ms/aspnetcore2-1-aad-quickstart-v2).
+> 1. Go to the <a href="https://aka.ms/aspnetcore2-1-aad-quickstart-v2/" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
> 1. Enter a name for your application and select **Register**. > 1. Follow the instructions to download and automatically configure your new application for you in one click. >
@@ -49,11 +49,11 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> 1. Search for and select **Azure Active Directory**. > 1. Under **Manage**, select **App registrations** > **New registration**. > 1. Enter a **Name** for your application, for example `AspNetCore-Quickstart`. Users of your app might see this name, and you can change it later.
-> 1. Enter a **Redirect URI** of `https://localhost:44321/`
+> 1. Enter a **Redirect URI** of `https://localhost:44321/`.
> 1. Select **Register**. > 1. Under **Manage**, select **Authentication**.
-> 1. Under **Redirect URIs**, select **Add URI**, and then enter `https://localhost:44321/signin-oidc`
-> 1. Enter a **Logout URL** of `https://localhost:44321/signout-oidc`
+> 1. Under **Redirect URIs**, select **Add URI**, and then enter `https://localhost:44321/signin-oidc`.
+> 1. Enter a **Logout URL** of `https://localhost:44321/signout-oidc`.
> 1. Under **Implicit grant**, select **ID tokens**. > 1. Select **Save**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-aspnet-webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-webapp.md
@@ -35,7 +35,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> > ### Option 1: Register and auto configure your app and then download your code sample >
-> 1. Go to the new [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/AspNetWebAppQuickstartPage/sourceType/docs) pane.
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/AspNetWebAppQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
> 1. Enter a name for your application and select **Register**. > 1. Follow the instructions to download and automatically configure your new application for you in one click. >
@@ -50,7 +50,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> 1. Under **Manage**, select **App registrations** > **New registration**. > 1. Enter a **Name** for your application, for example `ASPNET-Quickstart`. Users of your app might see this name, and you can change it later. > 1. Add `https://localhost:44368/` in **Redirect URI**, and select **Register**.
-> 1. From the left navigation pane under the Manage section, select **Authentication**
+> 1. Under **Manage**, select **Authentication**.
> 1. Under the **Implicit Grant** sub-section, select **ID tokens**. > 1. Select **Save**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-ios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-ios.md
@@ -43,7 +43,7 @@ The quickstart applies to both iOS and macOS apps. Some steps are needed only fo
> ### Option 1: Register and auto configure your app and then download the code sample > #### Step 1: Register your application > To register your app,
-> 1. Go to the new [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/IosQuickstartPage/sourceType/docs) pane.
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/IosQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
> 1. Enter a name for your application and select **Register**. > 1. Follow the instructions to download and automatically configure your new application with just one click. >
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-java-webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-java-webapp.md
@@ -34,7 +34,7 @@ To run this sample, you need:
> > ### Option 1: Register and automatically configure your app, and then download the code sample >
-> 1. Go to the [Azure portal > **Registration an application**](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/JavaQuickstartPage/sourceType/docs) quickstart experience.
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/JavaQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
> 1. Enter a name for your application, and then select **Register**. > 1. Follow the instructions in the portal's quickstart experience to download the automatically configured application code. >
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript.md
@@ -35,9 +35,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> > ### Option 1 (Express): Register and auto configure your app and then download your code sample >
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a> by using either a work or school account, or a personal Microsoft account.
-> 1. If your account gives you access to more than one tenant, select the account at the top right, and then set your portal session to the Azure Active Directory (Azure AD) tenant you want to use.
-> 1. Go to the new [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/JavascriptSpaQuickstartPage/sourceType/docs) pane.
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/JavascriptSpaQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
> 1. Enter a name for your application. > 1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**. > 1. Select **Register**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-netcore-daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-netcore-daemon.md
@@ -36,7 +36,7 @@ This quickstart requires [.NET Core 3.1](https://www.microsoft.com/net/download/
> > ### Option 1: Register and auto configure your app and then download your code sample >
-> 1. Go to the new [Azure portal - App registrations](https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/DotNetCoreDaemonQuickstartPage/sourceType/docs) pane.
+> 1. Go to the <a href="https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/DotNetCoreDaemonQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
> 1. Enter a name for your application and select **Register**. > 1. Follow the instructions to download and automatically configure your new application with just one click. >
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-nodejs-webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-webapp.md
@@ -39,7 +39,7 @@ In this quickstart, you download and run a code sample that demonstrates how to
1. Select **Register** to create the app. 1. On the app's **Overview** page, find the **Application (client) ID** value and record it for later. You'll need this value to configure the application later in this project. 1. Under **Manage**, select **Authentication**.
-1. Select **Add a platform** > **Web**
+1. Select **Add a platform** > **Web**.
1. In the **Redirect URIs** section, enter `http://localhost:3000/auth/openid/return`. 1. Enter a **Logout URL** `https://localhost:3000`. 1. In the Implicit grant section, check **ID tokens** as this sample requires the [Implicit grant flow](./v2-oauth2-implicit-grant-flow.md) to be enabled to sign in the user.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-python-daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-python-daemon.md
@@ -39,7 +39,7 @@ To run this sample, you need:
> > ### Option 1: Register and auto configure your app and then download your code sample >
-> 1. Go to the new [Azure portal - App registrations](https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/PythonDaemonQuickstartPage/sourceType/docs) pane.
+> 1. Go to the <a href="https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/PythonDaemonQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
> 1. Enter a name for your application and select **Register**. > 1. Follow the instructions to download and automatically configure your new application with just one click. >
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-python-webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-python-webapp.md
@@ -36,7 +36,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> > ### Option 1: Register and auto configure your app and then download your code sample >
-> 1. Go to the [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/PythonQuickstartPage/sourceType/docs).
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/PythonQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
> 1. Enter a name for your application and select **Register**. > 1. Follow the instructions to download and automatically configure your new application. >
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-uwp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-uwp.md
@@ -36,7 +36,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> > ### Option 1: Register and auto configure your app and then download your code sample >
-> 1. Go to the new [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/UwpQuickstartPage/sourceType/docs) pane.
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/UwpQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
> 1. Enter a name for your application and select **Register**. > 1. Follow the instructions to download and automatically configure your new application for you in one click. >
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-windows-desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-windows-desktop.md
@@ -33,7 +33,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> > ### Option 1: Register and auto configure your app and then download your code sample >
-> 1. Go to the new [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/WinDesktopQuickstartPage/sourceType/docs).
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/WinDesktopQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
> 1. Enter a name for your application and select **Register**. > 1. Follow the instructions to download and automatically configure your new application with just one click. >
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-asp-webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-asp-webapp.md
@@ -357,7 +357,7 @@ To register your application and add your application registration information t
To quickly register your application, follow these steps:
-1. Go to the new [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/AspNetWebAppQuickstartPage/sourceType/docs) pane.
+1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/AspNetWebAppQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
1. Enter a name for your application and select **Register**. 1. Follow the instructions to download and automatically configure your new application in a single click.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-windows-desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-windows-desktop.md
@@ -93,7 +93,7 @@ You can register your application in either of two ways.
### Option 1: Express mode You can quickly register your application by doing the following:
-1. Go to the [Azure portal - Application Registration](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/WinDesktopQuickstartPage/sourceType/docs).
+1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/WinDesktopQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
1. Enter a name for your application and select **Register**. 1. Follow the instructions to download and automatically configure your new application with just one click.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-permissions-and-consent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-permissions-and-consent.md
@@ -162,10 +162,11 @@ Applications are able to note which permissions they require (both delegated and
#### To configure the list of statically requested permissions for an application
-1. Go to your application in the [Azure portal – App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience, or [create an app](quickstart-register-app.md) if you haven't already.
-2. Locate the **API Permissions** section, and within the API permissions click Add a permission.
-3. Select **Microsoft Graph** from the list of available APIs and then add the permissions that your app requires.
-3. **Save** the app registration.
+1. Go to your application in the <a href="https://go.microsoft.com/fwlink/?linkid=2083908" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
+1. Select an application, or [create an app](quickstart-register-app.md) if you haven't already.
+1. On the application's **Overview** page, under **Manage**, select **API Permissions** > **Add a permission**.
+1. Select **Microsoft Graph** from the list of available APIs and then add the permissions that your app requires.
+1. Select **Add Permissions**.
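Statically requested permissions configured through these steps land in the app manifest's `requiredResourceAccess` collection. A hedged sketch of one entry (the first GUID is Microsoft Graph's well-known `appId`; the second is widely documented as the `User.Read` delegated permission ID, but verify both against your own manifest):

```python
def graph_permission_entry(scope_id: str) -> dict:
    """Build one requiredResourceAccess entry requesting a delegated
    Microsoft Graph permission."""
    return {
        "resourceAppId": "00000003-0000-0000-c000-000000000000",  # Microsoft Graph
        "resourceAccess": [{"id": scope_id, "type": "Scope"}],    # "Scope" = delegated permission
    }

entry = graph_permission_entry("e1fe6dd8-ba31-4d61-89e7-88639da4683d")  # User.Read
```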
### Recommended: Sign the user into your app
active-directory https://docs.microsoft.com/en-us/azure/active-directory/devices/faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/faq.md
@@ -141,6 +141,12 @@ See below on how these actions can be rectified.
> >In both cases, you must re-register the device manually on each of these devices. To review whether the device was previously registered, you can [troubleshoot devices using the dsregcmd command](troubleshoot-device-dsregcmd.md).
+---
+
+### Q: Why can't I add more than three Azure AD user accounts under the same user session on a Windows 10 device?
+
+**A**: Azure AD added support for multiple Azure AD accounts in the Windows 10 1803 release. However, Windows 10 restricts the number of Azure AD accounts on a device to three, to limit the size of token requests and enable reliable single sign-on (SSO). After three accounts have been added, users see an error for subsequent accounts. The additional problem information on the error screen states the reason: "Add account operation is blocked because account limit is reached".
+ --- ## Azure AD join FAQ
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/reference-connect-dirsync-deprecated https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-dirsync-deprecated.md
@@ -21,7 +21,7 @@ ms.collection: M365-identity-device-management
--- # Upgrade Windows Azure Active Directory Sync and Azure Active Directory Sync
-Azure AD Connect is the best way to connect your on-premises directory with Azure AD and Microsoft 365. This is a great time to upgrade to Azure AD Connect from Windows Azure Active Directory Sync (DirSync) or Azure AD Sync as these tools are now deprecated and are no longer supported as of April 13, 2017.
+Azure AD Connect is the best way to connect your on-premises directory with Azure AD and Microsoft 365. This is a great time to upgrade to Azure AD Connect from Windows Azure Active Directory Sync (DirSync) or Azure AD Sync (AADSync) as these tools are now deprecated and are no longer supported as of April 13, 2017.
The two identity synchronization tools that are deprecated were offered for single forest customers (DirSync) and for multi-forest and other advanced customers (Azure AD Sync). These older tools have been replaced with a single solution that is available for all scenarios: Azure AD Connect. It offers new functionality, feature enhancements, and support for new scenarios. To be able to continue to synchronize your on-premises identity data to Azure AD and Microsoft 365, we strongly recommend that you upgrade to Azure AD Connect. Microsoft does not guarantee these older versions to work after December 31, 2017.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/troubleshooting-identity-protection-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/troubleshooting-identity-protection-faq.md
@@ -28,7 +28,7 @@ There is a current known issue causing latency in the user risk dismissal flow.
## Frequently asked questions
-### Why is a user is at risk?
+### Why is a user at risk?
If you are an Azure AD Identity Protection customer, go to the [risky users](howto-identity-protection-investigate-risk.md#risky-users) view and click on an at-risk user. In the drawer at the bottom, the 'Risk history' tab shows all the events that led to a user risk change. To see all risky sign-ins for the user, click 'User's risky sign-ins'. To see all risk detections for this user, click 'User's risk detections'.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-connectors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-connectors.md
@@ -20,7 +20,7 @@ Connectors are what make Azure AD Application Proxy possible. They're simple, ea
## What is an Application Proxy connector?
-Connectors are lightweight agents that sit on-premises and facilitate the outbound connection to the Application Proxy service. Connectors must be installed on a Windows Server that has access to the backend application. You can organize connectors into connector groups, with each group handling traffic to specific applications.
+Connectors are lightweight agents that sit on-premises and facilitate the outbound connection to the Application Proxy service. Connectors must be installed on a Windows Server that has access to the backend application. You can organize connectors into connector groups, with each group handling traffic to specific applications. For more information on Application Proxy and a diagrammatic representation of the Application Proxy architecture, see [Using Azure AD Application Proxy to publish on-premises apps for remote users](what-is-application-proxy.md#application-proxy-connectors).
## Requirements and deployment
@@ -186,4 +186,4 @@ You can examine the state of the service in the Services window. The connector i
- [Publish applications on separate networks and locations using connector groups](application-proxy-connector-groups.md) - [Work with existing on-premises proxy servers](application-proxy-configure-connectors-with-proxy-servers.md) - [Troubleshoot Application Proxy and connector errors](application-proxy-troubleshoot.md)-- [How to silently install the Azure AD Application Proxy Connector](application-proxy-register-connector-powershell.md)\ No newline at end of file
+- [How to silently install the Azure AD Application Proxy Connector](application-proxy-register-connector-powershell.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-arm.md
@@ -18,7 +18,7 @@ ms.author: barclayn
ms.collection: M365-identity-device-management ---
-# Create, list and delete a user-assigned managed identity using Azure Resource Manager
+# Create, list, and delete a user-assigned managed identity using Azure Resource Manager
Managed identities for Azure resources provide Azure services with a managed identity in Azure Active Directory. You can use this identity to authenticate to services that support Azure AD authentication, without needing credentials in your code.
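The create step this article walks through can be sketched as a minimal Resource Manager template. This is an illustrative fragment, not taken from the article itself: the `identityName` parameter is a placeholder, and the API version shown is one documented for the `Microsoft.ManagedIdentity/userAssignedIdentities` resource type.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "identityName": {
      "type": "string",
      "metadata": { "description": "Name of the user-assigned managed identity (placeholder)." }
    }
  },
  "resources": [
    {
      "type": "Microsoft.ManagedIdentity/userAssignedIdentities",
      "name": "[parameters('identityName')]",
      "apiVersion": "2018-11-30",
      "location": "[resourceGroup().location]"
    }
  ]
}
```

Deploying this template to a resource group creates the identity; listing and deleting are separate Resource Manager operations on the same resource type.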
active-directory https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-cli.md
@@ -19,7 +19,7 @@ ms.collection: M365-identity-device-management
ms.custom: devx-track-azurecli ---
-# Create, list or delete a user-assigned managed identity using the Azure CLI
+# Create, list, or delete a user-assigned managed identity using the Azure CLI
Managed identities for Azure resources provide Azure services with a managed identity in Azure Active Directory. You can use this identity to authenticate to services that support Azure AD authentication, without needing credentials in your code.
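As a sketch of the lifecycle this article covers, the operations map to three `az identity` commands. The resource group and identity names below are placeholders, and the commands assume an authenticated Azure CLI session (`az login`) with a subscription selected.

```azurecli
# Create a user-assigned managed identity (names are placeholders)
az identity create --resource-group myResourceGroup --name myUserAssignedIdentity

# List the user-assigned managed identities in the resource group
az identity list --resource-group myResourceGroup

# Delete the identity
az identity delete --resource-group myResourceGroup --name myUserAssignedIdentity
```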
active-directory https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md
@@ -18,11 +18,11 @@ ms.author: barclayn
ms.collection: M365-identity-device-management ---
-# Create, list, delete or assign a role to a user-assigned managed identity using the Azure portal
+# Create, list, delete, or assign a role to a user-assigned managed identity using the Azure portal
Managed identities for Azure resources provide Azure services with a managed identity in Azure Active Directory. You can use this identity to authenticate to services that support Azure AD authentication, without needing credentials in your code.
-In this article, you learn how to create, list, delete or assign a role to a user-assigned managed identity using the Azure portal.
+In this article, you learn how to create, list, delete, or assign a role to a user-assigned managed identity using the Azure portal.
## Prerequisites
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/github-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/github-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 08/07/2020
+ms.date: 12/24/2020
ms.author: jeedes ---
@@ -20,8 +20,6 @@ In this tutorial, you'll learn how to integrate a GitHub Enterprise Cloud **Orga
* Control in Azure AD who has access to your GitHub Enterprise Cloud Organization. * Manage access to your GitHub Enterprise Cloud Organization in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To configure Azure AD integration with a GitHub Enterprise Cloud Organization, you need the following items:
@@ -36,39 +34,39 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
* GitHub supports **SP** initiated SSO * GitHub supports [**Automated** user provisioning (organization invitations)](github-provisioning-tutorial.md)
-* Once you configure GitHub you can enforce Session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
+ ## Adding GitHub from the gallery To configure the integration of GitHub into Azure AD, you need to add GitHub from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **GitHub** in the search box. 1. Select **GitHub Enterprise Cloud - Organization** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for GitHub
+## Configure and test Azure AD SSO for GitHub
Configure and test Azure AD SSO with GitHub using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in GitHub.
-To configure and test Azure AD SSO with GitHub, complete the following building blocks:
+To configure and test Azure AD SSO with GitHub, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure GitHub SSO](#configure-github-sso)** - to configure the single sign-on settings on application side.
- * **[Create GitHub test user](#create-github-test-user)** - to have a counterpart of B.Simon in GitHub that is linked to the Azure AD representation of user.
+ 1. **[Create GitHub test user](#create-github-test-user)** - to have a counterpart of B.Simon in GitHub that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **GitHub** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **GitHub** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -99,11 +97,6 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
### Create an Azure AD test user
@@ -124,32 +117,23 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **GitHub**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select a role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you'll see the "Default Access" role selected.
![user role](./media/github-tutorial/user-role.png)
- > [!NOTE]
- > **Select a role** option will be disabled and default role is USER for selected user.
- 7. In the **Add Assignment** dialog, click the **Assign** button. ## Configure GitHub SSO 1. In a different web browser window, sign into your GitHub organization site as an administrator.
-2. Navigate to **Settings** and click **Security**
+2. Navigate to **Settings** and click **Security**.
![Screenshot that shows the GitHub "Organization settings" menu with "Security" selected.](./media/github-tutorial/security.png)
-3. Check the **Enable SAML authentication** box, revealing the Single Sign-on configuration fields. perform the following steps:
+3. Check the **Enable SAML authentication** box to reveal the single sign-on configuration fields, and then perform the following steps:
![Screenshot that shows the "S A M L single sign-on" section with "Enable S A M L authentication" with U R L text boxes highlighted.](./media/github-tutorial/saml-sso.png)
@@ -213,18 +197,14 @@ The objective of this section is to create a user called Britta Simon in GitHub.
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the GitHub tile in the Access Panel, you should be automatically signed in to the GitHub for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
-
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click **Test this application** in the Azure portal. This will redirect you to the GitHub sign-on URL, where you can initiate the login flow.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* Go to the GitHub sign-on URL directly and initiate the login flow from there.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* You can use Microsoft My Apps. When you click the GitHub tile in My Apps, you're redirected to the GitHub sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [Try GitHub with Azure AD](https://aad.portal.azure.com/)
+## Next steps
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+Once you configure GitHub, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/jamfprosamlconnector-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/jamfprosamlconnector-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 02/11/2020
+ms.date: 12/24/2020
ms.author: jeedes ---
@@ -21,7 +21,6 @@ In this tutorial, you'll learn how to integrate Jamf Pro with Azure Active Direc
* Automatically sign in your users to Jamf Pro with their Azure AD accounts. * Manage your accounts in one central location: the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [Single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites
@@ -35,13 +34,12 @@ To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * Jamf Pro supports **SP-initiated** and **IdP-initiated** SSO.
-* Once you configure Jamf Pro you can enforce Session Control, which protect exfiltration and infiltration of your organizationΓÇÖs sensitive data in real-time. Session Control extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
## Add Jamf Pro from the gallery To configure the integration of Jamf Pro into Azure AD, you need to add Jamf Pro from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) by using either a work or school account or your personal Microsoft account.
+1. Sign in to the Azure portal by using either a work or school account or your personal Microsoft account.
1. In the left pane, select the **Azure Active Directory** service. 1. Go to **Enterprise Applications**, and then select **All Applications**. 1. To add a new application, select **New application**.
@@ -65,9 +63,9 @@ In this section, you configure and test Azure AD SSO with Jamf Pro.
In this section, you enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Jamf Pro** application integration page, find the **Manage** section and select **Single Sign-On**.
+1. In the Azure portal, on the **Jamf Pro** application integration page, find the **Manage** section and select **Single Sign-On**.
1. On the **Select a Single Sign-On Method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, select the pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit the Basic SAML Configuration page.](common/edit-urls.png)
@@ -108,15 +106,9 @@ In this section, you grant B.Simon access to Jamf Pro.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Jamf Pro**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![Select Users and groups](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog box.-
- ![Select the Add User button](common/add-assign-user.png)
- 1. In the **Users and groups** dialog box, select **B.Simon** from the Users list, and then select the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog box, select the appropriate role for the user. Then, select the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you'll see the "Default Access" role selected.
1. In the **Add Assignment** dialog box, select the **Assign** button. ## Configure SSO in Jamf Pro
@@ -147,16 +139,16 @@ In this section, you grant B.Simon access to Jamf Pro.
b. Select the **Enable Single Sign-On Authentication** check box.
- c. Select **Azure** as an option from the **Identity Provider** drop-down menu.
+ c. Select **Azure** as an option from the **Identity Provider** drop-down menu.
- d. Copy the **ENTITY ID** value and paste it into the **Identifier (Entity ID)** field in the **Basic SAML Configuration** section in the Azure portal.
+ d. Copy the **ENTITY ID** value and paste it into the **Identifier (Entity ID)** field in the **Basic SAML Configuration** section in the Azure portal.
-> [!NOTE]
-> Use the value in the `<SUBDOMAIN>` field to complete the sign-on URL and reply URL in the **Basic SAML Configuration** section in the Azure portal.
+ > [!NOTE]
+ > Use the value in the `<SUBDOMAIN>` field to complete the sign-on URL and reply URL in the **Basic SAML Configuration** section in the Azure portal.
- e. Select **Metadata URL** from the **Identity Provider Metadata Source** drop-down menu. In the field that appears, paste the **App Federation Metadata Url** value that you've copied from the Azure portal.
+ e. Select **Metadata URL** from the **Identity Provider Metadata Source** drop-down menu. In the field that appears, paste the **App Federation Metadata Url** value that you've copied from the Azure portal.
- f. (Optional) Edit the token expiration value or select "Disable SAML token expiration".
+ f. (Optional) Edit the token expiration value or select "Disable SAML token expiration".
7. On the same page, scroll down to the **User Mapping** section. Then, take the following steps.
@@ -216,16 +208,21 @@ To provision a user account, take the following steps:
## Test the SSO configuration
-In this section, you test your Azure AD single sign-on configuration by using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This will redirect you to the Jamf Pro sign-on URL, where you can initiate the login flow.
+
+* Go to the Jamf Pro sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
-When you select the Jamf Pro tile in the Access Panel, you should be automatically signed in to the Jamf Pro account for which you configured SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click **Test this application** in the Azure portal. You should be automatically signed in to the Jamf Pro instance for which you set up SSO.
-## Additional resources
+You can also use Microsoft My Apps to test the application in any mode. When you click the Jamf Pro tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you should be automatically signed in to the Jamf Pro instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [Tutorials for integrating SaaS applications with Azure Active Directory ](./tutorial-list.md) -- [Single sign-on to applications in Azure Active Directory](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)-- [Try Jamf Pro with Azure AD](https://aad.portal.azure.com/)\ No newline at end of file
+Once you configure Jamf Pro, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/jira52microsoft-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/jira52microsoft-tutorial.md
@@ -9,20 +9,16 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 04/22/2019
+ms.date: 12/28/2020
ms.author: jeedes --- # Tutorial: Azure Active Directory integration with JIRA SAML SSO by Microsoft (V5.2)
-In this tutorial, you learn how to integrate JIRA SAML SSO by Microsoft (V5.2) with Azure Active Directory (Azure AD).
-Integrating JIRA SAML SSO by Microsoft (V5.2) with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate JIRA SAML SSO by Microsoft (V5.2) with Azure Active Directory (Azure AD). When you integrate JIRA SAML SSO by Microsoft (V5.2) with Azure AD, you can:
-* You can control in Azure AD who has access to JIRA SAML SSO by Microsoft (V5.2).
-* You can enable your users to be automatically signed-in to JIRA SAML SSO by Microsoft (V5.2) (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to JIRA SAML SSO by Microsoft (V5.2).
+* Enable your users to be automatically signed-in to JIRA SAML SSO by Microsoft (V5.2) with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Description
@@ -55,7 +51,7 @@ To test the steps in this tutorial, you should follow these recommendations:
* JIRA also supports 6.0 to 7.12. For more details, click [JIRA SAML SSO by Microsoft](jiramicrosoft-tutorial.md) > [!NOTE]
-> Please note that our JIRA Plugin also works on Ubuntu Version 16.04
+> Please note that our JIRA Plugin also works on Ubuntu Version 16.04.
## Scenario description
@@ -67,60 +63,36 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
To configure the integration of JIRA SAML SSO by Microsoft (V5.2) into Azure AD, you need to add JIRA SAML SSO by Microsoft (V5.2) from the gallery to your list of managed SaaS apps.
-**To add JIRA SAML SSO by Microsoft (V5.2) from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **JIRA SAML SSO by Microsoft (V5.2)**, select **JIRA SAML SSO by Microsoft (V5.2)** from result panel then click **Add** button to add the application.
-
- ![JIRA SAML SSO by Microsoft (V5.2) in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with JIRA SAML SSO by Microsoft (V5.2) based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in JIRA SAML SSO by Microsoft (V5.2) needs to be established.
-
-To configure and test Azure AD single sign-on with JIRA SAML SSO by Microsoft (V5.2), you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure JIRA SAML SSO by Microsoft (V5.2) Single Sign-On](#configure-jira-saml-sso-by-microsoft-v52-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create JIRA SAML SSO by Microsoft (V5.2) test user](#create-jira-saml-sso-by-microsoft-v52-test-user)** - to have a counterpart of Britta Simon in JIRA SAML SSO by Microsoft (V5.2) that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **JIRA SAML SSO by Microsoft (V5.2)** in the search box.
+1. Select **JIRA SAML SSO by Microsoft (V5.2)** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-### Configure Azure AD single sign-on
+## Configure and test Azure AD SSO for JIRA SAML SSO by Microsoft (V5.2)
-In this section, you enable Azure AD single sign-on in the Azure portal.
+In this section, you configure and test Azure AD single sign-on with JIRA SAML SSO by Microsoft (V5.2) based on a test user named **Britta Simon**. For single sign-on to work, you must establish a linked relationship between an Azure AD user and the related user in JIRA SAML SSO by Microsoft (V5.2).
-To configure Azure AD single sign-on with JIRA SAML SSO by Microsoft (V5.2), perform the following steps:
+To configure and test Azure AD single sign-on with JIRA SAML SSO by Microsoft (V5.2), perform the following steps:
-1. In the [Azure portal](https://portal.azure.com/), on the **JIRA SAML SSO by Microsoft (V5.2)** application integration page, select **Single sign-on**.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
+2. **[Configure JIRA SAML SSO by Microsoft (V5.2) SSO](#configure-jira-saml-sso-by-microsoft-v52-sso)** - to configure the Single Sign-On settings on application side.
+ 1. **[Create JIRA SAML SSO by Microsoft (V5.2) test user](#create-jira-saml-sso-by-microsoft-v52-test-user)** - to have a counterpart of Britta Simon in JIRA SAML SSO by Microsoft (V5.2) that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
- ![Configure single sign-on link](common/select-sso.png)
+### Configure Azure AD SSO
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. In the Azure portal, on the **JIRA SAML SSO by Microsoft (V5.2)** application integration page, find the **Manage** section and select **Single sign-on**.
+1. On the **Select a Single sign-on method** page, select **SAML**.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![JIRA SAML SSO by Microsoft (V5.2) Domain and URLs single sign-on information](common/sp-identifier-reply.png)
- a. In the **Sign-on URL** text box, type a URL using the following pattern: `https://<domain:port>/plugins/servlet/saml/auth`
@@ -137,7 +109,31 @@ To configure Azure AD single sign-on with JIRA SAML SSO by Microsoft (V5.2), per
![The Certificate download link](common/copy-metadataurl.png)
-### Configure JIRA SAML SSO by Microsoft (V5.2) Single Sign-On
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to JIRA SAML SSO by Microsoft (V5.2).
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **JIRA SAML SSO by Microsoft (V5.2)**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure JIRA SAML SSO by Microsoft (V5.2) SSO
1. In a different web browser window, sign in to your JIRA instance as an administrator.
@@ -186,57 +182,7 @@ To configure Azure AD single sign-on with JIRA SAML SSO by Microsoft (V5.2), per
i. Click the **Save** button to save the settings. > [!NOTE]
- > For more information about installation and troubleshooting, visit [MS JIRA SSO Connector Admin Guide](./ms-confluence-jira-plugin-adminguide.md) and there is also [FAQ](./ms-confluence-jira-plugin-adminguide.md) for your assistance
-
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type `brittasimon\@yourcompanydomain.extension`. For example, BrittaSimon@contoso.com.
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to JIRA SAML SSO by Microsoft (V5.2).
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **JIRA SAML SSO by Microsoft (V5.2)**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **JIRA SAML SSO by Microsoft (V5.2)**.
-
- ![The JIRA SAML SSO by Microsoft (V5.2) link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+ > For more information about installation and troubleshooting, visit the [MS JIRA SSO Connector Admin Guide](./ms-confluence-jira-plugin-adminguide.md). There is also an [FAQ](./ms-confluence-jira-plugin-adminguide.md) for your assistance.
### Create JIRA SAML SSO by Microsoft (V5.2) test user
@@ -272,16 +218,17 @@ To enable Azure AD users to sign in to JIRA on-premises server, they must be pro
e. Click **Create user**.
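As an aside, JIRA Server also exposes a REST endpoint for user creation, so the provisioning steps above could in principle be scripted. The sketch below only builds the request; the endpoint path and field names are taken from JIRA Server's REST API and should be verified against your JIRA version, and all values are placeholders:

```python
import json

def build_create_user_request(base_url, username, email, display_name):
    # Endpoint per JIRA Server's REST API (not JIRA Cloud); verify for your version.
    url = base_url.rstrip("/") + "/rest/api/2/user"
    body = json.dumps({
        "name": username,          # must match the SAML user ID Azure AD sends
        "emailAddress": email,
        "displayName": display_name,
    })
    return url, body

url, body = build_create_user_request(
    "https://jira.example.com", "B.Simon", "B.Simon@contoso.com", "B.Simon")
```

Sending this would require an authenticated `POST` (for example with `urllib.request`) as a JIRA administrator.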
-### Test single sign-on
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Click **Test this application** in the Azure portal. This will redirect to the JIRA SAML SSO by Microsoft (V5.2) Sign-on URL, where you can initiate the login flow.
-When you click the JIRA SAML SSO by Microsoft (V5.2) tile in the Access Panel, you should be automatically signed in to the JIRA SAML SSO by Microsoft (V5.2) for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Go to the JIRA SAML SSO by Microsoft (V5.2) Sign-on URL directly and initiate the login flow from there.
-## Additional resources
+* You can use Microsoft My Apps. When you click the JIRA SAML SSO by Microsoft (V5.2) tile in My Apps, you will be redirected to the JIRA SAML SSO by Microsoft (V5.2) Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md) -- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)\ No newline at end of file
+Once you configure JIRA SAML SSO by Microsoft (V5.2), you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/jiramicrosoft-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/jiramicrosoft-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 09/11/2019
+ms.date: 12/28/2020
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate JIRA SAML SSO by Microsoft with
* Enable your users to be automatically signed-in to JIRA SAML SSO by Microsoft with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Description Use your Microsoft Azure Active Directory account with Atlassian JIRA server to enable single sign-on. This way, all your organization's users can use their Azure AD credentials to sign in to the JIRA application. This plugin uses SAML 2.0 for federation.
@@ -70,18 +68,18 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of JIRA SAML SSO by Microsoft into Azure AD, you need to add JIRA SAML SSO by Microsoft from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add a new application, select **New application**. 1. In the **Add from the gallery** section, type **JIRA SAML SSO by Microsoft** in the search box. 1. Select **JIRA SAML SSO by Microsoft** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for JIRA SAML SSO by Microsoft
+## Configure and test Azure AD SSO for JIRA SAML SSO by Microsoft
Configure and test Azure AD SSO with JIRA SAML SSO by Microsoft using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in JIRA SAML SSO by Microsoft.
-To configure and test Azure AD SSO with JIRA SAML SSO by Microsoft, complete the following building blocks:
+To configure and test Azure AD SSO with JIRA SAML SSO by Microsoft, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -94,9 +92,9 @@ To configure and test Azure AD SSO with JIRA SAML SSO by Microsoft, complete the
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **JIRA SAML SSO by Microsoft** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **JIRA SAML SSO by Microsoft** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -137,15 +135,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **JIRA SAML SSO by Microsoft**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure JIRA SAML SSO by Microsoft SSO
@@ -186,37 +178,37 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
> [!TIP] > Ensure that there is only one certificate mapped against the app so that there is no error in resolving the metadata. If there are multiple certificates, upon resolving the metadata, admin gets an error.
- 1. In the **Metadata URL** textbox, paste **App Federation Metadata Url** value which you have copied from the Azure portal and click the **Resolve** button. It reads the IdP metadata URL and populates all the fields information.
+ a. In the **Metadata URL** textbox, paste the **App Federation Metadata Url** value that you copied from the Azure portal, and click the **Resolve** button. It reads the IdP metadata URL and populates all the field information.
- 1. Copy the **Identifier, Reply URL and Sign on URL** values and paste them in **Identifier, Reply URL and Sign on URL** textboxes respectively in **JIRA SAML SSO by Microsoft Domain and URLs** section on Azure portal.
+ b. Copy the **Identifier**, **Reply URL**, and **Sign on URL** values and paste them into the corresponding **Identifier**, **Reply URL**, and **Sign on URL** textboxes in the **JIRA SAML SSO by Microsoft Domain and URLs** section in the Azure portal.
- 1. In **Login Button Name** type the name of button your organization wants the users to see on login screen.
+ c. In **Login Button Name**, type the name of the button your organization wants users to see on the login screen.
- 1. In **Login Button Description** type the description of button your organization wants the users to see on login screen.
+ d. In **Login Button Description**, type the description of the button your organization wants users to see on the login screen.
- 1. In **SAML User ID Locations** select either **User ID is in the NameIdentifier element of the Subject statement** or **User ID is in an Attribute element**. This ID has to be the JIRA user ID. If the user ID is not matched, then system will not allow users to sign in.
+ e. In **SAML User ID Locations**, select either **User ID is in the NameIdentifier element of the Subject statement** or **User ID is in an Attribute element**. This ID has to be the JIRA user ID. If the user ID is not matched, the system will not allow users to sign in.
- > [!Note]
- > Default SAML User ID location is Name Identifier. You can change this to an attribute option and enter the appropriate attribute name.
+ > [!Note]
+ > Default SAML User ID location is Name Identifier. You can change this to an attribute option and enter the appropriate attribute name.
- 1. If you select **User ID is in an Attribute element** option, then in **Attribute name** textbox type the name of the attribute where User ID is expected.
+ f. If you select the **User ID is in an Attribute element** option, then in the **Attribute name** textbox, type the name of the attribute where the User ID is expected.
- 1. If you are using the federated domain (like ADFS etc.) with Azure AD, then click on the **Enable Home Realm Discovery** option and configure the **Domain Name**.
+ g. If you are using a federated domain (like AD FS) with Azure AD, select the **Enable Home Realm Discovery** option and configure the **Domain Name**.
- 1. In **Domain Name** type the domain name here in case of the ADFS-based login.
+ h. In **Domain Name**, type the domain name for AD FS-based sign-in.
- 1. Check **Enable Single Sign out** if you wish to sign out from Azure AD when a user sign out from JIRA.
+ i. Check **Enable Single Sign out** if you wish to sign out from Azure AD when a user signs out from JIRA.
- 1. Enable **Force Azure Login** checkbox, if you wish to sign in through Azure AD credentials only.
+ j. Select the **Force Azure Login** checkbox if you wish to sign in through Azure AD credentials only.
- > [!Note]
- > To enable the default login form for admin login on login page when force azure login is enabled, add the query parameter in the browser URL.
- > `https://<domain:port>/login.jsp?force_azure_login=false`
+ > [!Note]
+ > To enable the default login form for admin login on the login page when Force Azure Login is enabled, add the following query parameter to the browser URL.
+ > `https://<domain:port>/login.jsp?force_azure_login=false`
- 1. Click **Save** button to save the settings.
+ k. Click the **Save** button to save the settings.
- > [!NOTE]
- > For more information about installation and troubleshooting, visit [MS JIRA SSO Connector Admin Guide](./ms-confluence-jira-plugin-adminguide.md). There is also an [FAQ](./ms-confluence-jira-plugin-adminguide.md) for your assistance.
+ > [!NOTE]
+ > For more information about installation and troubleshooting, visit [MS JIRA SSO Connector Admin Guide](./ms-confluence-jira-plugin-adminguide.md). There is also an [FAQ](./ms-confluence-jira-plugin-adminguide.md) for your assistance.
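Conceptually, what the **Resolve** button does with the metadata URL can be sketched in Python: fetch the IdP's SAML 2.0 metadata and pull out the entity ID and the HTTP-Redirect sign-on endpoint. The sample XML and GUIDs below are placeholders, not real tenant metadata:

```python
import xml.etree.ElementTree as ET

# Illustrative SAML 2.0 metadata of the shape Azure AD publishes.
SAMPLE_METADATA = """<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    entityID="https://sts.windows.net/00000000-0000-0000-0000-000000000000/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <SingleSignOnService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        Location="https://login.microsoftonline.com/00000000-0000-0000-0000-000000000000/saml2"/>
  </IDPSSODescriptor>
</EntityDescriptor>"""

MD_NS = "{urn:oasis:names:tc:SAML:2.0:metadata}"
REDIRECT = "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"

def parse_idp_metadata(xml_text):
    """Return (entityID, HTTP-Redirect SSO URL) from SAML metadata XML."""
    root = ET.fromstring(xml_text)
    entity_id = root.attrib["entityID"]
    sso_url = None
    for svc in root.iter(MD_NS + "SingleSignOnService"):
        if svc.attrib.get("Binding") == REDIRECT:
            sso_url = svc.attrib["Location"]
    return entity_id, sso_url
```

In the real flow the plugin downloads this document from the **App Federation Metadata Url** and fills the form fields from it.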
### Create JIRA SAML SSO by Microsoft test user
@@ -254,16 +246,15 @@ To enable Azure AD users to sign in to JIRA on-premises server, they must be pro
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the JIRA SAML SSO by Microsoft tile in the Access Panel, you should be automatically signed in to the JIRA SAML SSO by Microsoft for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click **Test this application** in the Azure portal. This will redirect to the JIRA SAML SSO by Microsoft Sign-on URL, where you can initiate the login flow.
-## Additional resources
+* Go to the JIRA SAML SSO by Microsoft Sign-on URL directly and initiate the login flow from there.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the JIRA SAML SSO by Microsoft tile in My Apps, you will be redirected to the JIRA SAML SSO by Microsoft Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try JIRA SAML SSO by Microsoft with Azure AD](https://aad.portal.azure.com/)
+Once you configure JIRA SAML SSO by Microsoft, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/marketo-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/marketo-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 10/04/2020
+ms.date: 01/13/2021
ms.author: jeedes --- # Tutorial: Azure Active Directory integration with Marketo
@@ -34,6 +34,9 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
* Marketo supports **IDP** initiated SSO
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+ ## Adding Marketo from the gallery To configure the integration of Marketo into Azure AD, you need to add Marketo from the gallery to your list of managed SaaS apps.
@@ -45,7 +48,7 @@ To configure the integration of Marketo into Azure AD, you need to add Marketo f
1. In the **Add from the gallery** section, type **Marketo** in the search box. 1. Select **Marketo** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO
+## Configure and test Azure AD SSO for Marketo
In this section, you configure and test Azure AD single sign-on with Marketo based on a test user called **Britta Simon**. For single sign-on to work, a link relationship between an Azure AD user and the related user in Marketo needs to be established.
@@ -53,10 +56,10 @@ For single sign-on to work, a link relationship between an Azure AD user and the
To configure and test Azure AD single sign-on with Marketo, perform the following steps: 1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD SSO with Britta Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD SSO.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD SSO with Britta Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD SSO.
2. **[Configure Marketo SSO](#configure-marketo-sso)** - to configure the SSO settings on application side.
- * **[Create Marketo test user](#create-marketo-test-user)** - to have a counterpart of Britta Simon in Marketo that is linked to the Azure AD representation of user.
+ 1. **[Create Marketo test user](#create-marketo-test-user)** - to have a counterpart of Britta Simon in Marketo that is linked to the Azure AD representation of user.
3. **[Test SSO](#test-sso)** - to verify whether the configuration works. ### Configure Azure AD SSO
@@ -65,13 +68,13 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Marketo** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png) 1. On the **Basic SAML Configuration** section, enter the values for the following fields:
- a. In the **Identifier** text box, type a URL using the following pattern:
+ a. In the **Identifier** text box, type the URL:
`https://saml.marketo.com/sp` b. In the **Reply URL** text box, type a URL using the following pattern:
@@ -81,7 +84,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<munchkinid>.marketo.com/` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Relay State. Contact [Marketo Client support team](https://investors.marketo.com/contactus.cfm) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Reply URL and Relay State. Contact [Marketo Client support team](https://investors.marketo.com/contactus.cfm) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
@@ -117,7 +120,17 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Marketo SSO
-1. To get Munchkin ID of your application, log in to Marketo using admin credentials and perform following actions:
+1. To automate the configuration within Marketo, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+2. After adding the extension to the browser, clicking **Set up Marketo** will direct you to the Marketo application. From there, provide the admin credentials to sign in to Marketo. The browser extension will automatically configure the application for you and automate steps 3-6.
+
+ ![Setup configuration](common/setup-sso.png)
+
+3. If you want to set up Marketo manually, in a different web browser window, sign in to your Marketo company site as an administrator.
+
+1. To get the Munchkin ID of your application, perform the following actions:
a. Log in to Marketo app using admin credentials.
@@ -230,13 +243,13 @@ In this section, you create a user called Britta Simon in Marketo. follow these
8. User receives the email notification and has to click the link and change the password to activate the account.
-### Test SSO
+### Test SSO
In this section, you test your Azure AD single sign-on configuration with the following options.
-1. Click on Test this application in Azure portal and you should be automatically signed in to the Marketo for which you set up the SSO
+* Click on **Test this application** in the Azure portal. You should be automatically signed in to the Marketo for which you set up the SSO.
-1. You can use Microsoft Access Panel. When you click the Marketo tile in the Access Panel, you should be automatically signed in to the Marketo for which you set up the SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* You can use Microsoft My Apps. When you click the Marketo tile in My Apps, you should be automatically signed in to the Marketo for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
## Next steps
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/maverics-identity-orchestrator-saml-connector-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/maverics-identity-orchestrator-saml-connector-tutorial.md
@@ -163,27 +163,27 @@ You can set up an Azure key vault by using either the Azure portal or the Azure
1. Open the [Azure CLI](/cli/azure/install-azure-cli), and then enter the following command:
- ```shell
+ ```azurecli
az login ``` 1. Create a new key vault by running the following command:
- ```shell
+ ```azurecli
az keyvault create --name "[VAULT_NAME]" --resource-group "[RESOURCE_GROUP]" --location "[REGION]" ``` 1. Add the secrets to the key vault by running the following command:
- ```shell
+ ```azurecli
az keyvault secret set --vault-name "[VAULT_NAME]" --name "[SECRET_NAME]" --value "[SECRET_VALUE]" ``` 1. Register an application with Azure AD by running the following command:
- ```shell
+ ```azurecli
az ad sp create-for-rbac -n "MavericsKeyVault" --skip-assignment > azure-credentials.json ``` 1. Authorize an application to use a secret by running the following command:
- ```shell
+ ```azurecli
az keyvault set-policy --name "[VAULT_NAME]" --spn [APPID] --secret-permissions list get #APPID can be found in the azure-credentials.json generated in the previous step
@@ -239,7 +239,7 @@ Maverics Identity Orchestrator Azure AD Connector supports OpenID Connect and SA
1. Generate a JSON Web Token (JWT) signing key, which is used to protect the Maverics Identity Orchestrator session information, by using the [OpenSSL tool](https://www.openssl.org/source/):
- ```shell
+ ```console
openssl rand 64 | base64 ``` 1. Copy the response to the `jwtSigningKey` config property:
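If OpenSSL isn't at hand, the same kind of key (64 cryptographically random bytes, base64-encoded) can be produced with a short Python sketch:

```python
import base64
import secrets

def generate_jwt_signing_key(num_bytes=64):
    # Equivalent of `openssl rand 64 | base64`: CSPRNG output,
    # base64-encoded for the jwtSigningKey config property.
    return base64.b64encode(secrets.token_bytes(num_bytes)).decode("ascii")
```

Either way, the resulting string is what goes into `jwtSigningKey`.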
active-directory https://docs.microsoft.com/en-us/azure/active-directory/user-help/multi-factor-authentication-end-user-app-passwords https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/multi-factor-authentication-end-user-app-passwords.md
@@ -18,12 +18,12 @@ ms.custom: "user-help, seo-update-azuread-jan"
# Manage app passwords for two-step verification
->[!Important]
+> [!Important]
>Your administrator may not allow you to use app passwords. If you don't see **App passwords** as an option, they're not available in your organization. When using app passwords, it's important to remember: -- App passwords are auto-generated, and should be created and entered once per app.
+- App passwords are autogenerated, and should be created and entered once per app.
- There's a limit of 40 passwords per user. If you try to create one after that limit, you'll be prompted to delete an existing password before being allowed to create the new one.
@@ -60,17 +60,17 @@ You can create and delete app passwords from the **Additional security verificat
![Your app password page with the password for your specified app](media/multi-factor-authentication-end-user-app-passwords/mfa-your-app-password-page.png)
-4. From the **App passwords** page, make sure your app is listed.
+4. On the **App passwords** page, make sure your app is listed.
- ![App passwords page, with new app shown in list](media/multi-factor-authentication-end-user-app-passwords/mfa-app-passwords-page-with-new-password.png)
+ ![App passwords page, with new app shown in list](media/multi-factor-authentication-end-user-app-passwords/mfa-app-passwords-page-with-new-password.png)
5. Open the app you created the app password for (for example, Outlook 2010), and then paste the app password when asked for it. You should only have to do this once per app. ### To delete an app password using the App passwords page
-1. From the **App passwords** page, select **Delete** next to the app password you want to delete.
+1. On the **App passwords** page, select **Delete** next to the app password you want to delete.
- ![Delete an app password](media/multi-factor-authentication-end-user-app-passwords/mfa-app-passwords-page-delete.png)
+ ![Screenshot that shows deleting an app password on the App passwords page](media/multi-factor-authentication-end-user-app-passwords/mfa-app-passwords-page-delete.png)
2. Select **Yes** to confirm you want to delete the password, and then select **Close**.
@@ -82,35 +82,35 @@ If you use two-step verification with your work or school account and your Micro
### To create app passwords using the Office 365 portal
-1. Sign in to your work or school account, go to the [My account page](https://portal.office.com), select **Security & privacy**, and then expand **Additional security verification**.
+1. Sign in to your work or school account, go to the [My account page](https://myaccount.microsoft.com), and select **Security info**.
- ![Office portal showing expanded additional security verification area](media/multi-factor-authentication-end-user-app-passwords/mfa-app-passwords-o365-my-account-page.png)
+ ![Office portal showing Security info tab](media/multi-factor-authentication-end-user-app-passwords/mfa-security-info.png)
-2. Select the text that says, **Create and manage app passwords** to open the **App passwords** page.
+2. Select **Add method**, choose **App password** from the dropdown list, and then click **Add**.
- ![App passwords page, with the App passwords tab highlighted](media/multi-factor-authentication-end-user-app-passwords/mfa-app-passwords-page.png)
+ ![Security info page, with the Add a method dropdown list](media/multi-factor-authentication-end-user-app-passwords/mfa-add-method.png)
-3. Select **Create**, type the name of the app that requires the app password, and then select **Next**.
+3. Enter a name for the app password, and then select **Next**.
- ![Create app passwords page, with name of app that needs password](media/multi-factor-authentication-end-user-app-passwords/mfa-create-app-password-page.png)
+ ![Create app passwords page, with name of the app password](media/multi-factor-authentication-end-user-app-passwords/mfa-enter-app-password-name.png)
-4. Copy the password from the **Your app password** page, and then select **Close**.
+4. Copy the password from the **App password** page, and then select **Done**.
- ![Your app password page with the password for your specified app](media/multi-factor-authentication-end-user-app-passwords/mfa-your-app-password-page.png)
+ ![App password page with the new app password you created](media/multi-factor-authentication-end-user-app-passwords/mfa-copy-app-password.png)
-5. From the **App passwords** page, make sure your app is listed.
+5. On the **Security info** page, make sure your app password is listed.
- ![App passwords page, with new app shown in list](media/multi-factor-authentication-end-user-app-passwords/mfa-app-passwords-page-with-new-password.png)
+ ![Security info page, with new app password shown in list](media/multi-factor-authentication-end-user-app-passwords/mfa-verify-app-password.png)
-6. Open the app you created the app password for (for example, Outlook 2010), and then paste the app password when asked for it. You should only have to do this once per app.
+6. Open the app you created the app password for (for example, Outlook 2016), and then paste the app password when asked for it. You should only have to do this once per app.
-### To delete app passwords using the App passwords page
+### To delete app passwords using the Security info page
-1. From the **App passwords** page, select **Delete** next to the app password you want to delete.
+1. On the **Security info** page, select **Delete** next to the app password you want to delete.
- ![Delete an app password](media/multi-factor-authentication-end-user-app-passwords/mfa-app-passwords-page-delete.png)
+ ![Screenshot that shows deleting an app password on the Security info page](media/multi-factor-authentication-end-user-app-passwords/mfa-delete-app-password.png)
-2. Select **Yes** in the confirmation box, and then select **Close**.
+2. Select **Ok** in the confirmation box.
The app password is successfully deleted.
aks https://docs.microsoft.com/en-us/azure/aks/ingress-tls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-tls.md
@@ -65,7 +65,7 @@ During the installation, an Azure public IP address is created for the ingress c
To get the public IP address, use the `kubectl get service` command. It takes a few minutes for the IP address to be assigned to the service.
-```
+```console
$ kubectl --namespace ingress-basic get services -o wide -w nginx-ingress-ingress-nginx-controller

NAME   TYPE   CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
@@ -89,7 +89,7 @@ az network dns record-set a add-record \
> [!NOTE]
> Optionally, you can configure an FQDN for the ingress controller IP address instead of a custom domain. Note that this sample is for a Bash shell.
>
-> ```azurecli-interactive
+> ```bash
> # Public IP address of your ingress controller
> IP="MY_EXTERNAL_IP"
>
@@ -335,7 +335,7 @@ Next, a certificate resource must be created. The certificate resource defines t
To verify that the certificate was created successfully, use the `kubectl get certificate --namespace ingress-basic` command and verify *READY* is *True*, which may take several minutes.
-```
+```console
$ kubectl get certificate --namespace ingress-basic

NAME   READY   SECRET   AGE
@@ -368,7 +368,7 @@ kubectl delete -f cluster-issuer.yaml --namespace ingress-basic
List the Helm releases with the `helm list` command. Look for charts named *nginx* and *cert-manager*, as shown in the following example output:
-```
+```console
$ helm list --namespace ingress-basic

NAME   NAMESPACE   REVISION   UPDATED   STATUS   CHART   APP VERSION
@@ -378,7 +378,7 @@ nginx ingress-basic 1 2020-01-15 10:09:45.9826
Uninstall the releases with the `helm uninstall` command. The following example uninstalls the NGINX ingress and cert-manager deployments.
-```
+```console
$ helm uninstall cert-manager nginx --namespace ingress-basic

release "cert-manager" uninstalled
@@ -421,7 +421,7 @@ You can also:
- [Create an ingress controller that uses Let's Encrypt to automatically generate TLS certificates with a static public IP address][aks-ingress-static-tls] <!-- LINKS - external -->
-[az-network-dns-record-set-a-add-record]: /cli/azure/network/dns/record-set/a?view=azure-cli-latest#az-network-dns-record-set-a-add-record
+[az-network-dns-record-set-a-add-record]: /cli/azure/network/dns/record-set/#az-network-dns-record-set-a-add-record
[custom-domain]: ../app-service/manage-custom-dns-buy-domain.md#buy-an-app-service-domain
[dns-zone]: ../dns/dns-getstarted-cli.md
[helm]: https://helm.sh/
app-service https://docs.microsoft.com/en-us/azure/app-service/deploy-best-practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-best-practices.md
@@ -109,7 +109,7 @@ jobs:
The steps listed earlier apply to other automation utilities such as CircleCI or Travis CI. However, you need to use the Azure CLI to update the deployment slots with new image tags in the final step. To use the Azure CLI in your automation script, generate a Service Principal using the following command.
-```shell
+```azurecli
az ad sp create-for-rbac --name "myServicePrincipal" --role contributor \
  --scopes /subscriptions/{subscription}/resourceGroups/{resource-group} \
  --sdk-auth
app-service https://docs.microsoft.com/en-us/azure/app-service/overview-manage-costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-manage-costs.md new file mode 100644
@@ -0,0 +1,172 @@
+---
+title: Plan to manage costs for App Service
+description: Learn how to plan for and manage costs for Azure App Service by using cost analysis in the Azure portal.
+ms.custom: subject-cost-optimization
+ms.service: app-service
+ms.topic: how-to
+ms.date: 01/01/2021
+---
+
+# Plan and manage costs for Azure App Service
+
+<!-- Check out the following published examples:
+- [https://docs.microsoft.com/azure/cosmos-db/plan-manage-costs](https://docs.microsoft.com/azure/cosmos-db/plan-manage-costs)
+- [https://docs.microsoft.com/azure/storage/common/storage-plan-manage-costs](https://docs.microsoft.com/azure/storage/common/storage-plan-manage-costs)
+- [https://docs.microsoft.com/azure/machine-learning/concept-plan-manage-cost](https://docs.microsoft.com/azure/machine-learning/concept-plan-manage-cost)
+-->
+
+<!-- Note for Azure service writer: Links to Cost Management articles are full URLS with the ?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn campaign suffix. Leave those URLs intact. They're used to measure traffic to Cost Management articles.
+-->
+
+<!-- Note for Azure service writer: Modify the following for your service. -->
+
+This article describes how you plan for and manage costs for Azure App Service. First, use the Azure pricing calculator to estimate App Service costs before you add any resources for the service. Next, as you add Azure resources, review the estimated costs. After you've started using App Service resources, use [Cost Management](https://docs.microsoft.com/azure/cost-management-billing/) features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to find areas where you might want to act. Costs for Azure App Service are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for App Service, you're billed for all Azure services and resources used in your Azure subscription, including third-party services.
+
+## Relevant costs for App Service
+
+App Service runs on Azure infrastructure that accrues cost. It's important to understand that additional infrastructure might accrue cost. You must manage that cost when you make changes to deployed resources.
+
+### Costs that accrue with Azure App Service
+
+Depending on which feature you use in App Service, the following cost-accruing resources may be created:
+
+- **App Service plan** Required to host an App Service app.
+- **Isolated tier** A [Virtual Network](/azure/virtual-network/) is required for an App Service environment.
+- **Backup** A [Storage account](/azure/storage/) is required to make backups.
+- **Diagnostic logs** You can select [Storage account](/azure/storage/) as the logging option, or integrate with [Azure Log Analytics](../azure-monitor/log-query/log-analytics-tutorial.md).
+- **App Service certificates** Certificates you purchase in Azure must be maintained in [Azure Key Vault](/azure/key-vault/).
+
+Other cost resources for App Service are (see [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/) for details):
+
+- [App Service domains](manage-custom-dns-buy-domain.md) Your subscription is charged for the domain registration on a yearly basis, if you enable automatic renewal.
+- [App Service certificates](configure-ssl-certificate.md#import-an-app-service-certificate) One-time charge at the time of purchase. If you have multiple subdomains to secure, you can reduce cost by purchasing one wildcard certificate instead of multiple standard certificates.
+- [IP-based certificate bindings](configure-ssl-bindings.md#create-binding) The binding is configured on a certificate at the app level. Costs are accrued for each binding. For **Standard** tier and above, the first IP-based binding is not charged.
+
+### Costs that might accrue after resource deletion
+
+When you delete all apps in an App Service plan, the plan continues to accrue charges based on its configured pricing tier and number of instances. To avoid unwanted charges, delete the plan or scale it down to **Free** tier.
+
+After you delete Azure App Service resources, resources from related Azure services might continue to exist. They continue to accrue costs until you delete them. For example:
+
+- The Virtual Network that you created for an **Isolated** tier App Service plan
+- Storage accounts you created to store backups or diagnostic logs
+- Key Vault you created to store App Service certificates
+- Log Analytics workspaces you created to ship diagnostic logs
+- [Instance or stamp reservations](#azure-reservations) for App Service that haven't expired yet
+
+### Using Monetary Credit with Azure App Service
+
+You can pay for Azure App Service charges with your EA monetary commitment credit. However, you can't use EA monetary commitment credit to pay for charges for third-party products and services, including those from the Azure Marketplace.
+
+## Estimate costs
+
+An easy way to estimate and optimize your App Service cost beforehand is by using the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+
+To use the pricing calculator, click **App Service** in the **Products** tab. Then, scroll down to work with the calculator. The following screenshot is an example and doesn't reflect current pricing.
+
+![Example showing estimated cost in the Azure Pricing calculator](media/overview-manage-costs/pricing-calculator.png)
+
+### Review estimated costs in the Azure portal
+
+When you create an App Service app or an App Service plan, you can see the estimated costs.
+
+To create an app and view the estimated price:
+
+1. On the create page, scroll down to **App Service plan**, and click **Create new**.
+1. Specify a name and click **OK**.
+1. Next to **Sku and size**, click **Change size**.
+1. Review the estimated price shown in the summary. The following screenshot is an example and doesn't reflect current pricing.
+
+ ![Review estimated cost for each pricing tier in the portal](media/overview-manage-costs/pricing-estimates.png)
+
+If your Azure subscription has a spending limit, Azure prevents you from spending over your credit amount. As you create and use Azure resources, your credits are used. When you reach your credit limit, the resources that you deployed are disabled for the rest of that billing period. You can't change your credit limit, but you can remove it. For more information about spending limits, see [Azure spending limit](../billing/billing-spending-limit.md).
+
+## Optimize costs
+
+At a basic level, App Service apps are charged by the App Service plan that hosts them. The costs associated with your App Service deployment depend on a few main factors:
+
+- **Pricing tier** Otherwise known as the SKU of the App Service plan. Higher tiers provide more CPU cores, memory, storage, or features, or combinations of them.
+- **Instance count** Dedicated tiers (**Basic** and above) can be scaled out, and each scaled-out instance accrues costs.
+- **Stamp fee** In the Isolated tier, a flat fee is accrued on your App Service environment, regardless of how many apps or worker instances are hosted.
+
+An App Service plan can host more than one app. Depending on your deployment, you could save costs by hosting more apps on a single App Service plan (that is, hosting your apps on fewer App Service plans).
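To make the consolidation point concrete, here is a minimal arithmetic sketch. The per-plan rate and app count below are invented for illustration only; real prices are on the App Service pricing page.

```shell
# Hypothetical numbers: compare one plan per app vs. packing the same
# apps onto two shared plans. The rate is NOT a real Azure price.
PLAN_RATE=73   # invented monthly cost of one plan instance
APPS=10

SEPARATE=$(( APPS * PLAN_RATE ))   # one plan per app
SHARED=$(( 2 * PLAN_RATE ))        # all apps hosted on two plans
echo "One plan per app: $SEPARATE"
echo "Two shared plans: $SHARED"
```

Because apps on a plan share the plan's compute, the per-app cost drops as density rises, subject to the plan having enough CPU and memory for all its apps.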
+
+For details, see [App Service plan overview](overview-hosting-plans.md).
+
+### Non-production workloads
+
+To test App Service or your solution while accruing low or minimal cost, you can begin by using the two entry-level pricing tiers, **Free** and **Shared**, which are hosted on shared instances. To test your app on dedicated instances with better performance, you can upgrade to **Basic** tier, which supports both Windows and Linux apps.
+
+> [!NOTE]
+> **Azure Dev/Test Pricing** To test pre-production workloads that require higher tiers (all tiers except for **Isolated**), Visual Studio subscribers can also take advantage of the [Azure Dev/Test Pricing](https://azure.microsoft.com/pricing/dev-test/), which offers significant discounts.
+>
+> Neither the **Free** and **Shared** tiers nor the Azure Dev/Test Pricing discounts carry a financially backed SLA.
+
+### Production workloads
+
+For production workloads, the dedicated **Standard** pricing tier or above is recommended. While the price goes up for higher tiers, they also give you more memory and storage and higher-performing hardware, providing higher app density per compute instance. That translates to a lower instance count for the same number of apps, and therefore lower cost. In fact, **Premium V3** (the highest non-**Isolated** tier) is the most cost-effective way to serve your app at scale. To add to the savings, you can get deep discounts on [Premium V3 reservations](#azure-reservations).
+
+> [!NOTE]
+> **Premium V3** supports both Windows containers and Linux containers.
+
+Once you choose the pricing tier you want, minimize idle instances. In a scale-out deployment, you can waste money on underutilized compute instances. [Configure autoscaling](../azure-monitor/platform/autoscale-get-started.md), which is available in the **Standard** tier and above. By creating scale-out schedules, as well as metric-based scale-out rules, you pay only for the instances you really need at any given time.
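The metric-based scale rule described above can be sketched as a toy decision (the thresholds, bounds, and CPU sample here are invented; real autoscaling is configured through Azure Monitor, not a script):

```shell
# Toy simulation of a metric-based autoscale rule, NOT a real config:
# scale out when CPU > 70%, scale in when CPU < 30%, within min/max bounds.
MIN=1
MAX=5
INSTANCES=2
CPU=82   # pretend metric sample (percent)

if [ "$CPU" -gt 70 ] && [ "$INSTANCES" -lt "$MAX" ]; then
  INSTANCES=$(( INSTANCES + 1 ))   # add an instance under load
elif [ "$CPU" -lt 30 ] && [ "$INSTANCES" -gt "$MIN" ]; then
  INSTANCES=$(( INSTANCES - 1 ))   # remove an idle instance
fi
echo "$INSTANCES"
```

The billing consequence is the point: every instance the rule keeps running accrues cost, so tight scale-in rules directly reduce the bill.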
+
+### Azure Reservations
+
+If you plan to utilize a known minimum number of compute instances for one year or more, you should take advantage of **Premium V3** tier and drive down the instance cost drastically by reserving those instances in 1-year or 3-year increments. The monthly cost savings can be as much as 55% per instance. Two types of reservations are possible:
+
+- **Windows (or platform agnostic)** Can apply to Windows or Linux instances in your subscription.
+- **Linux specific** Applies only to Linux instances in your subscription.
+
+The reserved instance pricing applies to the applicable instances in your subscription, up to the number of instances that you reserve. The reserved instances are a billing matter and are not tied to specific compute instances. If you run fewer instances than you reserve at any point during the reservation period, you still pay for the reserved instances. If you run more instances than you reserve at any point during the reservation period, you pay the normal accrued cost for the additional instances.
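The billing mechanics described above can be illustrated with a small arithmetic sketch. All rates and counts below are invented for the example (the roughly 55% discount mirrors the figure quoted above, but real prices vary):

```shell
# Hypothetical illustration of reserved-instance billing. Rates are
# invented; see the App Service pricing page for real numbers.
PAYG_RATE=100      # pay-as-you-go cost per instance per month (hypothetical)
RESERVED_RATE=45   # reserved cost per instance per month (~55% off, hypothetical)
RESERVED_COUNT=4   # instances reserved
RUNNING_COUNT=6    # instances actually running this month

# Reserved instances bill at the reserved rate whether used or not;
# instances beyond the reservation bill at the normal rate.
if [ "$RUNNING_COUNT" -gt "$RESERVED_COUNT" ]; then
  EXTRA=$(( RUNNING_COUNT - RESERVED_COUNT ))
else
  EXTRA=0
fi
TOTAL=$(( RESERVED_COUNT * RESERVED_RATE + EXTRA * PAYG_RATE ))
echo "Monthly cost: $TOTAL"
```

Note that if `RUNNING_COUNT` were below `RESERVED_COUNT`, the reserved portion would still be billed in full, which is why reservations only pay off for a known minimum footprint.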
+
+The **Isolated** tier (App Service environment) also supports 1-year and 3-year reservations at reduced pricing. For more information, see [How reservation discounts apply to Azure App Service Isolated Stamps](../cost-management-billing/reservations/reservation-discount-app-service-isolated-stamp.md).
+
+## Monitor costs
+
+As you use Azure resources with App Service, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days). As soon as App Service use starts, costs are incurred and you can see the costs in [cost analysis](https://docs.microsoft.com/azure/cost-management/quick-acm-cost-analysis?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+When you use cost analysis, you view App Service costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded.
+
+To view App Service costs in cost analysis:
+
+1. Sign in to the Azure portal.
+2. Open the scope in the Azure portal and select **Cost analysis** in the menu. For example, go to **Subscriptions**, select a subscription from the list, and then select **Cost analysis** in the menu. Select **Scope** to switch to a different scope in cost analysis.
+3. By default, costs for all services are shown in the first donut chart. Select the area of the chart labeled App Service.
+
+Actual monthly costs are shown when you initially open cost analysis. Here's an example showing all monthly usage costs.
+
+![Example showing accumulated costs for a subscription](media/overview-manage-costs/all-costs.png)
+
+To narrow costs for a single service, like App Service, select **Add filter** and then select **Service name**. Then, select **App Service**.
+
+Here's an example showing costs for just App Service.
+
+![Example showing accumulated costs for ServiceName](media/overview-manage-costs/service-specific-costs.png)
+
+In the preceding example, you see the current cost for the service. Costs by Azure regions (locations) and App Service costs by resource group are also shown. From here, you can explore costs on your own.
+
+## Create budgets
+
+<!-- Note to Azure service writer: Modify the following as needed for your service. -->
+
+You can create [budgets](https://docs.microsoft.com/azure/cost-management/tutorial-acm-create-budgets?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](https://docs.microsoft.com/azure/cost-management/cost-mgt-alerts-monitor-usage-spending?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
+
+Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you extra money. For more information about the filter options available when you create a budget, see [Group and filter options](https://docs.microsoft.com/azure/cost-management-billing/costs/group-filter?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+## Export cost data
+
+You can also [export your cost data](https://docs.microsoft.com/azure/cost-management-billing/costs/tutorial-export-acm-data?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you or others need to do more data analysis for costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
+
+## Next steps
+
+- Learn more about how pricing works with Azure App Service. See [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/).
+- Learn [how to optimize your cloud investment with Azure Cost Management](https://docs.microsoft.com/azure/cost-management-billing/costs/cost-mgt-best-practices?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn more about managing costs with [cost analysis](https://docs.microsoft.com/azure/cost-management-billing/costs/quick-acm-cost-analysis?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn about how to [prevent unexpected costs](https://docs.microsoft.com/azure/cost-management-billing/manage/getting-started?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Take the [Cost Management](https://docs.microsoft.com/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+
+<!-- Insert links to other articles that might help users save and manage costs for you service here.
+
+Create a table of contents entry for the article in the How-to guides section where appropriate. -->
\ No newline at end of file
app-service https://docs.microsoft.com/en-us/azure/app-service/quickstart-html https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-html.md
@@ -58,7 +58,7 @@ The `az webapp up` command does the following actions:
This command may take a few minutes to run. While running, it displays information similar to the following example:
-<pre>
+```output
{
  "app_url": "https://&lt;app_name&gt;.azurewebsites.net",
  "location": "westeurope",
@@ -70,7 +70,7 @@ This command may take a few minutes to run. While running, it displays informati
  "src_path": "/home/&lt;username&gt;/quickstart/html-docs-hello-world ",
  &lt; JSON data removed for brevity. &gt;
}
-</pre>
+```
Make a note of the `resourceGroup` value. You need it for the [clean up resources](#clean-up-resources) section.
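As a hedged aside, the `resourceGroup` value noted above can also be captured programmatically. This sketch hard-codes a sample JSON string modeled on the output shown earlier (the field names and resource group name here are illustrative, and the real `az webapp up` output may differ in shape):

```shell
# Hypothetical sketch: pull the resourceGroup value out of JSON output
# using only sed. The sample JSON below is invented for the example.
json='{ "app_url": "https://myapp.azurewebsites.net", "resourceGroup": "appsvc_rg_demo" }'
RG=$(printf '%s' "$json" | sed -n 's/.*"resourceGroup": "\([^"]*\)".*/\1/p')
echo "$RG"
```

In practice a JSON-aware tool such as `jq`, or the Azure CLI's own `--query` JMESPath option, is more robust than pattern matching.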
app-service https://docs.microsoft.com/en-us/azure/app-service/tutorial-java-spring-cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-java-spring-cosmosdb.md
@@ -51,16 +51,16 @@ yes | cp -rf .prep/* .
Follow these steps to create an Azure Cosmos DB database in your subscription. The TODO list app will connect to this database and store its data when running, persisting the application state no matter where you run the application.
-1. Login your Azure CLI, and optionally set your subscription if you have more than one connected to your login credentials.
+1. Login to your Azure CLI, and optionally set your subscription if you have more than one connected to your login credentials.
- ```bash
+ ```azurecli
az login
az account set -s <your-subscription-id>
```

2. Create an Azure Resource Group, noting the resource group name.
- ```bash
+ ```azurecli
az group create -n <your-azure-group-name> \
    -l <your-resource-group-region>
```
@@ -68,7 +68,7 @@ Follow these steps to create an Azure Cosmos DB database in your subscription. T
3. Create Azure Cosmos DB with the `GlobalDocumentDB` kind. The name of Cosmos DB must use only lower case letters. Note down the `documentEndpoint` field in the response from the command.
- ```bash
+ ```azurecli
az cosmosdb create --kind GlobalDocumentDB \
    -g <your-azure-group-name> \
    -n <your-azure-COSMOS-DB-name-in-lower-case-letters>
@@ -76,7 +76,7 @@ The name of Cosmos DB must use only lower case letters. Note down the `documentE
4. Get your Azure Cosmos DB key to connect to the app. Keep the `primaryMasterKey`, `documentEndpoint` nearby as you'll need them in the next step.
- ```bash
+ ```azurecli
az cosmosdb list-keys -g <your-azure-group-name> -n <your-azure-COSMOSDB-name>
```
@@ -144,7 +144,7 @@ mvn package spring-boot:run
The output should look like the following.
-```bash
+```output
bash-3.2$ mvn package spring-boot:run
[INFO] Scanning for projects...
[INFO]
@@ -289,7 +289,7 @@ You should see the app running with the remote URL in the address bar:
Scale out the application by adding another worker:
-```bash
+```azurecli
az appservice plan update --number-of-workers 2 \
    --name ${WEBAPP_PLAN_NAME} \
    --resource-group <your-azure-group-name>
@@ -299,7 +299,7 @@ az appservice plan update --number-of-workers 2 \
If you don't need these resources for another tutorial (see [Next steps](#next)), you can delete them by running the following command in the Cloud Shell:    
-```bash
+```azurecli
az group delete --name <your-azure-group-name>
```
app-service https://docs.microsoft.com/en-us/azure/app-service/tutorial-troubleshoot-monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-troubleshoot-monitor.md
@@ -166,11 +166,11 @@ where ResultDescription contains "error"
In the `ResultDescription` column, you'll see the following error:
-<pre>
+```output
PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 16384 bytes) in /home/site/wwwroot/process.php on line 20, referer: http://<app-name>.azurewebsites.net/
-</pre>
+```
### Join AppServiceHTTPLogs and AppServiceConsoleLogs
@@ -196,11 +196,11 @@ myHttp | join myConsole on TimeGen | project TimeGen, CsUriStem, ScStatus, Resul
In the `ResultDescription` column, you'll see the following error at the same time as web server errors:
-<pre>
+```output
PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 16384 bytes) in /home/site/wwwroot/process.php on line 20, referer: http://<app-name>.azurewebsites.net/
-</pre>
+```
The message states memory has been exhausted on line 20 of `process.php`. You've now confirmed that the application produced an error during the HTTP 500 error. Let's take a look at the code to identify the problem.
application-gateway https://docs.microsoft.com/en-us/azure/application-gateway/key-vault-certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/key-vault-certs.md
@@ -38,7 +38,7 @@ Application Gateway integration with Key Vault requires a three-step configurati
1. **Create a user-assigned managed identity**
- You create or reuse an existing user-assigned managed identity, which Application Gateway uses to retrieve certificates from Key Vault on your behalf. For more information, see [Create, list, delete or assign a role to a user-assigned managed identity using the Azure portal](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md). This step creates a new identity in the Azure Active Directory tenant. The identity is trusted by the subscription that's used to create the identity.
+ You create or reuse an existing user-assigned managed identity, which Application Gateway uses to retrieve certificates from Key Vault on your behalf. For more information, see [Create, list, delete, or assign a role to a user-assigned managed identity using the Azure portal](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md). This step creates a new identity in the Azure Active Directory tenant. The identity is trusted by the subscription that's used to create the identity.
1. **Configure your key vault**
attestation https://docs.microsoft.com/en-us/azure/attestation/basic-concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/basic-concepts.md
@@ -33,11 +33,16 @@ Azure Attestation provides a default provider in each region. Customers can choo
| Region | Attest Uri |
|--|--|
+| East US | `https://sharedeus.eus.attest.azure.net` |
+| West US | `https://sharedwus.wus.attest.azure.net` |
| UK South | `https://shareduks.uks.attest.azure.net` |
+| UK West | `https://sharedukw.ukw.attest.azure.net` |
+| Canada East | `https://sharedcae.cae.attest.azure.net` |
+| Canada Central | `https://sharedcac.cac.attest.azure.net` |
+| North Europe | `https://sharedneu.neu.attest.azure.net` |
+| West Europe| `https://sharedweu.weu.attest.azure.net` |
| US East 2 | `https://sharedeus2.eus2.attest.azure.net` |
| Central US | `https://sharedcus.cus.attest.azure.net` |
-| East US| `https://sharedeus.eus.attest.azure.net` |
-| Canada Central | `https://sharedcac.cac.attest.azure.net` |
## Attestation request
automation https://docs.microsoft.com/en-us/azure/automation/troubleshoot/update-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/update-management.md
@@ -420,7 +420,7 @@ This error can occur for one of the following reasons:
When applicable, use [dynamic groups](../update-management/configure-groups.md) for your update deployments. In addition, you can take the following steps.
-1. Verify that your machine or server meets the [requirements](../update-management/overview.md#client-requirements).
+1. Verify that your machine or server meets the [requirements](../update-management/overview.md#system-requirements).
2. Verify connectivity to the Hybrid Runbook Worker using the Hybrid Runbook Worker agent troubleshooter. To learn more about the troubleshooter, see [Troubleshoot update agent issues](update-agent-issues.md).

## <a name="updates-nodeployment"></a>Scenario: Updates are installed without a deployment
automation https://docs.microsoft.com/en-us/azure/automation/update-management/enable-from-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/enable-from-template.md
@@ -310,7 +310,7 @@ If you're new to Azure Automation and Azure Monitor, it's important that you und
**Azure CLI**
- ```cli
+ ```azurecli
az deployment group create --resource-group <my-resource-group> --name <my-deployment-name> --template-file deployUMSolutiontemplate.json
```
automation https://docs.microsoft.com/en-us/azure/automation/update-management/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/overview.md
@@ -3,7 +3,7 @@ title: Azure Automation Update Management overview
description: This article provides an overview of the Update Management feature that implements updates for your Windows and Linux machines. services: automation ms.subservice: update-management
-ms.date: 12/09/2020
+ms.date: 01/13/2021
ms.topic: conceptual
---

# Update Management overview
@@ -59,16 +59,16 @@ Having a machine registered for Update Management in more than one Log Analytics
## Clients
-### Supported client types
+### Supported operating systems
-The following table lists the supported operating systems for update assessments and patching. Patching requires a Hybrid Runbook Worker, which is automatically installed when you enable the virtual machine or server for management by Update Management. For information on Hybrid Runbook Worker system requirements, see [Deploy a Windows Hybrid Runbook Worker](../automation-windows-hrw-install.md) and a [Deploy a Linux Hybrid Runbook Worker](../automation-linux-hrw-install.md).
+The following table lists the supported operating systems for update assessments and patching. Patching requires a system Hybrid Runbook Worker, which is automatically installed when you enable the virtual machine or server for management by Update Management. For information on Hybrid Runbook Worker system requirements, see [Deploy a Windows Hybrid Runbook Worker](../automation-windows-hrw-install.md) and [Deploy a Linux Hybrid Runbook Worker](../automation-linux-hrw-install.md).
> [!NOTE] > Update assessment of Linux machines is only supported in certain regions as listed in the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings). |Operating system |Notes | |---------|---------|
-|Windows Server 2019 (Datacenter/Datacenter Core/Standard)<br><br>Windows Server 2016 (Datacenter/Datacenter Core/Standard)<br><br>Windows Server 2012 R2(Datacenter/Standard)<br><br>Windows Server 2012 ||
+|Windows Server 2019 (Datacenter/Datacenter Core/Standard)<br>Windows Server 2016 (Datacenter/Datacenter Core/Standard)<br>Windows Server 2012 R2 (Datacenter/Standard)<br>Windows Server 2012 ||
|Windows Server 2008 R2 (RTM and SP1 Standard)| Update Management supports assessments and patching for this operating system. The [Hybrid Runbook Worker](../automation-windows-hrw-install.md) is supported for Windows Server 2008 R2. | |CentOS 6 and 7 (x64) | Linux agents require access to an update repository. Classification-based patching requires `yum` to return security data that CentOS doesn't have in its RTM releases. For more information on classification-based patching on CentOS, see [Update classifications on Linux](view-update-assessments.md#linux). | |Red Hat Enterprise 6 and 7 (x64) | Linux agents require access to an update repository. |
@@ -78,9 +78,9 @@ The following table lists the supported operating systems for update assessments
> [!NOTE] > Azure virtual machine scale sets can be managed through Update Management. Update Management works on the instances themselves and not on the base image. You'll need to schedule the updates in an incremental way, so that not all the VM instances are updated at once. You can add nodes for virtual machine scale sets by following the steps under [Add a non-Azure machine to Change Tracking and Inventory](../automation-tutorial-installed-software.md#add-a-non-azure-machine-to-change-tracking-and-inventory).
-### Unsupported client types
+### Unsupported operating systems
-The following table lists unsupported operating systems:
+The following table lists operating systems not supported by Update Management:
|Operating system |Notes | |---------|---------|
@@ -88,15 +88,20 @@ The following table lists unsupported operating systems:
|Windows Server 2016 Nano Server | Not supported. | |Azure Kubernetes Service Nodes | Not supported. Use the patching process described in [Apply security and kernel updates to Linux nodes in Azure Kubernetes Service (AKS)](../../aks/node-updates-kured.md)|
-### Client requirements
+### System requirements
-The following information describes operating system-specific client requirements. For additional guidance, see [Network planning](#ports). To understand client requirements for TLS 1.2, see [TLS 1.2 enforcement for Azure Automation](../automation-managing-data.md#tls-12-enforcement-for-azure-automation).
+The following information describes operating system-specific requirements. For additional guidance, see [Network planning](#ports). To understand requirements for TLS 1.2, see [TLS 1.2 enforcement for Azure Automation](../automation-managing-data.md#tls-12-enforcement-for-azure-automation).
#### Windows
+Software requirements:
+
+- .NET Framework 4.6 or later is required ([Download the .NET Framework](/dotnet/framework/install/guide-for-developers)).
+- Windows PowerShell 5.1 is required ([Download Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616)).
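These two prerequisites can be verified locally with a short PowerShell check (a sketch, not part of the official guidance; the registry-based .NET Framework detection and the 4.6 release value `393295` are the standard documented mechanism, but verify the thresholds for your OS):

```powershell
# Windows PowerShell version; Major 5, Minor 1 or later is required
$PSVersionTable.PSVersion

# .NET Framework version; a Release value of 393295 or higher
# indicates .NET Framework 4.6 or later is installed
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -Name Release).Release
```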
+ Windows agents must be configured to communicate with a WSUS server, or they require access to Microsoft Update. For hybrid machines, we recommend installing the Log Analytics agent for Windows by first connecting your machine to [Azure Arc enabled servers](../../azure-arc/servers/overview.md), and then using Azure Policy to assign the [Deploy Log Analytics agent to Windows Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
-You can use Update Management with Microsoft Endpoint Configuration Manager. To learn more about integration scenarios, see [Integrate Update Management with Windows Endpoint Configuration Manager](mecmintegration.md). The [Log Analytics agent for Windows](../../azure-monitor/platform/agent-windows.md) is required for Windows servers managed by sites in your Configuration Manager environment.
+You can use Update Management with Microsoft Endpoint Configuration Manager. To learn more about integration scenarios, see [Integrate Update Management with Windows Endpoint Configuration Manager](mecmintegration.md). The [Log Analytics agent for Windows](../../azure-monitor/platform/agent-windows.md) is required for Windows servers managed by sites in your Configuration Manager environment.
By default, Windows VMs that are deployed from Azure Marketplace are set to receive automatic updates from Windows Update Service. This behavior doesn't change when you add Windows VMs to your workspace. If you don't actively manage updates by using Update Management, the default behavior (to automatically apply updates) applies.
@@ -105,7 +110,11 @@ By default, Windows VMs that are deployed from Azure Marketplace are set to rece
#### Linux
-For Linux, the machine requires access to an update repository, either private or public. TLS 1.1 or TLS 1.2 is required to interact with Update Management. Update Management doesn't support a Log Analytics agent for Linux that's configured to report to more than one Log Analytics workspace. The machine must also have Python 2.x installed.
+Software requirements:
+
+- The machine requires access to an update repository, either private or public.
+- TLS 1.1 or TLS 1.2 is required to interact with Update Management.
+- Python 2.x is required.
> [!NOTE] > Update assessment of Linux machines is only supported in certain regions. See the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings).
@@ -124,11 +133,11 @@ Update Management uses the resources described in this section. These resources
### Hybrid Runbook Worker groups
-After you enable Update Management, any Windows machine that's directly connected to your Log Analytics workspace is automatically configured as a Hybrid Runbook Worker to support the runbooks that support Update Management.
+After you enable Update Management, any Windows machine that's directly connected to your Log Analytics workspace is automatically configured as a system Hybrid Runbook Worker to support the runbooks that support Update Management.
Each Windows machine that's managed by Update Management is listed in the Hybrid worker groups pane as a System hybrid worker group for the Automation account. The groups use the `Hostname FQDN_GUID` naming convention. You can't target these groups with runbooks in your account. If you try, the attempt fails. These groups are intended to support only Update Management. To learn more about viewing the list of Windows machines configured as a Hybrid Runbook Worker, see [view Hybrid Runbook Workers](../automation-hybrid-runbook-worker.md#view-system-hybrid-runbook-workers).
-You can add the Windows machine to a Hybrid Runbook Worker group in your Automation account to support Automation runbooks if you use the same account for Update Management and the Hybrid Runbook Worker group membership. This functionality was added in version 7.2.12024.0 of the Hybrid Runbook Worker.
+You can add the Windows machine to a user Hybrid Runbook Worker group in your Automation account to support Automation runbooks if you use the same account for Update Management and the Hybrid Runbook Worker group membership. This functionality was added in version 7.2.12024.0 of the Hybrid Runbook Worker.
### Management packs
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/howto-integrate-azure-managed-service-identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md
@@ -76,7 +76,7 @@ To set up a managed identity in the portal, you first create an application and
1. Add a reference to the *Azure.Identity* package:
- ```cli
+ ```bash
dotnet add package Azure.Identity ```
@@ -207,7 +207,7 @@ git add .
git commit -m "Initial version" ```
-To enable local Git deployment for your app with the Kudu build server, run [`az webapp deployment source config-local-git`](/cli/azure/webapp/deployment/source?view=azure-cli-latest#az-webapp-deployment-source-config-local-git) in Cloud Shell.
+To enable local Git deployment for your app with the Kudu build server, run [`az webapp deployment source config-local-git`](/cli/azure/webapp/deployment/#az-webapp-deployment-source-config-local-git) in Cloud Shell.
```azurecli-interactive az webapp deployment source config-local-git --name <app_name> --resource-group <group_name>
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/connect-cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/connect-cluster.md
@@ -31,19 +31,19 @@ Verify you have the following requirements ready:
Install the `connectedk8s` extension, which helps you connect Kubernetes clusters to Azure:
- ```console
+ ```azurecli
az extension add --name connectedk8s ``` Install the `k8sconfiguration` extension:
- ```console
+ ```azurecli
az extension add --name k8sconfiguration ``` If you want to update these extensions later, run the following commands:
- ```console
+ ```azurecli
az extension update --name connectedk8s az extension update --name k8sconfiguration ```
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/create-onboarding-service-principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/create-onboarding-service-principal.md
@@ -78,7 +78,7 @@ az role assignment create \
Reference the newly created Service Principal:
-```console
+```azurecli
az login --service-principal -u mySpnClientId -p mySpnClientSecret --tenant myTenantID az connectedk8s connect -n myConnectedClusterName -g myResoureGroupName ```
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-go-get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-go-get-started.md
@@ -59,7 +59,9 @@ if err != nil {
If the connection is successful, [HTTP handlers](https://golang.org/pkg/net/http/#HandleFunc) are configured to handle `POST` and `GET` operations and the HTTP server is started.
+> [!NOTE]
> The [gorilla mux library](https://github.com/gorilla/mux) is used for routing (although it's not strictly necessary and we could have gotten away with using the standard library for this sample application).
+>
```go uh := userHandler{client: client}
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/change-analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/change-analysis.md
@@ -192,6 +192,28 @@ If it's the first time you view Change history after its integration with Applic
- **Failed to query Microsoft.ChangeAnalysis resource provider** with message *Azure lighthouse subscription is not supported, the changes are only available in the subscription's home tenant*. There is a limitation right now for Change Analysis resource provider to be registered through Azure Lighthouse subscription for users not in home tenant. We expect this limitation to be addressed in the near future. If this is a blocking issue for you, there is a workaround that involves creating a service principal and explicitly assigning the role to allow the access. Contact changeanalysishelp@microsoft.com to learn more about it.
+### An error occurred while getting changes. Please refresh this page or come back later to view changes
+
+This is the general error message presented by the Application Change Analysis service when changes can't be loaded. A few known causes are:
+
+- Internet connectivity error from the client device
+- Change Analysis service being temporarily unavailable
+
+Refreshing the page after a few minutes usually fixes this issue. If the error persists, contact changeanalysishelp@microsoft.com.
+
+### You don't have enough permissions to view some changes. Contact your Azure subscription administrator
+
+This is the general unauthorized error message, explaining that the current user does not have sufficient permissions to view the change. At least the Reader role is required on the resource to view infrastructure changes returned by Azure Resource Graph and Azure Resource Manager. For web app in-guest file changes and configuration changes, at least the Contributor role is required.
+
+### Failed to register Microsoft.ChangeAnalysis resource provider
+
+**You don't have enough permissions to register Microsoft.ChangeAnalysis resource provider. Contact your Azure subscription administrator.** This error message means your role in the current subscription does not have the **Microsoft.Support/register/action** scope associated with it. This might happen if you are not the owner of the subscription and got shared access permissions through a coworker, for example, view access to a resource group. To fix this, contact the owner of your subscription to register the **Microsoft.ChangeAnalysis** resource provider. This can be done in the Azure portal through **Subscriptions** > **Resource providers**: search for `Microsoft.ChangeAnalysis` and register it in the UI, or use Azure PowerShell or the Azure CLI.
+
+Register resource provider through PowerShell:
+
+```powershell
+# Register resource provider
+Register-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis"
+```
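The same registration can also be done from the Azure CLI, shown here as an equivalent sketch (`az provider register` is the standard command for registering a resource provider on the current subscription):

```azurecli
# Register the Change Analysis resource provider on the active subscription
az provider register --namespace "Microsoft.ChangeAnalysis"

# Optionally check the registration state afterwards
az provider show --namespace "Microsoft.ChangeAnalysis" --query "registrationState"
```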
+ ## Next steps - Enable Application Insights for [Azure App Services apps](azure-web-apps.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-standalone-config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-config.md
@@ -243,6 +243,35 @@ To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
} ```
+## Suppressing specific auto-collected telemetry
+
+Starting from version 3.0.1-BETA.2, specific auto-collected telemetry can be suppressed using these configuration options:
+
+```json
+{
+ "instrumentation": {
+ "cassandra": {
+ "enabled": false
+ },
+ "jdbc": {
+ "enabled": false
+ },
+ "kafka": {
+ "enabled": false
+ },
+ "micrometer": {
+ "enabled": false
+ },
+ "mongo": {
+ "enabled": false
+ },
+ "redis": {
+ "enabled": false
+ }
+ }
+}
+```
+ ## Heartbeat By default, Application Insights Java 3.0 sends a heartbeat metric once every 15 minutes. If you are using the heartbeat metric to trigger alerts, you can increase the frequency of this heartbeat:
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-standalone-telemetry-processors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-telemetry-processors.md
@@ -18,58 +18,49 @@ Java 3.0 Agent for Application Insights now has the capabilities to process tele
The following are some use cases of telemetry processors: * Mask sensitive data * Conditionally add custom dimensions
- * Update the telemetry name used for aggregation and display
- * Drop or filter span attributes to control ingestion cost
+ * Update the name that is used for aggregation and display in the Azure portal
+ * Drop span attributes to control ingestion cost
## Terminology
-Before we jump into telemetry processors, it is important to understand what are traces and spans.
+Before we jump into telemetry processors, it is important to understand what the term span refers to.
-### Traces
+A span is a general term for any of these three things:
-Traces track the progression of a single request, called a `trace`, as it is handled by services that make up an application. The request may be initiated by a user or an application. Each unit of work in a `trace` is called a `span`; a `trace` is a tree of spans. A `trace` is comprised of the single root span and any number of child spans.
+* An incoming request
+* An outgoing dependency (e.g. a remote call to another service)
+* An in-process dependency (e.g. work being done by sub-components of the service)
-### Span
+For the purpose of telemetry processors, the important components of a span are:
-Spans are objects that represent the work being done by individual services or components involved in a request as it flows through a system. A `span` contains a `span context`, which is a set of globally unique identifiers that represent the unique request that each span is a part of.
+* Name
+* Attributes
-Spans encapsulate:
+The span name is the primary display used for requests and dependencies in the Azure portal.
-* The span name
-* An immutable `SpanContext` that uniquely identifies the Span
-* A parent span in the form of a `Span`, `SpanContext`, or null
-* A `SpanKind`
-* A start timestamp
-* An end timestamp
-* [`Attributes`](#attributes)
-* A list of timestamped Events
-* A `Status`.
+The span attributes represent both standard and custom properties of a given request or dependency.
-Generally, the lifecycle of a span resembles the following:
+## Telemetry processor types
-* A request is received by a service. The span context is extracted from the request headers, if it exists.
-* A new span is created as a child of the extracted span context; if none exists, a new root span is created.
-* The service handles the request. Additional attributes and events are added to the span that are useful for understanding the context of the request, such as the hostname of the machine handling the request, or customer identifiers.
-* New spans may be created to represent work being done by sub-components of the service.
-* When the service makes a remote call to another service, the current span context is serialized and forwarded to the next service by injecting the span context into the headers or message envelope.
-* The work being done by the service completes, successfully or not. The span status is appropriately set, and the span is marked finished.
+There are currently two types of telemetry processors.
-### Attributes
+#### Attribute processor
-`Attributes` are a list of zero or more key-value pairs which are encapsulated in a `span`. An Attribute MUST have the following properties:
+An attribute processor has the ability to insert, update, delete, or hash attributes.
+It can also extract (via a regular expression) one or more new attributes from an existing attribute.
-The attribute key, which MUST be a non-null and non-empty string.
-The attribute value, which is either:
-* A primitive type: string, boolean, double precision floating point (IEEE 754-1985) or signed 64 bit integer.
-* An array of primitive type values. The array MUST be homogeneous, i.e. it MUST NOT contain values of different types. For protocols that do not natively support array values such values SHOULD be represented as JSON strings.
+#### Span processor
-## Supported processors:
- * Attribute Processor
- * Span Processor
+A span processor has the ability to update the telemetry name.
+It can also extract (via a regular expression) one or more new attributes from the span name.
-## To get started
+> [!NOTE]
+> Note that currently telemetry processors only process attributes of type string,
> and do not process attributes of type boolean or number.
+
+## Getting started
-Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-***.jar`, with the following template.
+Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-*.jar`, with the following template.
```json {
@@ -93,9 +84,15 @@ Create a configuration file named `applicationinsights.json`, and place it in th
} ```
-## Include/Exclude spans
+## Include/exclude criteria
-The attribute processor and the span processor expose the option to provide a set of properties of a span to match against, to determine if the span should be included or excluded from the telemetry processor. To configure this option, under `include` and/or `exclude` at least one `matchType` and one of `spanNames` or `attributes` is required. The include/exclude configuration is supported to have more than one specified condition. All of the specified conditions must evaluate to true for a match to occur.
+Both attribute processors and span processors support optional `include` and `exclude` criteria.
+A processor will only be applied to those spans that match its `include` criteria (if provided)
+_and_ do not match its `exclude` criteria (if provided).
+
+To configure this option, under `include` and/or `exclude` at least one `matchType` and one of `spanNames` or `attributes` is required.
+The include/exclude configuration is supported to have more than one specified condition.
+All of the specified conditions must evaluate to true for a match to occur.
**Required field**: * `matchType` controls how items in `spanNames` and `attributes` arrays are interpreted. Possible values are `regexp` or `strict`.
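A minimal sketch of the include/exclude criteria, using only the fields described above (the span names are placeholders for illustration, not values from the source):

```json
"include": {
  "matchType": "strict",
  "spanNames": [
    "spanA",
    "spanB"
  ]
},
"exclude": {
  "matchType": "regexp",
  "spanNames": [
    "span[AB]2"
  ]
}
```

With this configuration, the processor applies only to spans that match the `include` criteria and do not match the `exclude` criteria.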
@@ -145,7 +142,7 @@ The attribute processor and the span processor expose the option to provide a se
``` For more understanding, check out the [telemetry processor examples](./java-standalone-telemetry-processors-examples.md) documentation.
-## Attribute processor
+## Attribute processor
The attributes processor modifies attributes of a span. It optionally supports the ability to include/exclude spans. It takes a list of actions which are performed in order specified in the configuration file. The supported actions are:
@@ -162,7 +159,7 @@ Inserts a new attribute in spans where the key does not already exist.
"key": "attribute1", "value": "value1", "action": "insert"
- },
+ }
] } ]
@@ -185,7 +182,7 @@ Updates an attribute in spans where the key does exist
"key": "attribute1", "value": "newValue", "action": "update"
- },
+ }
] } ]
@@ -208,7 +205,7 @@ Deletes an attribute from a span
{ "key": "attribute1", "action": "delete"
- },
+ }
] } ]
@@ -229,7 +226,7 @@ Hashes (SHA1) an existing attribute value
{ "key": "attribute1", "action": "hash"
- },
+ }
] } ]
@@ -254,7 +251,7 @@ Extracts values using a regular expression rule from the input key to target key
"key": "attribute1", "pattern": "<regular pattern with named matchers>", "action": "extract"
- },
+ }
] } ]
@@ -266,7 +263,7 @@ For the `extract` action, following are required
For more understanding, check out the [telemetry processor examples](./java-standalone-telemetry-processors-examples.md) documentation.
-## Span processors
+## Span processor
The span processor modifies either the span name or attributes of a span based on the span name. It optionally supports the ability to include/exclude spans.
@@ -346,4 +343,4 @@ Following are list of some common span attributes that can be used in the teleme
| `db.connection_string` | string | The connection string used to connect to the database. It is recommended to remove embedded credentials.| | `db.user` | string | Username for accessing the database. | | `db.name` | string | This attribute is used to report the name of the database being accessed. For commands that switch the database, this should be set to the target database (even if the command fails).|
-| `db.statement` | string | The database statement being executed.|
\ No newline at end of file
+| `db.statement` | string | The database statement being executed.|
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/containers.md
@@ -3,8 +3,8 @@ title: Container Monitoring solution in Azure Monitor | Microsoft Docs
description: The Container Monitoring solution in Azure Monitor helps you view and manage your Docker and Windows container hosts in a single location. ms.subservice: logs ms.topic: conceptual
-author: mgoedtel
-ms.author: magoedte
+author: bwren
+ms.author: bwren
ms.date: 07/06/2020 ---
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-metric-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/alerts-metric-overview.md
@@ -1,7 +1,7 @@
--- title: Understand how metric alerts work in Azure Monitor. description: Get an overview of what you can do with metric alerts and how they work in Azure Monitor.
-ms.date: 01/11/2021
+ms.date: 01/13/2021
ms.topic: conceptual ms.subservice: alerts
@@ -61,6 +61,10 @@ After some time, the usage on "myVM" comes back down to normal (goes below the t
As the resolved notification is sent out via web hooks or email, the status of the alert instance (called monitor state) in Azure portal is also set to resolved.
+> [!NOTE]
+>
+> When an alert rule monitors multiple conditions, a fired alert will be resolved if at least one of the conditions is no longer met for three consecutive periods.
+ ### Using dimensions Metric alerts in Azure Monitor also support monitoring multiple dimensions value combinations with one rule. Let's understand why you might use multiple dimension combinations with the help of an example.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsm-connector-secure-webhook-connections-azure-configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsm-connector-secure-webhook-connections-azure-configuration.md
@@ -35,9 +35,9 @@ Follow these steps to register the application with Azure AD:
## Define service principal
-The Action Group service will need permission to acquire authentication tokens from your AAD application in order to authentication with Service now. To grant those permissions, you will need to create a service principal for the Action Group service in your tenant.
-You can use this [PowerShell commands](./action-groups.md#secure-webhook-powershell-script) for this purpose. (Requires tenant admin privileges).
-As an optional step you can define application role in the created app's manifest, which can allow you to further restrict, access in a way that only certain applications with that specific role can send messages. This role has to be then assigned to the Action Group service principal. \
+The Action Group service is a first-party application, so it has permission to acquire authentication tokens from your AAD application in order to authenticate with ServiceNow.
+As an optional step, you can define an application role in the created app's manifest, which allows you to further restrict access so that only applications with that specific role can send messages. This role must then be assigned to the Action Group service principal (requires tenant admin privileges).
+ This step can be done through the same [PowerShell commands](./action-groups.md#secure-webhook-powershell-script). ## Create a Secure Webhook action group
azure-portal https://docs.microsoft.com/en-us/azure/azure-portal/supportability/sku-series-unavailable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/supportability/sku-series-unavailable.md deleted file mode 100644
@@ -1,80 +0,0 @@
-title: Region or SKU series unavailable
-description: Some SKU series are unavailable for the selected subscription for this region, which may require subscription management support request.
-author: stevendotwang
-ms.topic: troubleshooting
-ms.date: 01/27/2020
-ms.author: xingwan
-
-# Region or SKU unavailable
-
-This article describes how to resolve the issue of an Azure subscription not having access to a region or a VM SKU.
-
-## Symptoms
-
-When deploying a virtual machine, you receive one of the following error messages:
-
-```
-Code: SkuNotAvailable
-Message: The requested size for resource '<resource>' is currently not available in location
-'<location>' zones '<zone>' for subscription '<subscriptionID>'. Please try another size or
-deploy to a different location or zones. See https://aka.ms/azureskunotavailable for details.
-```
-
-```
-Message: Your subscription doesn't support virtual machine creation in <location>. Choose a
-different location. Supported locations are <list of locations>
-```
-
-```
-Code: NotAvailableForSubscription
-Message: This size is currently unavailable in this location for this subscription
-```
-
-When purchasing Reserved Virtual Machine Instances, you receive one of the following error messages:
-
-```
-Message: Your subscription doesn't support virtual machine reservation in <location>. Choose a
-different location. Supported locations are: <list of locations>
-```
-
-```
-Message: This size is currently unavailable in this location for this subscription
-```
-
-When creating a support request to increase compute core quota, a region or a SKU family isn't available for selection.
-
-## Solution
-
-We first recommend that you consider an alternative region or SKU that meets your business needs.
-
-If you're unable to find a suitable region or SKU, create a **Subscription management** [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) following these steps:
-
-1. From the [Azure portal](https://portal.azure.com) menu, select **Help + support**. Then select **New support request**.
-
-1. In **Basics**, for **Issue type**, select **Subscription management**.
-
-1. Select a **Subscription** and enter a brief description in **Summary**.
-
- ![Basics tab of New support request](./media/SKU-series-unavailable/support-request-basics.png)
-
-1. For **Problem type**, choose **Select problem type**.
-
-1. For **Select problem type**, choose an option, for instance, **Unable to access my subscription or resource** > **My issue is not listed above**. Select **Save**.
-
- ![Specify a problem for the request](./media/SKU-series-unavailable/support-request-select-problem-type.png)
-
-1. Select **Next: Solutions** to explore possible solutions. If necessary, select **Next: Details** to continue.
-
-1. Enter any additional information you can provide, along with your contact information.
-
-1. Select **Review + create**. After you verify your information, select **Create** to create the request.
-
-## Send us your suggestions
-
-We're always open to feedback and suggestions! Send us your [suggestions](https://feedback.azure.com/forums/266794-support-feedback). Additionally, you can engage with us on [Twitter](https://twitter.com/azuresupport) or the [Microsoft Q&A question page](/answers/products/azure).
-
-## Learn more
-
-[Azure Support FAQ](https://azure.microsoft.com/support/faq)
\ No newline at end of file
azure-relay https://docs.microsoft.com/en-us/azure/azure-relay/relay-metrics-azure-monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/relay-metrics-azure-monitor.md
@@ -26,7 +26,7 @@ You can monitor metrics over time in the [Azure portal](https://portal.azure.com
![A page titled "Monitor - Metrics (preview)" shows a line graph of memory usage for the last 30 days.][1]
-You can also access metrics directly via the namespace. To do so, select your namespace and then click **Metrics **.
+You can also access metrics directly via the namespace. To do so, select your namespace and then click **Metrics**.
For metrics supporting dimensions, you must filter with the desired dimension value.
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-to-management-group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-management-group.md
@@ -2,7 +2,7 @@
title: Deploy resources to management group description: Describes how to deploy resources at the management group scope in an Azure Resource Manager template. ms.topic: conceptual
-ms.date: 11/24/2020
+ms.date: 01/13/2021
--- # Management group deployments with ARM templates
@@ -40,6 +40,8 @@ For managing your resources, use:
* [tags](/azure/templates/microsoft.resources/tags)
+Management groups are tenant-level resources. However, you can create management groups in a management group deployment by setting the scope of the new management group to the tenant. See [Management group](#management-group).
+ ## Schema The schema you use for management group deployments is different than the schema for resource group deployments.
@@ -118,7 +120,8 @@ When deploying to a management group, you can deploy resources to:
* subscriptions in the management group * resource groups in the management group * the tenant for the resource group
-* [extension resources](scope-extension-resources.md) can be applied to resources
+
+An [extension resource](scope-extension-resources.md) can be scoped to a target that is different than the deployment target.
The user deploying the template must have access to the specified scope.
@@ -162,10 +165,56 @@ You can use a nested deployment with `scope` and `location` set.
:::code language="json" source="~/resourcemanager-templates/azure-resource-manager/scope/management-group-to-tenant.json" highlight="9,10,14":::
-Or, you can set the scope to `/` for some resource types, like management groups.
+Or, you can set the scope to `/` for some resource types, like management groups. Creating a new management group is described in the next section.
+
+## Management group
+
+To create a management group in a management group deployment, you must set the scope to `/` for the management group.
+
+The following example creates a new management group in the root management group.
:::code language="json" source="~/resourcemanager-templates/azure-resource-manager/scope/management-group-create-mg.json" highlight="12,15":::
+The next example creates a new management group in the management group specified as the parent. Notice that the scope is set to `/`.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-08-01/managementGroupDeploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "mgName": {
+ "type": "string",
+ "defaultValue": "[concat('mg-', uniqueString(newGuid()))]"
+ },
+ "parentMG": {
+ "type": "string"
+ }
+ },
+ "resources": [
+ {
+ "name": "[parameters('mgName')]",
+ "type": "Microsoft.Management/managementGroups",
+ "apiVersion": "2020-05-01",
+ "scope": "/",
+ "location": "eastus",
+ "properties": {
+ "details": {
+ "parent": {
+ "id": "[tenantResourceId('Microsoft.Management/managementGroups', parameters('parentMG'))]"
+ }
+ }
+ }
+ }
+ ],
+ "outputs": {
+ "output": {
+ "type": "string",
+ "value": "[parameters('mgName')]"
+ }
+ }
+}
+```
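As a sketch of how the template above might be deployed with Azure PowerShell (the management group ID, location, and file name here are placeholder values, not from the original article):

```azurepowershell
# Hypothetical values; substitute your own management group, location, and template file.
New-AzManagementGroupDeployment `
  -ManagementGroupId "myManagementGroup" `
  -Location "eastus" `
  -TemplateFile "create-mg.json" `
  -parentMG "myParentMG"
```

The `-parentMG` value is passed through to the template's `parentMG` parameter.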
+ ## Azure Policy Custom policy definitions that are deployed to the management group are extensions of the management group. To get the ID of a custom policy definition, use the [extensionResourceId()](template-functions-resource.md#extensionresourceid) function. Built-in policy definitions are tenant level resources. To get the ID of a built-in policy definition, use the [tenantResourceId](template-functions-resource.md#tenantresourceid) function.
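For illustration, retrieving a custom policy definition ID with `extensionResourceId()` might look like the following assignment sketch (the definition name `locationRestriction` and the parameter `mgName` are hypothetical):

```json
{
  "type": "Microsoft.Authorization/policyAssignments",
  "apiVersion": "2019-09-01",
  "name": "locationAssignment",
  "properties": {
    "policyDefinitionId": "[extensionResourceId(tenantResourceId('Microsoft.Management/managementGroups', parameters('mgName')), 'Microsoft.Authorization/policyDefinitions', 'locationRestriction')]"
  }
}
```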
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-to-resource-group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-resource-group.md
@@ -2,7 +2,7 @@
title: Deploy resources to resource groups description: Describes how to deploy resources in an Azure Resource Manager template. It shows how to target more than one resource group. ms.topic: conceptual
-ms.date: 11/24/2020
+ms.date: 01/13/2021
--- # Resource group deployments with ARM templates
@@ -80,7 +80,8 @@ When deploying to a resource group, you can deploy resources to:
* other resource groups in the same subscription or other subscriptions * any subscription in the tenant * the tenant for the resource group
-* [extension resources](scope-extension-resources.md) can be applied to resources
+
+An [extension resource](scope-extension-resources.md) can be scoped to a target that is different than the deployment target.
The user deploying the template must have access to the specified scope.
@@ -132,6 +133,8 @@ Or, you can set the scope to `/` for some resource types, like management groups
:::code language="json" source="~/resourcemanager-templates/azure-resource-manager/scope/resource-group-create-mg.json" highlight="12,15":::
+For more information, see [Management group](deploy-to-management-group.md#management-group).
+ ## Deploy to target resource group To deploy resources in the target resource group, define those resources in the **resources** section of the template. The following template creates a storage account in the resource group that is specified in the deployment operation.
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-to-subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-subscription.md
@@ -2,7 +2,7 @@
title: Deploy resources to subscription description: Describes how to create a resource group in an Azure Resource Manager template. It also shows how to deploy resources at the Azure subscription scope. ms.topic: conceptual
-ms.date: 11/24/2020
+ms.date: 01/13/2021
--- # Subscription deployments with ARM templates
@@ -137,7 +137,8 @@ When deploying to a subscription, you can deploy resources to:
* any subscription in the tenant * resource groups within the subscription or other subscriptions * the tenant for the subscription
-* [extension resources](scope-extension-resources.md) can be applied to resources
+
+An [extension resource](scope-extension-resources.md) can be scoped to a target that is different than the deployment target.
The user deploying the template must have access to the specified scope.
@@ -177,6 +178,8 @@ Or, you can set the scope to `/` for some resource types, like management groups
:::code language="json" source="~/resourcemanager-templates/azure-resource-manager/scope/subscription-create-mg.json" highlight="12,15":::
+For more information, see [Management group](deploy-to-management-group.md#management-group).
+ ## Resource groups ### Create resource groups
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-to-tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-tenant.md
@@ -2,7 +2,7 @@
title: Deploy resources to tenant description: Describes how to deploy resources at the tenant scope in an Azure Resource Manager template. ms.topic: conceptual
-ms.date: 11/24/2020
+ms.date: 01/13/2021
--- # Tenant deployments with ARM templates
@@ -140,7 +140,8 @@ When deploying to a tenant, you can deploy resources to:
* management groups within the tenant * subscriptions * resource groups
-* [extension resources](scope-extension-resources.md) can be applied to resources
+
+An [extension resource](scope-extension-resources.md) can be scoped to a target that is different than the deployment target.
The user deploying the template must have access to the specified scope.
@@ -180,6 +181,8 @@ The following template creates a management group.
:::code language="json" source="~/quickstart-templates/tenant-deployments/new-mg/azuredeploy.json":::
+If your account doesn't have permission to deploy to the tenant, you can still create management groups by deploying to another scope. For more information, see [Management group](deploy-to-management-group.md#management-group).
+ ## Assign role The following template assigns a role at the tenant scope.
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deployment-script-template-configure-dev https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-script-template-configure-dev.md
@@ -12,11 +12,13 @@ ms.author: jgao
# Configure development environment for deployment scripts in ARM templates
-Learn how to create a development environment for developing and testing deployment scripts with a deployment script image. You can either create [Azure container instance](../../container-instances/container-instances-overview.md) or use [Docker](https://docs.docker.com/get-docker/). Both are covered in this article.
+Learn how to create a development environment for developing and testing ARM template deployment scripts with a deployment script image. You can either create an [Azure container instance](../../container-instances/container-instances-overview.md) or use [Docker](https://docs.docker.com/get-docker/). Both options are covered in this article.
## Prerequisites
-If you don't have a deployment script, you can create a _hello.ps1_ file with the following content:
+### Azure PowerShell container
+
+If you don't have an Azure PowerShell deployment script, you can create a *hello.ps1* file by using the following content:
```powershell param([string] $name)
@@ -26,14 +28,29 @@ $DeploymentScriptOutputs = @{}
$DeploymentScriptOutputs['text'] = $output ```
-## Use Azure container instance
+### Azure CLI container
+
+For an Azure CLI container image, you can create a *hello.sh* file by using the following content:
+
+```bash
+firstname=$1
+lastname=$2
+output="{\"name\":{\"displayName\":\"$firstname $lastname\",\"firstName\":\"$firstname\",\"lastName\":\"$lastname\"}}"
+echo -n "Hello "
+echo $output | jq -r '.name.displayName'
+```
+
+> [!NOTE]
+> When you run an Azure CLI deployment script, an environment variable called `AZ_SCRIPTS_OUTPUT_PATH` stores the location of the script output file. The environment variable isn't available in the development environment container. For more information about working with Azure CLI outputs, see [Work with outputs from CLI script](deployment-script-template.md#work-with-outputs-from-cli-script).
+
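As a hedged sketch of the pattern the note describes, a CLI script could fall back to a local path when `AZ_SCRIPTS_OUTPUT_PATH` isn't set, so the same file also runs in a dev container (the fallback path and the JSON shape are placeholders, not from the original article):

```bash
#!/bin/bash
# AZ_SCRIPTS_OUTPUT_PATH is set by the deployment script service at run time;
# default to a local file so this sketch also works in a dev container.
output_path="${AZ_SCRIPTS_OUTPUT_PATH:-/tmp/azscriptoutputs.json}"

write_output() {
  # Deployment script outputs must be a JSON object written to the output file.
  printf '{"name":"%s %s"}' "$1" "$2" > "$output_path"
}

write_output John Dole
```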
+## Use Azure PowerShell container instance
To author your scripts on your computer, you need to create a storage account and mount the storage account to the container instance, so that you can upload your script to the storage account and run the script on the container instance.

> [!NOTE]
> The storage account that you create to test your script is not the same storage account that the deployment script service uses to execute the script. The deployment script service creates a unique name as a file share on every execution.
-### Create an Azure container instance
+### Create an Azure PowerShell container instance
The following Azure Resource Manager template (ARM template) creates a container instance and a file share, and then mounts the file share to the container image.
@@ -50,21 +67,21 @@ The following Azure Resource Manager template (ARM template) creates a container
}, "containerImage": { "type": "string",
- "defaultValue": "mcr.microsoft.com/azuredeploymentscripts-powershell:az4.3",
+ "defaultValue": "mcr.microsoft.com/azuredeploymentscripts-powershell:az5.2",
"metadata": { "description": "Specify the container image." } }, "mountPath": { "type": "string",
- "defaultValue": "deploymentScript",
+ "defaultValue": "/mnt/azscripts/azscriptinput",
"metadata": { "description": "Specify the mount path." } } }, "variables": {
- "storageAccountName": "[concat(parameters('projectName'), 'store')]",
+ "storageAccountName": "[tolower(concat(parameters('projectName'), 'store'))]",
"fileShareName": "[concat(parameters('projectName'), 'share')]", "containerGroupName": "[concat(parameters('projectName'), 'cg')]", "containerName": "[concat(parameters('projectName'), 'container')]"
@@ -150,14 +167,195 @@ The following Azure Resource Manager template (ARM template) creates a container
} ```
-The default value for the mount path is `deploymentScript`. This is the path in the container instance where it is mounted to the file share.
+The default value for the mount path is `/mnt/azscripts/azscriptinput`. This is the path in the container instance where the file share is mounted.
+
+The default container image specified in the template is **mcr.microsoft.com/azuredeploymentscripts-powershell:az5.2**. See a list of all [supported Azure PowerShell versions](https://mcr.microsoft.com/v2/azuredeploymentscripts-powershell/tags/list).
+
+The template suspends the container instance after 1,800 seconds. You have 30 minutes before the container instance goes into a terminated state and the session ends.
+
+To deploy the template:
+
+```azurepowershell
+$projectName = Read-Host -Prompt "Enter a project name that is used to generate resource names"
+$location = Read-Host -Prompt "Enter the location (e.g. centralus)"
+$templateFile = Read-Host -Prompt "Enter the template file path and file name"
+$resourceGroupName = "${projectName}rg"
+
+New-AzResourceGroup -Location $location -Name $resourceGroupName
+New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateFile $templateFile -projectName $projectName
+```
+
+### Upload the deployment script
+
+Upload your deployment script to the storage account. Here's an example of a PowerShell script:
+
+```azurepowershell
+$projectName = Read-Host -Prompt "Enter the same project name that you used earlier"
+$fileName = Read-Host -Prompt "Enter the deployment script file name with the path"
+
+$resourceGroupName = "${projectName}rg"
+$storageAccountName = "${projectName}store"
+$fileShareName = "${projectName}share"
+
+$context = (Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName).Context
+Set-AzStorageFileContent -Context $context -ShareName $fileShareName -Source $fileName -Force
+```
+
+You also can upload the file by using the Azure portal or the Azure CLI.
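For example, an Azure CLI upload might look like the following sketch (the account and share names follow the project-name convention used earlier in this article and are placeholders; authentication, such as an account key, is assumed to be configured):

```azurecli
# Hypothetical names based on the project-name convention above.
projectName="demoproj"
az storage file upload \
    --account-name "${projectName}store" \
    --share-name "${projectName}share" \
    --source hello.ps1
```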
+
+### Test the deployment script
+
+1. In the Azure portal, open the resource group where you deployed the container instance and the storage account.
+2. Open the container group. The default container group name is the project name appended with *cg*. The container instance is in the **Running** state.
+3. In the resource menu, select **Containers**. The container instance name is the project name appended with *container*.
+
+ ![Screenshot of the deployment script connect container instance in the Azure portal.](./media/deployment-script-template-configure-dev/deployment-script-container-instance-connect.png)
+
+4. Select **Connect**, and then select **Connect**. If you can't connect to the container instance, restart the container group and try again.
+5. In the console pane, run the following commands:
+
+ ```console
+ cd /mnt/azscripts/azscriptinput
+ ls
+ pwsh ./hello.ps1 "John Dole"
+ ```
+
+ The output is **Hello John Dole**.
+
+ ![Screenshot of the deployment script connect container instance test output in the console.](./media/deployment-script-template-configure-dev/deployment-script-container-instance-test.png)
+
+## Use an Azure CLI container instance
+
+To author your scripts on your computer, create a storage account and mount the storage account to the container instance. Then, you can upload your script to the storage account and run the script on the container instance.
+
+> [!NOTE]
+> The storage account that you create to test your script isn't the same storage account that the deployment script service uses to execute the script. The deployment script service creates a unique name as a file share on every execution.
+
+### Create an Azure CLI container instance
+
+The following ARM template creates a container instance and a file share, and then mounts the file share to the container image:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "projectName": {
+ "type": "string",
+ "metadata": {
+ "description": "Specify a project name that is used for generating resource names."
+ }
+ },
+ "containerImage": {
+ "type": "string",
+ "defaultValue": "mcr.microsoft.com/azure-cli:2.9.1",
+ "metadata": {
+ "description": "Specify the container image."
+ }
+ },
+ "mountPath": {
+ "type": "string",
+ "defaultValue": "/mnt/azscripts/azscriptinput",
+ "metadata": {
+ "description": "Specify the mount path."
+ }
+ }
+ },
+ "variables": {
+ "storageAccountName": "[tolower(concat(parameters('projectName'), 'store'))]",
+ "fileShareName": "[concat(parameters('projectName'), 'share')]",
+ "containerGroupName": "[concat(parameters('projectName'), 'cg')]",
+ "containerName": "[concat(parameters('projectName'), 'container')]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2019-06-01",
+ "name": "[variables('storageAccountName')]",
+ "location": "[resourceGroup().location]",
+ "sku": {
+ "name": "Standard_LRS",
+ "tier": "Standard"
+ },
+ "kind": "StorageV2",
+ "properties": {
+ "accessTier": "Hot"
+ }
+ },
+ {
+ "type": "Microsoft.Storage/storageAccounts/fileServices/shares",
+ "apiVersion": "2019-06-01",
+ "name": "[concat(variables('storageAccountName'), '/default/', variables('fileShareName'))]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
+ ]
+ },
+ {
+ "type": "Microsoft.ContainerInstance/containerGroups",
+ "apiVersion": "2019-12-01",
+ "name": "[variables('containerGroupName')]",
+ "location": "[resourceGroup().location]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
+ ],
+ "properties": {
+ "containers": [
+ {
+ "name": "[variables('containerName')]",
+ "properties": {
+ "image": "[parameters('containerImage')]",
+ "resources": {
+ "requests": {
+ "cpu": 1,
+ "memoryInGb": 1.5
+ }
+ },
+ "ports": [
+ {
+ "protocol": "TCP",
+ "port": 80
+ }
+ ],
+ "volumeMounts": [
+ {
+ "name": "filesharevolume",
+ "mountPath": "[parameters('mountPath')]"
+ }
+ ],
+ "command": [
+ "/bin/bash",
+ "-c",
+ "echo hello; sleep 1800"
+ ]
+ }
+ }
+ ],
+ "osType": "Linux",
+ "volumes": [
+ {
+ "name": "filesharevolume",
+ "azureFile": {
+ "readOnly": false,
+ "shareName": "[variables('fileShareName')]",
+ "storageAccountName": "[variables('storageAccountName')]",
+ "storageAccountKey": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value]"
+ }
+ }
+ ]
+ }
+ }
+ ]
+}
+```
+
+The default value for the mount path is `/mnt/azscripts/azscriptinput`. This is the path in the container instance where the file share is mounted.
-The default container image specified in the template is `mcr.microsoft.com/azuredeploymentscripts-powershell:az4.3`. See a list of [supported Azure PowerShell versions](https://mcr.microsoft.com/v2/azuredeploymentscripts-powershell/tags/list). See a list of [supported Azure CLI versions](https://mcr.microsoft.com/v2/azure-cli/tags/list).
+The default container image specified in the template is **mcr.microsoft.com/azure-cli:2.9.1**. See a list of [supported Azure CLI versions](https://mcr.microsoft.com/v2/azure-cli/tags/list).
- >[!IMPORTANT]
- > Deployment script uses the available CLI images from Microsoft Container Registry (MCR). It takes about one month to certify a CLI image for deployment script. Don't use the CLI versions that were released within 30 days. To find the release dates for the images, see [Azure CLI release notes](/cli/azure/release-notes-azure-cli?view=azure-cli-latest&preserve-view=true). If an unsupported version is used, the error message lists the supported versions.
+> [!IMPORTANT]
+> The deployment script uses the available CLI images from Microsoft Container Registry (MCR). It takes about one month to certify a CLI image for a deployment script. Don't use the CLI versions that were released within 30 days. To find the release dates for the images, see [Azure CLI release notes](/cli/azure/release-notes-azure-cli?view=azure-cli-latest&preserve-view=true). If you use an unsupported version, the error message lists the supported versions.
-The template suspends the container instance 1800 seconds. You have 30 minutes before the container instance goes into terminal state and the session ends.
+The template suspends the container instance after 1,800 seconds. You have 30 minutes before the container instance goes into a terminated state and the session ends.
To deploy the template:
@@ -167,11 +365,11 @@ $location = Read-Host -Prompt "Enter the location (i.e. centralus)"
$templateFile = Read-Host -Prompt "Enter the template file path and file name" $resourceGroupName = "${projectName}rg"
-New-azResourceGroup -Location $location -name $resourceGroupName
+New-AzResourceGroup -Location $location -name $resourceGroupName
New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateFile $templateFile -projectName $projectName ```
-### Upload deployment script
+### Upload the deployment script
Upload your deployment script to the storage account. The following is a PowerShell example:
@@ -187,40 +385,32 @@ $context = (Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $st
Set-AzStorageFileContent -Context $context -ShareName $fileShareName -Source $fileName -Force ```
-You can also upload the file by using the Azure portal and Azure CLI.
+You also can upload the file by using the Azure portal or the Azure CLI.
### Test the deployment script
-1. From the Azure portal, open the resource group where you deployed the container instance and the storage account.
-1. Open the container group. The default container group name is the project name with **cg** appended. You shall see the container instance is in the **Running** state.
-1. Select **Containers** from the left menu. You shall see a container instance. The container instance name is the project name with **container** appended.
+1. In the Azure portal, open the resource group where you deployed the container instance and the storage account.
+1. Open the container group. The default container group name is the project name appended with *cg*. The container instance is shown in the **Running** state.
+1. In the resource menu, select **Containers**. The container instance name is the project name appended with *container*.
![deployment script connect container instance](./media/deployment-script-template-configure-dev/deployment-script-container-instance-connect.png)
-1. Select **Connect**, and then select **Connect**. If you can't connect to the container instance, restart the container group and try again.
+1. Select **Connect**, and then select **Connect**. If you can't connect to the container instance, restart the container group and try again.
1. In the console pane, run the following commands: ```console
- cd deploymentScript
+ cd /mnt/azscripts/azscriptinput
ls
- pwsh ./hello.ps1 "John Dole"
+ ./hello.sh John Dole
``` The output is **Hello John Dole**.
- ![deployment script container instance test](./media/deployment-script-template-configure-dev/deployment-script-container-instance-test.png)
-
-1. If you use the AZ CLI container image, run this code:
-
- ```console
- cd /mnt/azscripts/azscriptinput
- ls
- ./userscript.sh
- ```
+ ![deployment script container instance test](./media/deployment-script-template-configure-dev/deployment-script-container-instance-test-cli.png)
## Use Docker
-You can use a pre-configured docker container image as your deployment script development environment. To install Docker, see [Get Docker](https://docs.docker.com/get-docker/).
+You can use a pre-configured Docker container image as your deployment script development environment. To install Docker, see [Get Docker](https://docs.docker.com/get-docker/).
You also need to configure file sharing to mount the directory that contains the deployment scripts into the Docker container.

1. Pull the deployment script container image to the local computer:
@@ -231,7 +421,7 @@ You also need to configure file sharing to mount the directory, which contains t
The example uses PowerShell version 4.3.0.
- To pull a CLI image from a Microsoft Container Registry (MCR):
+ To pull a CLI image from an MCR:
```command docker pull mcr.microsoft.com/azure-cli:2.0.80
@@ -239,7 +429,7 @@ You also need to configure file sharing to mount the directory, which contains t
This example uses CLI version 2.0.80. Deployment scripts use the default CLI container images found [here](https://hub.docker.com/_/microsoft-azure-cli).
-1. Run the docker image locally.
+1. Run the Docker image locally.
```command docker run -v <host drive letter>:/<host directory name>:/data -it mcr.microsoft.com/azuredeploymentscripts-powershell:az4.3
@@ -259,7 +449,7 @@ You also need to configure file sharing to mount the directory, which contains t
docker run -v d:/docker:/data -it mcr.microsoft.com/azure-cli:2.0.80 ```
-1. The following screenshot shows how to run a PowerShell script, given that you have a _helloworld.ps1_ file in the shared drive.
+1. The following screenshot shows how to run a PowerShell script, given that you have a *helloworld.ps1* file in the shared drive.
![Resource Manager template deployment script docker cmd](./media/deployment-script-template/resource-manager-deployment-script-docker-cmd.png)
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/scope-extension-resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/scope-extension-resources.md
@@ -2,17 +2,20 @@
title: Scope on extension resource types description: Describes how to use the scope property when deploying extension resource types. ms.topic: conceptual
-ms.date: 10/22/2020
+ms.date: 01/13/2021
--- # Setting scope for extension resources in ARM templates
-An extension resource is a resource that modifies another resource. For example, you can assign a role to a resource to limit access. The role assignment is an extension resource type.
+An extension resource is a resource that modifies another resource. For example, you can assign a role to a resource. The role assignment is an extension resource type.
For a full list of extension resource types, see [Resource types that extend capabilities of other resources](../management/extension-resource-types.md). This article shows how to set the scope for an extension resource type when deployed with an Azure Resource Manager template (ARM template). It describes the scope property that is available for extension resources when applying to a resource.
+> [!NOTE]
+> The scope property is only available to extension resource types. To specify a different scope for a resource type that isn't an extension type, use a nested or linked deployment. For more information, see [resource group deployments](deploy-to-resource-group.md), [subscription deployments](deploy-to-subscription.md), [management group deployments](deploy-to-management-group.md), and [tenant deployments](deploy-to-tenant.md).
+ ## Apply at deployment scope

To apply an extension resource type at the target deployment scope, you add the resource to your template, as you would with any resource type. The available scopes are [resource group](deploy-to-resource-group.md), [subscription](deploy-to-subscription.md), [management group](deploy-to-management-group.md), and [tenant](deploy-to-tenant.md). The deployment scope must support the resource type.
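As a minimal sketch of an extension resource applied at the deployment scope: deploying a lock in a resource group deployment without a `scope` property locks the resource group itself (the lock name `rgLock` is a placeholder):

```json
{
  "type": "Microsoft.Authorization/locks",
  "apiVersion": "2016-09-01",
  "name": "rgLock",
  "properties": {
    "level": "CanNotDelete"
  }
}
```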
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/connectivity-architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connectivity-architecture.md
@@ -70,31 +70,32 @@ Details of how traffic shall be migrated to new Gateways in specific regions are
| Region name | Gateway IP addresses | | --- | --- |
-| Australia Central | 20.36.105.0 |
-| Australia Central2 | 20.36.113.0 |
+| Australia Central | 20.36.105.0, 20.36.104.6, 20.36.104.7 |
+| Australia Central 2 | 20.36.113.0, 20.36.112.6 |
| Australia East | 13.75.149.87, 40.79.161.1, 13.70.112.9 | | Australia South East | 191.239.192.109, 13.73.109.251, 13.77.48.10 |
-| Brazil South | 104.41.11.5, 191.233.200.14 |
+| Brazil South | 104.41.11.5, 191.233.200.14, 191.234.144.16, 191.234.152.3 |
| Canada Central | 40.85.224.249, 52.246.152.0, 20.38.144.1 |
-| Canada East | 40.86.226.166, 52.242.30.154 |
+| Canada East | 40.86.226.166, 52.242.30.154, 40.69.105.9, 40.69.105.10 |
| Central US | 13.67.215.62, 52.182.137.15, 23.99.160.139, 104.208.16.96, 104.208.21.1 | | China East | 139.219.130.35 | | China East 2 | 40.73.82.1 | | China North | 139.219.15.17 | | China North 2 | 40.73.50.0 |
-| East Asia | 191.234.2.139, 52.175.33.150, 13.75.32.4 |
+| East Asia | 191.234.2.139, 52.175.33.150, 13.75.32.4, 13.75.32.14 |
| East US | 40.121.158.30, 40.79.153.12, 191.238.6.43, 40.78.225.32 | | East US 2 | 40.79.84.180, 52.177.185.181, 52.167.104.0, 191.239.224.107, 104.208.150.3 |
-| France Central | 40.79.137.0, 40.79.129.1 |
+| France Central | 40.79.137.0, 40.79.129.1, 40.79.137.8, 40.79.145.12 |
+| France South | 40.79.177.10, 40.79.177.12 |
| Germany Central | 51.4.144.100 | | Germany North East | 51.5.144.179 | | Germany West Central | 51.116.240.0, 51.116.248.0, 51.116.152.0 |
-| India Central | 104.211.96.159 |
+| India Central | 104.211.96.159, 104.211.86.30, 104.211.86.31 |
| India South | 104.211.224.146 |
-| India West | 104.211.160.80 |
+| India West | 104.211.160.80, 104.211.144.4 |
| Japan East | 13.78.61.196, 40.79.184.8, 13.78.106.224, 191.237.240.43, 40.79.192.5 | | Japan West | 104.214.148.156, 40.74.100.192, 191.238.68.11, 40.74.97.10 |
-| Korea Central | 52.231.32.42 |
+| Korea Central | 52.231.32.42, 52.231.17.22, 52.231.17.23 |
| Korea South | 52.231.200.86 | | North Central US | 23.96.178.199, 23.98.55.75, 52.162.104.33 | | North Europe | 40.113.93.91, 191.235.193.75, 52.138.224.1, 13.74.104.113 |
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/data-discovery-and-classification-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/data-discovery-and-classification-overview.md
@@ -17,7 +17,7 @@ tags: azure-synapse
# Data Discovery & Classification [!INCLUDE[appliesto-sqldb-sqlmi-asa](../includes/appliesto-sqldb-sqlmi-asa.md)]
-Data Discovery & Classification is built into Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics. It provides advanced capabilities for discovering, classifying, labeling, and reporting the sensitive data in your databases.
+Data Discovery & Classification is built into Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics. It provides basic capabilities for discovering, classifying, labeling, and reporting the sensitive data in your databases.
Your most sensitive data might include business, financial, healthcare, or personal information. Discovering and classifying this data can play a pivotal role in your organization's information-protection approach. It can serve as infrastructure for:
@@ -30,11 +30,11 @@ Your most sensitive data might include business, financial, healthcare, or perso
## <a id="what-is-dc"></a>What is Data Discovery & Classification?
-Data Discovery & Classification introduces a set of advanced services and new capabilities in Azure. It forms a new information-protection paradigm for SQL Database, SQL Managed Instance, and Azure Synapse, aimed at protecting the data and not just the database. The paradigm includes:
+Data Discovery & Classification introduces a set of basic services and new capabilities in Azure. It forms a new information-protection paradigm for SQL Database, SQL Managed Instance, and Azure Synapse, aimed at protecting the data and not just the database. The paradigm includes:
- **Discovery and recommendations:** The classification engine scans your database and identifies columns that contain potentially sensitive data. It then provides you with an easy way to review and apply recommended classification via the Azure portal. -- **Labeling:** You can apply sensitivity-classification labels persistently to columns by using new metadata attributes that have been added to the SQL Server database engine. This metadata can then be used for advanced, sensitivity-based auditing and protection scenarios.
+- **Labeling:** You can apply sensitivity-classification labels persistently to columns by using new metadata attributes that have been added to the SQL Server database engine. This metadata can then be used for sensitivity-based auditing and protection scenarios.
- **Query result-set sensitivity:** The sensitivity of a query result set is calculated in real time for auditing purposes.
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/gateway-migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/gateway-migration.md
@@ -21,6 +21,25 @@ Customers will be notified via email and in the Azure portal well in advance of
## Status updates

# [In progress](#tab/in-progress-ip)
+## January 2021
+New SQL Gateways are being added to the following regions:
+
+- Australia Central : 20.36.104.6, 20.36.104.7
+- Australia Central 2 : 20.36.112.6
+- Brazil South : 191.234.144.16, 191.234.152.3
+- Canada East : 40.69.105.9, 40.69.105.10
+- India Central : 104.211.86.30, 104.211.86.31
+- East Asia : 13.75.32.14
+- France Central : 40.79.137.8, 40.79.145.12
+- France South : 40.79.177.10, 40.79.177.12
+- Korea Central : 52.231.17.22, 52.231.17.23
+- India West : 104.211.144.4
+
+These SQL Gateways shall start accepting customer traffic on **31 January 2021**.
+
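Before the cutover date, it can help to confirm that your network path reaches the new gateway IPs on the SQL port. The sketch below is a minimal bash probe (the IP list is abridged from the regions above; adjust it to the regions you actually use, and note that a firewall that silently drops packets will show as unreachable):

```bash
#!/usr/bin/env bash
# Probe each new gateway IP on the SQL port (1433).
# List abridged from the regions above; edit to match your regions.
gateways="20.36.104.6 20.36.104.7 20.36.112.6 191.234.144.16"
for ip in $gateways; do
  # /dev/tcp is a bash built-in pseudo-device for TCP connections.
  if timeout 2 bash -c "exec 3<>/dev/tcp/$ip/1433" 2>/dev/null; then
    echo "$ip reachable on 1433"
  else
    echo "$ip NOT reachable on 1433"
  fi
done
```

If any gateway shows as unreachable, update your outbound firewall rules before the migration date.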
+# [Completed](#tab/completed-ip)
+The following gateway migrations are complete:
+
### October 2020
New SQL Gateways are being added to the following regions:
@@ -64,9 +83,6 @@ New SQL Gateways are being added to the following regions. These SQL Gateways sh
Existing SQL Gateways will start accepting traffic in the following regions. These SQL Gateways shall start accepting customer traffic on **1 September 2020**:

- Japan East : 40.79.184.8, 40.79.192.5
-# [Completed](#tab/completed-ip)
-
-The following gateway migrations are complete:
### August 2020
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/service-tiers-general-purpose-business-critical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tiers-general-purpose-business-critical.md
@@ -72,6 +72,7 @@ The following factors affect the amount of storage used for data and log files,
- For storage in the premium or business critical service tiers, increase or decrease the size in 250-GB increments.
- In the general purpose service tier, `tempdb` uses an attached SSD, and this storage cost is included in the vCore price.
- In the business critical service tier, `tempdb` shares the attached SSD with the MDF and LDF files, and the `tempdb` storage cost is included in the vCore price.
+- In the DTU premium service tier, `tempdb` shares the attached SSD with MDF and LDF files.
- The storage size for a SQL Managed Instance must be specified in multiples of 32 GB.
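As a quick illustration of the 32-GB rule, this shell snippet (a sketch, not an Azure CLI command) rounds an arbitrary requested size up to the nearest valid value:

```bash
# Round a requested storage size up to the next 32-GB multiple,
# since SQL Managed Instance storage must be specified in steps of 32 GB.
requested=100
step=32
rounded=$(( (requested + step - 1) / step * step ))
echo "$rounded"   # 128
```

A request of exactly 32 GB stays at 32; anything in between rounds up to the next step.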
batch https://docs.microsoft.com/en-us/azure/batch/batch-virtual-network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-virtual-network.md
@@ -2,7 +2,7 @@
title: Provision a pool in a virtual network
description: How to create a Batch pool in an Azure virtual network so that compute nodes can communicate securely with other VMs in the network, such as a file server.
ms.topic: how-to
-ms.date: 06/26/2020
+ms.date: 01/13/2021
ms.custom: seodec18
---
cdn https://docs.microsoft.com/en-us/azure/cdn/cdn-create-new-endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-create-new-endpoint.md
@@ -64,7 +64,7 @@ After you've created a CDN profile, you use it to create an endpoint.
![CDN endpoint](./media/cdn-create-new-endpoint/cdn-endpoint-success.png)
- The time it takes for the endpoint to propagate depends on the pricing tier selected when you created the profile. **Standard Akamai** usually completes within one minute, **Standard Microsoft** in 10 minutes, and **Standard Verizon** and **Premium Verizon** in up to 90 minutes.
+ The time it takes for the endpoint to propagate depends on the pricing tier selected when you created the profile. **Standard Akamai** usually completes within one minute, **Standard Microsoft** in 10 minutes, and **Standard Verizon** and **Premium Verizon** in up to 30 minutes.
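If you would rather script the propagation check than poll the portal, a sketch like the following can tell when the endpoint starts answering. The endpoint hostname `myendpoint.azureedge.net` is a placeholder for your own:

```bash
# Check whether the (hypothetical) CDN endpoint has propagated.
# curl's %{http_code} reports 000 while the hostname does not resolve
# or the connection fails; any other status means the edge is answering.
endpoint="myendpoint.azureedge.net"
code=$(curl -s -o /dev/null -w '%{http_code}' "https://$endpoint/" || true)
if [ "$code" != "000" ]; then
  echo "$endpoint is responding (HTTP $code)"
else
  echo "$endpoint has not propagated yet"
fi
```

Wrap this in a loop with a `sleep` between attempts if you want to block until propagation completes.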
## Clean up resources
cloud-shell https://docs.microsoft.com/en-us/azure/cloud-shell/private-vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-shell/private-vnet.md
@@ -18,8 +18,6 @@ ms.author: damaerte
---

# Deploy Cloud Shell into an Azure virtual network
-> [!NOTE]
-> This functionality is in public preview.
A regular Cloud Shell session runs in a container in a Microsoft network separate from your resources. This means that commands running inside the container cannot access resources that can only be accessed from a specific virtual network. For example, you cannot use SSH to connect from Cloud Shell to a virtual machine that only has a private IP address, or use kubectl to connect to a Kubernetes cluster which has locked down access.
@@ -60,7 +58,7 @@ As in standard Cloud Shell, a storage account is required while using Cloud Shel
## Virtual network deployment limitations

* Due to the additional networking resources involved, starting Cloud Shell in a virtual network is typically slower than a standard Cloud Shell session.
-* During the preview, fewer regions are supported for Cloud Shell in a virtual network. This is currently limited to: WestUS and WestCentralUS.
+* All Cloud Shell regions apart from Central India are currently supported.
* [Azure Relay](../azure-relay/relay-what-is-it.md) is not a free service; see its [pricing](https://azure.microsoft.com/pricing/details/service-bus/). In the Cloud Shell scenario, one hybrid connection is used for each administrator while they are using Cloud Shell. The connection will automatically be shut down after the Cloud Shell session is complete.
@@ -110,4 +108,4 @@ Connect to Cloud Shell, you will be prompted with the first run experience. Sele
![Illustrates the Cloud Shell isolated VNET first experience settings.](media/private-vnet/vnet-settings.png)

## Next steps
-[Learn about Azure Virtual Networks](../virtual-network/virtual-networks-overview.md)
\ No newline at end of file
+[Learn about Azure Virtual Networks](../virtual-network/virtual-networks-overview.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/spatial-analysis-camera-placement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-camera-placement.md
@@ -8,7 +8,7 @@ manager: nitinme
ms.service: cognitive-services
ms.subservice: computer-vision
ms.topic: conceptual
-ms.date: 09/11/2020
+ms.date: 01/12/2021
ms.author: aahi
---
@@ -48,7 +48,7 @@ The following illustration shows the elevation view for person walking direction
## Camera height
-Generally, cameras should be mounted 12-14 feet from the ground. When planning your camera mounting in this range, consider obstructions (for example: shelving, hanging lights, hanging signage, and displays) that might affect the camera view, and then adjust the height as necessary.
+Generally, cameras should be mounted 12-14 feet from the ground. For Face mask detection, we recommend cameras to be mounted 8-12 feet from the ground. When planning your camera mounting in this range, consider obstructions (for example: shelving, hanging lights, hanging signage, and displays) that might affect the camera view, and then adjust the height as necessary.
## Camera-to-focal-point distance
@@ -64,7 +64,7 @@ From above, it looks like this:
![How camera-to-focal-point-distance is measured from above](./media/spatial-analysis/camera-focal-point-above.png)
-Use the table below to determine the camera's distance from the focal point based on specific mounting heights. These distances are for optimal placement. Note that the table provides guidance below the 12'-14' recommendation since some ceilings can limit height.
+Use the table below to determine the camera's distance from the focal point based on specific mounting heights. These distances are for optimal placement. Note that the table provides guidance below the 12'-14' recommendation since some ceilings can limit height. For Face mask detection, the recommended camera-to-focal-point distance (min/max) is 4'-10' for camera heights between 8' and 12'.
| Camera height | Camera-to-focal-point distance (min/max) |
| ------------- | ---------------------------------------- |
@@ -87,7 +87,7 @@ This section describes acceptable camera angle mounting ranges. These mounting r
### Line configuration
-The following table shows recommendations for cameras configured for the **cognitiveservices.vision.spatialanalysis-personcrossingline** operation.
+The following table shows recommendations for cameras configured for the **cognitiveservices.vision.spatialanalysis-personcrossingline** operation. For Face mask detection, +/-30 degrees is the optimal camera mounting angle for camera heights between 8' and 12'.
| Camera height | Camera-to-focal-point distance | Optimal camera mounting angle (min/max) |
| ------------- | ------------------------------ | ------------------------------------------ |
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/spatial-analysis-container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-container.md
@@ -8,7 +8,7 @@ manager: nitinme
ms.service: cognitive-services
ms.subservice: computer-vision
ms.topic: conceptual
-ms.date: 11/06/2020
+ms.date: 01/12/2021
ms.author: aahi
---
@@ -19,7 +19,7 @@ The spatial analysis container enables you to analyze real-time streaming video
## Prerequisites

* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource <span class="docon docon-navigate-external x-hidden-focus"></span></a> in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
+* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource <span class="docon docon-navigate-external x-hidden-focus"></span></a> for the Standard S1 tier in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
* You will need the key and endpoint from the resource you create to run the spatial analysis container. You'll use your key and endpoint later.
@@ -56,6 +56,9 @@ In this article, you will download and install the following software packages.
* [Docker CE](https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-docker-engine---community-1) and [NVIDIA-Docker2](https://github.com/NVIDIA/nvidia-docker)
* [Azure IoT Edge](../../iot-edge/how-to-install-iot-edge.md) runtime.
+#### [Azure VM with GPU](#tab/virtual-machine)
+In our example, we will utilize an [NC series VM](https://docs.microsoft.com/azure/virtual-machines/nc-series?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json) that has one K80 GPU.
+
---

| Requirement | Description |
@@ -80,7 +83,7 @@ You won't be able to run the container if your Azure subscription has not been a
## Set up the host computer
-It is recommended that you use an Azure Stack Edge device for your host computer. Click **Desktop Machine** if you're configuring a different device.
+It is recommended that you use an Azure Stack Edge device for your host computer. Click **Desktop Machine** if you're configuring a different device, or **Virtual Machine** if you're utilizing a VM.
#### [Azure Stack Edge device](#tab/azure-stack-edge)
@@ -247,13 +250,13 @@ Use the Azure CLI to create an instance of Azure IoT Hub. Replace the parameters
```bash curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
-az login
-az account set --subscription <name or ID of Azure Subscription>
-az group create --name "test-resource-group" --location "WestUS"
+sudo az login
+sudo az account set --subscription <name or ID of Azure Subscription>
+sudo az group create --name "test-resource-group" --location "WestUS"
-az iot hub create --name "test-iot-hub-123" --sku S1 --resource-group "test-resource-group"
+sudo az iot hub create --name "test-iot-hub-123" --sku S1 --resource-group "test-resource-group"
-az iot hub device-identity create --hub-name "test-iot-hub-123" --device-id "my-edge-device" --edge-enabled
+sudo az iot hub device-identity create --hub-name "test-iot-hub-123" --device-id "my-edge-device" --edge-enabled
```

If the host computer isn't an Azure Stack Edge device, you will need to install [Azure IoT Edge](../../iot-edge/how-to-install-iot-edge.md) version 1.0.9. Follow these steps to download the correct version:
@@ -292,7 +295,7 @@ Next, register the host computer as an IoT Edge device in your IoT Hub instance,
You need to connect the IoT Edge device to your Azure IoT Hub by copying the connection string from the IoT Edge device you created earlier. Alternatively, you can run the below command in the Azure CLI.

```bash
-az iot hub device-identity show-connection-string --device-id my-edge-device --hub-name test-iot-hub-123
+sudo az iot hub device-identity show-connection-string --device-id my-edge-device --hub-name test-iot-hub-123
```

On the host computer, open `/etc/iotedge/config.yaml` for editing. Replace `ADD DEVICE CONNECTION STRING HERE` with the connection string. Save and close the file.
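Instead of editing the file by hand, the placeholder can be patched non-interactively. This is a sketch; the connection string value shown is a dummy, and `|` is used as the sed delimiter because the string contains `;` and `=`:

```bash
# Patch the IoT Edge config with the device connection string.
# The value below is a dummy placeholder; use the string copied
# from your own IoT Edge device.
conn='HostName=test-iot-hub-123.azure-devices.net;DeviceId=my-edge-device;SharedAccessKey=<key>'
sudo sed -i "s|ADD DEVICE CONNECTION STRING HERE|$conn|" /etc/iotedge/config.yaml
```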
@@ -302,15 +305,100 @@ Run this command to restart the IoT Edge service on the host computer.
sudo systemctl restart iotedge
```
-Deploy the spatial analysis container as an IoT Module on the host computer, either from the [Azure portal](../../iot-edge/how-to-deploy-modules-portal.md) or [Azure CLI](../../iot-edge/how-to-deploy-modules-cli.md). If you're using the portal, set the image URI to the location of your Azure Container Registry.
+Deploy the spatial analysis container as an IoT Module on the host computer, either from the [Azure portal](../../iot-edge/how-to-deploy-modules-portal.md) or [Azure CLI](https://docs.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account-cli?tabs=windows). If you're using the portal, set the image URI to the location of your Azure Container Registry.
Use the below steps to deploy the container using the Azure CLI.
+#### [Azure VM with GPU](#tab/virtual-machine)
+
+An Azure Virtual Machine with a GPU can also be used to run spatial analysis. The example below will use an [NC series](https://docs.microsoft.com/azure/virtual-machines/nc-series?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json) VM that has one K80 GPU.
+
+#### Create the VM
+
+Open the [Create a Virtual Machine](https://ms.portal.azure.com/#create/Microsoft.VirtualMachine) wizard in the Azure portal.
+
+Give your VM a name and select the (US) West US 2 region. Be sure to set `Availability Options` to "No infrastructure redundancy required". Refer to the figure below for the complete configuration and to the next step for help locating the correct VM size.
+
+:::image type="content" source="media/spatial-analysis/virtual-machine-instance-details.png" alt-text="Virtual machine configuration details." lightbox="media/spatial-analysis/virtual-machine-instance-details.png":::
+
+To locate the VM size, select "See all sizes" and then view the list for "Non-premium storage VM sizes", shown below.
+
+:::image type="content" source="media/spatial-analysis/virtual-machine-sizes.png" alt-text="Virtual machine sizes." lightbox="media/spatial-analysis/virtual-machine-sizes.png":::
+
+Then, select either **NC6** or **NC6_Promo**.
+
+:::image type="content" source="media/spatial-analysis/promotional-selection.png" alt-text="promotional selection" lightbox="media/spatial-analysis/promotional-selection.png":::
+
+Next, create the VM. Once it's created, navigate to the VM resource in the Azure portal and select `Extensions` from the left pane. The extensions window will appear with all available extensions. Select `NVIDIA GPU Driver Extension`, click `Create`, and complete the wizard.
+
+Once the extension is successfully applied, navigate to the VM main page in the Azure portal and click `Connect`. The VM can be accessed either through SSH or RDP. RDP is helpful because it enables viewing of the visualizer window (explained later). Configure RDP access by following [these steps](https://docs.microsoft.com/azure/virtual-machines/linux/use-remote-desktop) and opening a remote desktop connection to the VM.
+
+### Verify that the graphics drivers are installed
+
+Run the following command to verify that the graphics drivers have been successfully installed.
+
+```bash
+nvidia-smi
+```
+
+You should see the following output.
+
+![NVIDIA driver output](media/spatial-analysis/nvidia-driver-output.png)
+
+### Install Docker CE and nvidia-docker2 on the VM
+
+Run the following commands one at a time in order to install Docker CE and nvidia-docker2 on the VM.
+
+Install Docker CE on the host computer.
+
+```bash
+sudo apt-get update
+```
+```bash
+sudo apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
+```
+```bash
+curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
+```
+```bash
+sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
+```
+```bash
+sudo apt-get update
+```
+```bash
+sudo apt-get install -y docker-ce docker-ce-cli containerd.io
+```
+
+Install the *nvidia-docker-2* software package.
+
+```bash
+distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
+```
+```bash
+curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
+```
+```bash
+curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
+```
+```bash
+sudo apt-get update
+```
+```bash
+sudo apt-get install -y docker-ce nvidia-docker2
+```
+```bash
+sudo systemctl restart docker
+```
+
+Now that you have set up and configured your VM, follow the steps below to deploy the spatial analysis container.
+
---

### IoT Deployment manifest
-To streamline container deployment on multiple host computers, you can create a deployment manifest file to specify the container creation options, and environment variables. You can find an example of a deployment manifest [for Azure Stack Edge](https://go.microsoft.com/fwlink/?linkid=2142179) and [other desktop machines](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest_for_non_ASE_devices.json) on Github.
+To streamline container deployment on multiple host computers, you can create a deployment manifest file to specify the container creation options, and environment variables. You can find an example of a deployment manifest [for Azure Stack Edge](https://go.microsoft.com/fwlink/?linkid=2142179), [other desktop machines](https://go.microsoft.com/fwlink/?linkid=2152270), and [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) on GitHub.
The following table shows the various Environment Variables used by the IoT Edge Module. You can also set them in the deployment manifest linked above, using the `env` attribute in `spatialanalysis`:
@@ -322,21 +410,24 @@ The following table shows the various Environment Variables used by the IoT Edge
| ARCHON_NODES_LOG_LEVEL | Info; Verbose | Logging level, select one of the two values|
| OMP_WAIT_POLICY | PASSIVE | Do not modify|
| QT_X11_NO_MITSHM | 1 | Do not modify|
-| API_KEY | your API Key| Collect this value from Azure portal from your Computer Vision resource. You can find it in the **Key and endpoint** section for your resource. |
-| BILLING_ENDPOINT | your Endpoint URI| Collect this value from Azure portal from your Computer Vision resource. You can find it in the **Key and endpoint** section for your resource.|
+| APIKEY | your API Key| Collect this value from Azure portal from your Computer Vision resource. You can find it in the **Key and endpoint** section for your resource. |
+| BILLING | your Endpoint URI| Collect this value from Azure portal from your Computer Vision resource. You can find it in the **Key and endpoint** section for your resource.|
| EULA | accept | This value needs to be set to *accept* for the container to run |
| DISPLAY | :1 | This value needs to be same as the output of `echo $DISPLAY` on the host computer. Azure Stack Edge devices do not have a display. This setting is not applicable|
-
+| ARCHON_GRAPH_READY_TIMEOUT | 600 | Add this environment variable if your GPU is **not** T4 or NVIDIA 2080 Ti|
+| ORT_TENSORRT_ENGINE_CACHE_ENABLE | 0 | Add this environment variable if your GPU is **not** T4 or NVIDIA 2080 Ti|
+| KEY_ENV | ASE Encryption key | Add this environment variable if Video_URL is an obfuscated string |
+| IV_ENV | Initialization vector | Add this environment variable if Video_URL is an obfuscated string|
> [!IMPORTANT]
> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
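For orientation, a minimal `env` fragment for the `spatialanalysis` module might look like the following. This is a hypothetical sketch using the IoT Edge manifest's `"value"` wrapper; the endpoint and key values are placeholders to fill from your Computer Vision resource:

```json
"env": {
  "EULA": { "value": "accept" },
  "BILLING": { "value": "<your Computer Vision endpoint URI>" },
  "APIKEY": { "value": "<your Computer Vision API key>" },
  "ARCHON_NODES_LOG_LEVEL": { "value": "Info" }
}
```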
-Once you update the Deployment manifest for [Azure Stack Edge devices](https://go.microsoft.com/fwlink/?linkid=2142179) or [a desktop machine](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest_for_non_ASE_devices.json) with your own settings and selection of operations, you can use the below [Azure CLI](../../iot-edge/how-to-deploy-modules-cli.md) command to deploy the container on the host computer, as an IoT Edge Module.
+Once you update the Deployment manifest for [Azure Stack Edge devices](https://go.microsoft.com/fwlink/?linkid=2142179), [a desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270) or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) with your own settings and selection of operations, you can use the below [Azure CLI](https://docs.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account-cli?tabs=windows) command to deploy the container on the host computer, as an IoT Edge Module.
```azurecli
-az login
-az extension add --name azure-iot
-az iot edge set-modules --hub-name "<IoT Hub name>" --device-id "<IoT Edge device name>" --content DeploymentManifest.json --subscription "<subscriptionId>"
+sudo az login
+sudo az extension add --name azure-iot
+sudo az iot edge set-modules --hub-name "<IoT Hub name>" --device-id "<IoT Edge device name>" --content DeploymentManifest.json --subscription "<subscriptionId>"
```

|Parameter |Description |
@@ -362,7 +453,7 @@ You will need to use [spatial analysis operations](spatial-analysis-operations.m
## Redeploy or delete the deployment
-If you need to update the deployment, you need to make sure your previous deployments are successfully deployed, or you need to delete IoT Edge device deployments that did not complete. Otherwise, those deployments will continue, leaving the system in a bad state. You can use the Azure portal, or the [Azure CLI](/cli/azure/ext/azure-cli-iot-ext/iot/edge/deployment).
+If you need to update the deployment, you need to make sure your previous deployments are successfully deployed, or you need to delete IoT Edge device deployments that did not complete. Otherwise, those deployments will continue, leaving the system in a bad state. You can use the Azure portal, or the [Azure CLI](https://docs.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account-cli?tabs=windows).
## Use the output generated by the container
@@ -381,25 +472,25 @@ Navigate to the **Container** section, and either create a new container or use
Click on **Generate SAS Token and URL** and copy the Blob SAS URL. Replace the starting `https` with `http` and test the URL in a browser that supports video playback.
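The same SAS URL can be produced from the Azure CLI instead of the portal. This is a sketch with placeholder account, container, and blob names:

```bash
# Generate a read-only SAS URL for the sample video blob.
# Account, container, and blob names are placeholders.
# --full-uri returns the complete Blob SAS URL in one call.
expiry=$(date -u -d '+7 days' '+%Y-%m-%dT%H:%MZ')
az storage blob generate-sas \
  --account-name mystorageaccount \
  --container-name videos \
  --name sample.mp4 \
  --permissions r \
  --expiry "$expiry" \
  --full-uri \
  --output tsv
```

As with the portal-generated URL, swap the leading `https` for `http` before placing it in the deployment manifest.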
-Replace `VIDEO_URL` in the deployment manifest for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179) or another [desktop machine](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest_for_non_ASE_devices.json) with the URL you created, for all of the graphs. Set `VIDEO_IS_LIVE` to `false`, and redeploy the spatial analysis container with the updated manifest. See the example below.
+Replace `VIDEO_URL` in the deployment manifest for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) with the URL you created, for all of the graphs. Set `VIDEO_IS_LIVE` to `false`, and redeploy the spatial analysis container with the updated manifest. See the example below.
The spatial analysis module will start consuming the video file and will continuously replay it as well.

```json
"zonecrossing": {
- "operationId" : "cognitiveservices.vision.spatialanalysis-personcrossingpolygon",
- "version": 1,
- "enabled": true,
- "parameters": {
- "VIDEO_URL": "Replace http url here",
- "VIDEO_SOURCE_ID": "personcountgraph",
- "VIDEO_IS_LIVE": false,
- "VIDEO_DECODE_GPU_INDEX": 0,
- "DETECTOR_NODE_CONFIG": "{ \"gpu_index\": 0 }",
- "SPACEANALYTICS_CONFIG": "{\"zones\":[{\"name\":\"queue\",\"polygon\":[[0.3,0.3],[0.3,0.9],[0.6,0.9],[0.6,0.3],[0.3,0.3]], \"threshold\":35.0}]}"
- }
- },
+ "operationId" : "cognitiveservices.vision.spatialanalysis-personcrossingpolygon",
+ "version": 1,
+ "enabled": true,
+ "parameters": {
+ "VIDEO_URL": "Replace http url here",
+ "VIDEO_SOURCE_ID": "personcountgraph",
+ "VIDEO_IS_LIVE": false,
+ "VIDEO_DECODE_GPU_INDEX": 0,
+ "DETECTOR_NODE_CONFIG": "{ \"gpu_index\": 0, \"do_calibration\": true }",
+ "SPACEANALYTICS_CONFIG": "{\"zones\":[{\"name\":\"queue\",\"polygon\":[[0.3,0.3],[0.3,0.9],[0.6,0.9],[0.6,0.3],[0.3,0.3]], \"events\": [{\"type\": \"zonecrossing\", \"config\": {\"threshold\": 16.0, \"focus\": \"footprint\"}}]}]}"
+ }
+ },
```
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/spatial-analysis-logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-logging.md
@@ -8,7 +8,7 @@ manager: nitinme
ms.service: cognitive-services
ms.subservice: computer-vision
ms.topic: conceptual
-ms.date: 09/11/2020
+ms.date: 01/12/2021
ms.author: aahi
---
@@ -63,7 +63,7 @@ az iot hub list
az ad sp create-for-rbac --role="Monitoring Metrics Publisher" --name "<principal name>" --scopes="<resource ID of IoT Hub>"
```
-In the deployment manifest for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179) or other [desktop machine](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest_for_non_ASE_devices.json), look for the *telegraf* module, and replace the following values with the Service Principal information from the previous step and redeploy.
+In the deployment manifest for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189), look for the *telegraf* module, and replace the following values with the Service Principal information from the previous step and redeploy.
```json
@@ -124,7 +124,7 @@ You can use `iotedge` command line tool to check the status and logs of the runn
## Collect log files with the diagnostics container
-Spatial analysis generates Docker debugging logs that you can use to diagnose runtime issues, or include in support tickets. The spatial analysis diagnostics module is available in the Microsoft Container Registry for you to download. In the manifest deployment file for your [Azure Stack Edge Device](https://go.microsoft.com/fwlink/?linkid=2142179) or other [desktop machine](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest_for_non_ASE_devices.json), look for the *diagnostics* module.
+Spatial analysis generates Docker debugging logs that you can use to diagnose runtime issues, or include in support tickets. The spatial analysis diagnostics module is available in the Microsoft Container Registry for you to download. In the manifest deployment file for your [Azure Stack Edge Device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) look for the *diagnostics* module.
In the "env" section add the following configuration:
@@ -183,13 +183,13 @@ It can also be set through the IoT Edge Module Twin document either globally, fo
> The `diagnostics` module does not affect the logging content; it only assists in collecting, filtering, and uploading existing logs.
> You must have Docker API version 1.40 or higher to use this module.
-The sample deployment manifest file for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179) or other [desktop machine](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest_for_non_ASE_devices.json) includes a module named `diagnostics` that collects and uploads logs. This module is disabled by default and should be enabled through the IoT Edge module configuration when you need to access logs.
+The sample deployment manifest file for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) includes a module named `diagnostics` that collects and uploads logs. This module is disabled by default and should be enabled through the IoT Edge module configuration when you need to access logs.
The `diagnostics` collection is on-demand, controlled via an IoT Edge direct method, and can send logs to Azure Blob Storage.

### Configure diagnostics upload targets
-From the IoT Edge portal, select your device and then the **diagnostics** module. In the sample Deployment manifest file for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179) or other [desktop machines](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest_for_non_ASE_devices.json), look for the **Environment Variables** section for diagnostics, named `env`, and add the following information:
+From the IoT Edge portal, select your device and then the **diagnostics** module. In the sample Deployment manifest file for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machines](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) look for the **Environment Variables** section for diagnostics, named `env`, and add the following information:
**Configure Upload to Azure Blob Storage**
@@ -401,4 +401,4 @@ kubectl logs <pod-name> -n <namespace> --all-containers
* [Deploy a People Counting web application](spatial-analysis-web-app.md)
* [Configure spatial analysis operations](./spatial-analysis-operations.md)
* [Camera placement guide](spatial-analysis-camera-placement.md)
-* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
\ No newline at end of file
+* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/spatial-analysis-operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-operations.md
@@ -8,7 +8,7 @@ manager: nitinme
ms.service: cognitive-services ms.subservice: computer-vision ms.topic: conceptual
-ms.date: 09/01/2020
+ms.date: 01/12/2021
ms.author: aahi ---
@@ -20,9 +20,9 @@ The spatial analysis container implements the following operations:
| Operation Identifier| Description|
|---------|---------|
-| cognitiveservices.vision.spatialanalysis-personcount | Counts people in a designated zone in the camera's field of view. <br> Emits an initial _personCountEvent_ event and then _personCountEvent_ events when the count changes. |
+| cognitiveservices.vision.spatialanalysis-personcount | Counts people in a designated zone in the camera's field of view. The zone must be fully covered by a single camera in order for PersonCount to record an accurate total. <br> Emits an initial _personCountEvent_ event and then _personCountEvent_ events when the count changes. |
| cognitiveservices.vision.spatialanalysis-personcrossingline | Tracks when a person crosses a designated line in the camera's field of view. <br>Emits a _personLineEvent_ event when the person crosses the line and provides directional info.
-| cognitiveservices.vision.spatialanalysis-personcrossingpolygon | Tracks when a person crosses a designated line in the camera's field of view. <br> Emits a _personLineEvent_ event when the person crosses the zone and provides directional info. |
+| cognitiveservices.vision.spatialanalysis-personcrossingpolygon | Emits a _personZoneEnterExitEvent_ event when a person enters or exits the zone and provides directional info with the numbered side of the zone that was crossed. Emits a _personZoneDwellTimeEvent_ when the person exits the zone and provides directional info as well as the number of milliseconds the person spent inside the zone. |
| cognitiveservices.vision.spatialanalysis-persondistance | Tracks when people violate a distance rule. <br> Emits a _personDistanceEvent_ periodically with the location of each distance violation. |

All of the above operations are also available in a `.debug` version, which has the capability to visualize the video frames as they are being processed. You will need to run `xhost +` on the host computer to enable the visualization of video frames and events.
@@ -31,7 +31,7 @@ All above the operations are also available in the `.debug` version, which have
|---------|---------|
| cognitiveservices.vision.spatialanalysis-personcount.debug | Counts people in a designated zone in the camera's field of view. <br> Emits an initial _personCountEvent_ event and then _personCountEvent_ events when the count changes. |
| cognitiveservices.vision.spatialanalysis-personcrossingline.debug | Tracks when a person crosses a designated line in the camera's field of view. <br>Emits a _personLineEvent_ event when the person crosses the line and provides directional info.
-| cognitiveservices.vision.spatialanalysis-personcrossingpolygon.debug | Tracks when a person crosses a designated line in the camera's field of view. <br> Emits a _personLineEvent_ event when the person crosses the zone and provides directional info. |
+| cognitiveservices.vision.spatialanalysis-personcrossingpolygon.debug | Emits a _personZoneEnterExitEvent_ event when a person enters or exits the zone and provides directional info with the numbered side of the zone that was crossed. Emits a _personZoneDwellTimeEvent_ when the person exits the zone and provides directional info as well as the number of milliseconds the person spent inside the zone. |
| cognitiveservices.vision.spatialanalysis-persondistance.debug | Tracks when people violate a distance rule. <br> Emits a _personDistanceEvent_ periodically with the location of each distance violation. | Spatial analysis can also be run with [Live Video Analytics](../../media-services/live-video-analytics-edge/spatial-analysis-tutorial.md) as their Video AI module.
@@ -42,13 +42,13 @@ Spatial analysis can also be run with [Live Video Analytics](../../media-service
|---------|---------|
| cognitiveservices.vision.spatialanalysis-personcount.livevideoanalytics | Counts people in a designated zone in the camera's field of view. <br> Emits an initial _personCountEvent_ event and then _personCountEvent_ events when the count changes. |
| cognitiveservices.vision.spatialanalysis-personcrossingline.livevideoanalytics | Tracks when a person crosses a designated line in the camera's field of view. <br>Emits a _personLineEvent_ event when the person crosses the line and provides directional info.
-| cognitiveservices.vision.spatialanalysis-personcrossingpolygon.livevideoanalytics | Tracks when a person crosses a designated line in the camera's field of view. <br> Emits a _personLineEvent_ event when the person crosses the zone and provides directional info. |
+| cognitiveservices.vision.spatialanalysis-personcrossingpolygon.livevideoanalytics | Emits a _personZoneEnterExitEvent_ event when a person enters or exits the zone and provides directional info with the numbered side of the zone that was crossed. Emits a _personZoneDwellTimeEvent_ when the person exits the zone and provides directional info as well as the number of milliseconds the person spent inside the zone. |
| cognitiveservices.vision.spatialanalysis-persondistance.livevideoanalytics | Tracks when people violate a distance rule. <br> Emits a _personDistanceEvent_ periodically with the location of each distance violation. |

Live Video Analytics operations are also available in a `.debug` version (for example, cognitiveservices.vision.spatialanalysis-personcount.livevideoanalytics.debug), which has the capability to visualize the video frames as they are being processed. You will need to run `xhost +` on the host computer to enable the visualization of the video frames and events.

> [!IMPORTANT]
-> The computer vision AI models detect and locate human presence in video footage and output by using a bounding box around a human body. The AI models do not attempt to detect faces or discover the identities or demographics of individuals.
+> The computer vision AI models detect and locate human presence in video footage and output by using a bounding box around a human body. The AI models do not attempt to discover the identities or demographics of individuals.
These are the parameters required by each of these spatial analysis operations.
@@ -56,12 +56,14 @@ These are the parameters required by each of these spatial analysis operations.
|---------|---------|
| Operation ID | The Operation Identifier from the table above.|
| enabled | Boolean: true or false|
-| VIDEO_URL| The RTSP url for the camera device(Example: `rtsp://username:password@url`). Spatial analysis supports H.264 encoded stream either through RTSP, http, or mp4 |
+| VIDEO_URL| The RTSP URL for the camera device (for example, `rtsp://username:password@url`). Spatial analysis supports H.264 encoded streams through RTSP, http, or mp4. `VIDEO_URL` can be provided as a base64 string value obfuscated with AES encryption; if the video URL is obfuscated, then `KEY_ENV` and `IV_ENV` need to be provided as environment variables. A sample utility to generate keys and perform the encryption can be found [here](https://docs.microsoft.com/dotnet/api/system.security.cryptography.aesmanaged?view=net-5.0&preserve-view=true). |
| VIDEO_SOURCE_ID | A friendly name for the camera device or video stream. This will be returned with the event JSON output.|
| VIDEO_IS_LIVE| True for camera devices; false for recorded videos.|
| VIDEO_DECODE_GPU_INDEX| Which GPU to use to decode the video frame. By default it is 0. Should be the same as the `gpu_index` in other node configs like `VICA_NODE_CONFIG`, `DETECTOR_NODE_CONFIG`.|
+| INPUT_VIDEO_WIDTH | The input video/stream's frame width (for example, 1920). This is an optional field; if provided, the frame will be scaled to this dimension while preserving the aspect ratio.|
| DETECTOR_NODE_CONFIG | JSON indicating which GPU to run the detector node on. Should be in the following format: `"{ \"gpu_index\": 0 }",`|
| SPACEANALYTICS_CONFIG | JSON configuration for zone and line as outlined below.|
+| ENABLE_FACE_MASK_CLASSIFIER | `True` to enable detecting people wearing face masks in the video stream, `False` to disable it. By default this is disabled. Face mask detection requires the input video width parameter to be 1920 (`"INPUT_VIDEO_WIDTH": 1920`). The face mask attribute will not be returned if detected people are not facing the camera or are too far from it. Refer to the [camera placement](spatial-analysis-camera-placement.md) guide for more information.|
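Taken together, these parameters are supplied as environment variables on the spatial analysis module. A minimal sketch of the `env` section of a module in an IoT Edge deployment manifest might look like the following (all values are illustrative placeholders; use the sample deployment manifests linked above as the authoritative shape):

```json
"env": {
    "VIDEO_URL": {"value": "rtsp://username:password@url"},
    "VIDEO_SOURCE_ID": {"value": "camera 1"},
    "VIDEO_IS_LIVE": {"value": true},
    "VIDEO_DECODE_GPU_INDEX": {"value": 0},
    "INPUT_VIDEO_WIDTH": {"value": 1920},
    "ENABLE_FACE_MASK_CLASSIFIER": {"value": true},
    "DETECTOR_NODE_CONFIG": {"value": "{ \"gpu_index\": 0 }"}
}
```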
### Zone configuration for cognitiveservices.vision.spatialanalysis-personcount
@@ -70,14 +72,14 @@ These are the parameters required by each of these spatial analysis operations.
```json { "zones":[{
- "name": "lobbycamera"
+ "name": "lobbycamera",
"polygon": [[0.3,0.3], [0.3,0.9], [0.6,0.9], [0.6,0.3], [0.3,0.3]],
- "threshold": 50.00,
"events":[{ "type": "count", "config":{ "trigger": "event",
- "output_frequency": 1
+ "threshold": 16.00,
+ "focus": "footprint"
} }] }
@@ -92,6 +94,7 @@ These are the parameters required by each of these spatial analysis operations.
| `type` | string| For **cognitiveservices.vision.spatialanalysis-personcount** this should be `count`.|
| `trigger` | string| The type of trigger for sending an event. Supported values are `event` for sending events when the count changes, or `interval` for sending events periodically, irrespective of whether the count has changed or not.|
| `interval` | string| A time in seconds over which the person count will be aggregated before an event is fired. The operation will continue to analyze the scene at a constant rate and returns the most common count over that interval. The aggregation interval is applicable to both `event` and `interval`.|
+| `focus` | string| The point location within the person's bounding box used to calculate events. The value of `focus` can be `footprint` (the person's footprint), `bottom_center` (the bottom center of the person's bounding box), or `center` (the center of the person's bounding box).|
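As a point of comparison with the `event` trigger shown above, a sketch of a SPACEANALYTICS_CONFIG that uses the `interval` trigger might look like this (the zone geometry and the 60-second interval are illustrative assumptions):

```json
{
    "zones":[{
       "name": "lobbycamera",
       "polygon": [[0.3,0.3], [0.3,0.9], [0.6,0.9], [0.6,0.3], [0.3,0.3]],
       "events":[{
          "type": "count",
          "config":{
             "trigger": "interval",
             "interval": "60",
             "threshold": 16.00,
             "focus": "footprint"
          }
       }]
    }]
}
```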
### Line configuration for cognitiveservices.vision.spatialanalysis-personcrossingline
@@ -99,20 +102,31 @@ This is an example of a JSON input for the SPACEANALYTICS_CONFIG parameter that
```json {
-"lines":[{
- "name": "doorcamera"
- "line": {
- "start": {"x": 0, "y": 0.5},
- "end": {"x": 1, "y": 0.5}
- },
- "threshold": 50.00,
- "events":[{
- "type": "linecrossing",
- "config":{
- "trigger": "event"
- }
- }]
- }]
+ "lines": [
+ {
+ "name": "doorcamera",
+ "line": {
+ "start": {
+ "x": 0,
+ "y": 0.5
+ },
+ "end": {
+ "x": 1,
+ "y": 0.5
+ }
+ },
+ "events": [
+ {
+ "type": "linecrossing",
+ "config": {
+ "trigger": "event",
+ "threshold": 16.00,
+ "focus": "footprint"
+ }
+ }
+ ]
+ }
+ ]
} ```
@@ -126,6 +140,7 @@ This is an example of a JSON input for the SPACEANALYTICS_CONFIG parameter that
| `threshold` | float| Events are egressed when the confidence of the AI models is greater than or equal to this value. |
| `type` | string| For **cognitiveservices.vision.spatialanalysis-personcrossingline** this should be `linecrossing`.|
|`trigger`|string|The type of trigger for sending an event.<br>Supported Values: "event": fire when someone crosses the line.|
+| `focus` | string| The point location within the person's bounding box used to calculate events. The value of `focus` can be `footprint` (the person's footprint), `bottom_center` (the bottom center of the person's bounding box), or `center` (the center of the person's bounding box).|
### Zone configuration for cognitiveservices.vision.spatialanalysis-personcrossingpolygon
@@ -133,17 +148,31 @@ This is an example of a JSON input for the SPACEANALYTICS_CONFIG parameter that
```json {
-"zones":[{
- "name": "queuecamera"
- "polygon": [[0.3,0.3], [0.3,0.9], [0.6,0.9], [0.6,0.3], [0.3,0.3]],
- "threshold": 50.00,
- "events":[{
- "type": "zone_crossing",
- "config":{
- "trigger": "event"
- }
- }]
- }]
+"zones":[
+ {
+ "name": "queuecamera",
+ "polygon": [[0.3,0.3], [0.3,0.9], [0.6,0.9], [0.6,0.3], [0.3,0.3]],
+ "events":[{
+ "type": "zonecrossing",
+ "config":{
+ "trigger": "event",
+ "threshold": 48.00,
+ "focus": "footprint"
+ }
+ }]
+ },
+ {
+ "name": "queuecamera1",
+ "polygon": [[0.3,0.3], [0.3,0.9], [0.6,0.9], [0.6,0.3], [0.3,0.3]],
+ "events":[{
+ "type": "zonedwelltime",
+ "config":{
+ "trigger": "event",
+ "threshold": 16.00,
+ "focus": "footprint"
+ }
+ }]
+ }]
} ```
@@ -153,8 +182,9 @@ This is an example of a JSON input for the SPACEANALYTICS_CONFIG parameter that
| `name` | string| Friendly name for this zone.|
| `polygon` | list| Each value pair represents the x,y coordinates of a vertex of the polygon. The polygon represents the areas in which people are tracked or counted. The float values represent the position of the vertex relative to the top-left corner. To calculate the absolute x, y values, multiply these values by the frame size.|
| `threshold` | float| Events are egressed when the confidence of the AI models is greater than or equal to this value. |
-| `type` | string| For **cognitiveservices.vision.spatialanalysis-personcrossingpolygon** this should be `enter` or `exit`.|
+| `type` | string| For **cognitiveservices.vision.spatialanalysis-personcrossingpolygon** this should be `zonecrossing` or `zonedwelltime`.|
| `trigger`|string|The type of trigger for sending an event<br>Supported Values: "event": fire when someone enters or exits the zone.|
+| `focus` | string| The point location within the person's bounding box used to calculate events. The value of `focus` can be `footprint` (the person's footprint), `bottom_center` (the bottom center of the person's bounding box), or `center` (the center of the person's bounding box).|
### Zone configuration for cognitiveservices.vision.spatialanalysis-persondistance
@@ -163,19 +193,20 @@ This is an example of a JSON input for the SPACEANALYTICS_CONFIG parameter that
```json { "zones":[{
- "name": "lobbycamera",
- "polygon": [[0.3,0.3], [0.3,0.9], [0.6,0.9], [0.6,0.3], [0.3,0.3]],
- "threshold": 35.00,
- "events":[{
- "type": "persondistance",
- "config":{
- "trigger": "event",
- "output_frequency":1,
- "minimum_distance_threshold":6.0,
- "maximum_distance_threshold":35.0
- }
- }]
- }]
+ "name": "lobbycamera",
+ "polygon": [[0.3,0.3], [0.3,0.9], [0.6,0.9], [0.6,0.3], [0.3,0.3]],
+ "events":[{
+ "type": "persondistance",
+ "config":{
+ "trigger": "event",
+ "output_frequency":1,
+ "minimum_distance_threshold":6.0,
+ "maximum_distance_threshold":35.0,
+ "threshold": 16.00,
+ "focus": "footprint"
+ }
+ }]
+ }]
} ```
@@ -191,6 +222,7 @@ This is an example of a JSON input for the SPACEANALYTICS_CONFIG parameter that
| `output_frequency` | int | The rate at which events are egressed. When `output_frequency` = X, every Xth event is egressed, for example, `output_frequency` = 2 means every other event is output. The `output_frequency` is applicable to both `event` and `interval`.|
| `minimum_distance_threshold` | float| A distance in feet that will trigger a "TooClose" event when people are less than that distance apart.|
| `maximum_distance_threshold` | float| A distance in feet that will trigger a "TooFar" event when people are greater than that distance apart.|
+| `focus` | string| The point location within the person's bounding box used to calculate events. The value of `focus` can be `footprint` (the person's footprint), `bottom_center` (the bottom center of the person's bounding box), or `center` (the center of the person's bounding box).|
This is an example of a JSON input for the DETECTOR_NODE_CONFIG parameter that configures a **cognitiveservices.vision.spatialanalysis-persondistance** zone.
@@ -205,8 +237,17 @@ This is an example of a JSON input for the DETECTOR_NODE_CONFIG parameter that c
|---------|---------|---------|
| `gpu_index` | string| The GPU index on which this operation will run.|
| `do_calibration` | string | Indicates that calibration is turned on. `do_calibration` must be true for **cognitiveservices.vision.spatialanalysis-persondistance** to function properly.|
-See the [camera placement](spatial-analysis-camera-placement.md) guidelines to learn about zone and line configurations.
+| `enable_recalibration` | bool | Indicates whether automatic recalibration is turned on. Default is `true`.|
+| `calibration_quality_check_frequency_seconds` | int | Minimum number of seconds between each quality check to determine whether or not recalibration is needed. Default is `86400` (24 hours). Only used when `enable_recalibration=True`.|
+| `calibration_quality_check_sampling_num` | int | Number of randomly selected stored data samples to use per quality check error measurement. Default is `80`. Only used when `enable_recalibration=True`.|
+| `calibration_quality_check_sampling_times` | int | Number of times error measurements will be performed on different sets of randomly selected data samples per quality check. Default is `5`. Only used when `enable_recalibration=True`.|
+| `calibration_quality_check_sample_collect_frequency_seconds` | int | Minimum number of seconds between collecting new data samples for recalibration and quality checking. Default is `300` (5 minutes). Only used when `enable_recalibration=True`.|
+| `calibration_quality_check_one_round_sample_collect_num` | int | Minimum number of new data samples to collect per round of sample collection. Default is `10`. Only used when `enable_recalibration=True`.|
+| `calibration_quality_check_queue_max_size` | int | Maximum number of data samples to store when camera model is calibrated. Default is `1000`. Only used when `enable_recalibration=True`.|
+| `recalibration_score` | int | Maximum quality threshold to begin recalibration. Default is `75`. Only used when `enable_recalibration=True`. Calibration quality is calculated based on an inverse relationship with image target reprojection error. Given detected targets in 2D image frames, the targets are projected into 3D space and re-projected back to the 2D image frame using existing camera calibration parameters. The reprojection error is measured by the average distances between the detected targets and the re-projected targets.|
+| `enable_breakpad`| bool | Indicates whether to enable breakpad, which is used to generate a crash dump for debug use. It is `false` by default. If you set it to `true`, you also need to add `"CapAdd": ["SYS_PTRACE"]` in the `HostConfig` part of the container's `createOptions`. By default, the crash dump is uploaded to the [RealTimePersonTracking](https://appcenter.ms/orgs/Microsoft-Organization/apps/RealTimePersonTracking/crashes/errors?version=&appBuild=&period=last90Days&status=&errorType=all&sortCol=lastError&sortDir=desc) AppCenter app. If you want the crash dumps to be uploaded to your own AppCenter app, you can override the environment variable `RTPT_APPCENTER_APP_SECRET` with your app's app secret.|
+
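Putting the calibration settings above together, a DETECTOR_NODE_CONFIG sketch that turns on calibration and automatic recalibration might look like the following (the values shown are the documented defaults, and any omitted parameters fall back to their defaults):

```json
{
    "gpu_index": 0,
    "do_calibration": true,
    "enable_recalibration": true,
    "calibration_quality_check_frequency_seconds": 86400,
    "recalibration_score": 75
}
```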
+See the [camera placement](spatial-analysis-camera-placement.md) guidelines to learn about zone and line configurations.
## Spatial analysis Operation Output
@@ -240,7 +281,7 @@ Sample JSON for an event output by this operation.
"height": 342, "frameId": "1400", "cameraCalibrationInfo": {
- "status": "Complete",
+ "status": "Calibrated",
"cameraHeight": 10.306597709655762, "focalLength": 385.3199462890625, "tiltupAngle": 1.0969393253326416
@@ -269,7 +310,11 @@ Sample JSON for an event output by this operation.
"x": 0.0, "y": 0.0 },
- "metadataType": ""
+ "metadata": {
+ "attributes": {
+ "face_Mask": 0.99
+ }
+ }
}, { "type": "person",
@@ -292,8 +337,12 @@ Sample JSON for an event output by this operation.
"x": 0.0, "y": 0.0 },
- "metadataType": ""
- }
+ "metadata":{
+ "attributes": {
+ "face_noMask": 0.99
+ }
+ }
+ }
], "schemaVersion": "1.0" }
@@ -306,8 +355,6 @@ Sample JSON for an event output by this operation.
| `detectionsId` | array| Array of size 1 containing the unique identifier of the person detection that triggered this event|
| `properties` | collection| Collection of values|
| `trackingId` | string| Unique identifier of the person detected|
-| `status` | string| 'Enter' or 'Exit'|
-| `side` | int| The number of the side of the polygon that the person crossed|
| `zone` | string | The "name" field of the polygon that represents the zone that was crossed|
| `trigger` | string| The trigger type is 'event' or 'interval' depending on the value of `trigger` in SPACEANALYTICS_CONFIG|
@@ -319,6 +366,8 @@ Sample JSON for an event output by this operation.
| `type` | string| Type of region|
| `points` | collection| Top left and bottom right points when the region type is RECTANGLE |
| `confidence` | float| Algorithm confidence|
+| `face_Mask` | float | The attribute confidence value, in the range (0-1), indicating that the detected person is wearing a face mask |
+| `face_noMask` | float | The attribute confidence value, in the range (0-1), indicating that the detected person is **not** wearing a face mask |
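Reading the mask attributes out of the event JSON is straightforward. Below is a minimal, hypothetical Python sketch (the payload is trimmed to the fields discussed above, and the `mask_status` helper and its 0.5 threshold are illustrative choices, not part of the container's output contract):

```python
import json

# Hypothetical event payload, trimmed to the detection fields discussed above.
sample = json.loads("""
{
  "detections": [
    {"type": "person", "confidence": 0.90,
     "metadata": {"attributes": {"face_Mask": 0.99}}},
    {"type": "person", "confidence": 0.87,
     "metadata": {"attributes": {"face_noMask": 0.99}}}
  ]
}
""")

def mask_status(detection, threshold=0.5):
    """Classify a detection as 'mask', 'noMask', or 'unknown'
    based on the face mask attribute confidences."""
    attrs = detection.get("metadata", {}).get("attributes", {})
    if attrs.get("face_Mask", 0.0) >= threshold:
        return "mask"
    if attrs.get("face_noMask", 0.0) >= threshold:
        return "noMask"
    return "unknown"

statuses = [mask_status(d) for d in sample["detections"]]
print(statuses)  # ['mask', 'noMask']
```

Note that neither attribute is present when the classifier is disabled or the person is too far from the camera, which is why the helper falls back to `'unknown'`.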
| SourceInfo Field Name | Type| Description|
|---------|---------|---------|
@@ -328,7 +377,7 @@ Sample JSON for an event output by this operation.
| `height` | int | Video frame height|
| `frameId` | int | Frame identifier|
| `cameraCalibrationInfo` | collection | Collection of values|
-| `status` | string | Indicates if camera calibration to ground plane is "Complete"|
+| `status` | string | The status of the calibration in the format `state[;progress description]`. The state can be `Calibrating`, `Recalibrating` (if recalibration is enabled), or `Calibrated`. The progress description part is only valid in the `Calibrating` and `Recalibrating` states, where it shows the progress of the current calibration process.|
| `cameraHeight` | float | The height of the camera above the ground in feet. This is inferred from auto-calibration. |
| `focalLength` | float | The focal length of the camera in pixels. This is inferred from auto-calibration. |
| `tiltUpAngle` | float | The camera tilt angle from vertical. This is inferred from auto-calibration.|
@@ -388,7 +437,11 @@ Sample JSON for detections output by this operation.
] }, "confidence": 0.9005028605461121,
- "metadataType": ""
+ "metadata": {
+ "attributes": {
+ "face_Mask": 0.99
+ }
+ }
} ], "schemaVersion": "1.0"
@@ -412,6 +465,8 @@ Sample JSON for detections output by this operation.
| `type` | string| Type of region|
| `points` | collection| Top left and bottom right points when the region type is RECTANGLE |
| `confidence` | float| Algorithm confidence|
+| `face_Mask` | float | The attribute confidence value, in the range (0-1), indicating that the detected person is wearing a face mask |
+| `face_noMask` | float | The attribute confidence value, in the range (0-1), indicating that the detected person is **not** wearing a face mask |
| SourceInfo Field Name | Type| Description|
|---------|---------|---------|
@@ -423,25 +478,83 @@ Sample JSON for detections output by this operation.
> [!IMPORTANT]
-> The AI model detects a person irrespective of whether the person is facing towards or away from the camera. The AI model doesn't run face detection or recognition and doesn't emit any biometric information.
+> The AI model detects a person irrespective of whether the person is facing towards or away from the camera. The AI model doesn't run face recognition and doesn't emit any biometric information.
### JSON format for cognitiveservices.vision.spatialanalysis-personcrossingpolygon AI Insights
-Sample JSON for detections output by this operation.
+Sample JSON for detections output by this operation with a `zonecrossing` type SPACEANALYTICS_CONFIG.
```json { "events": [ { "id": "f095d6fe8cfb4ffaa8c934882fb257a5",
- "type": "personZoneEvent",
+ "type": "personZoneEnterExitEvent",
"detectionIds": [ "afcc2e2a32a6480288e24381f9c5d00e" ], "properties": { "trackingId": "afcc2e2a32a6480288e24381f9c5d00e", "status": "Enter",
- "side": ""
+ "side": "1"
+ },
+ "zone": "queuecamera"
+ }
+ ],
+ "sourceInfo": {
+ "id": "camera_id",
+ "timestamp": "2020-08-24T06:15:09.680Z",
+ "width": 608,
+ "height": 342,
+ "frameId": "428",
+ "imagePath": ""
+ },
+ "detections": [
+ {
+ "type": "person",
+ "id": "afcc2e2a32a6480288e24381f9c5d00e",
+ "region": {
+ "type": "RECTANGLE",
+ "points": [
+ {
+ "x": 0.8135572734631991,
+ "y": 0.6653949670624315
+ },
+ {
+ "x": 0.9937645761590255,
+ "y": 0.9925406829655519
+ }
+ ]
+ },
+ "confidence": 0.6267998814582825,
+ "metadata": {
+ "attributes": {
+ "face_Mask": 0.99
+ }
+ }
+
+ }
+ ],
+ "schemaVersion": "1.0"
+}
+```
+
+Sample JSON for detections output by this operation with a `zonedwelltime` type SPACEANALYTICS_CONFIG.
+
+```json
+{
+ "events": [
+ {
+ "id": "f095d6fe8cfb4ffaa8c934882fb257a5",
+ "type": "personZoneDwellTimeEvent",
+ "detectionIds": [
+ "afcc2e2a32a6480288e24381f9c5d00e"
+ ],
+ "properties": {
+ "trackingId": "afcc2e2a32a6480288e24381f9c5d00e",
+ "status": "Exit",
+ "side": "1",
+ "durationMs": 7132.0
}, "zone": "queuecamera" }
@@ -482,11 +595,13 @@ Sample JSON for detections output by this operation.
| Event Field Name | Type| Description| |---------|---------|---------| | `id` | string| Event ID|
-| `type` | string| Event type|
+| `type` | string| Event type. The value can be either _personZoneDwellTimeEvent_ or _personZoneEnterExitEvent_|
| `detectionsId` | array| Array of size 1 containing the unique identifier of the person detection that triggered this event|
| `properties` | collection| Collection of values|
| `trackingId` | string| Unique identifier of the person detected|
| `status` | string| Direction of polygon crossings, either 'Enter' or 'Exit'|
+| `side` | int| The number of the side of the polygon that the person crossed. Each side is a numbered edge between two vertices of the polygon that represents your zone. The edge between the first two vertices of the polygon represents the first side.|
+| `durationMs` | int | The number of milliseconds the person spent in the zone. This field is provided when the event type is _personZoneDwellTimeEvent_.|
| `zone` | string | The "name" field of the polygon that represents the zone that was crossed|

| Detections Field Name | Type| Description|
@@ -497,6 +612,8 @@ Sample JSON for detections output by this operation.
| `type` | string| Type of region|
| `points` | collection| Top left and bottom right points when the region type is RECTANGLE |
| `confidence` | float| Algorithm confidence|
+| `face_Mask` | float | The attribute confidence value, in the range (0-1), indicating that the detected person is wearing a face mask |
+| `face_noMask` | float | The attribute confidence value, in the range (0-1), indicating that the detected person is **not** wearing a face mask |
### JSON format for cognitiveservices.vision.spatialanalysis-persondistance AI Insights
@@ -531,7 +648,7 @@ Sample JSON for detections output by this operation.
"height": 342, "frameId": "1199", "cameraCalibrationInfo": {
- "status": "Complete",
+ "status": "Calibrated",
"cameraHeight": 12.9940824508667, "focalLength": 401.2800598144531, "tiltupAngle": 1.057669997215271
@@ -613,7 +730,14 @@ Sample JSON for detections output by this operation.
| `type` | string| Type of region|
| `points` | collection| Top left and bottom right points when the region type is RECTANGLE |
| `confidence` | float| Algorithm confidence|
-| `centerGroundPoint` | 2 float values| `x`, `y` values with the coordinates of the person's inferred location on the ground in feet. `x` is distance from the camera perpendicular to the camera image plane projected on the ground in feet. `y` is distance from the camera parallel to the image plane projected on the ground in feet.|
+| `centerGroundPoint` | 2 float values| `x`, `y` values with the coordinates of the person's inferred location on the ground in feet. `x` and `y` are coordinates on the floor plane, assuming the floor is level. The camera's location is the origin. |
+
+When calculating `centerGroundPoint`, `x` is the distance from the camera to the person along a line perpendicular to the camera image plane. `y` is the distance from the camera to the person along a line parallel to the camera image plane.
+
+![Example center ground point](./media/spatial-analysis/x-y-chart.png)
+
+In this example, `centerGroundPoint` is `{x: 4, y: 5}`. This means there's a person 4 feet in front of the camera and 5 feet to its right, when viewing the room from above.
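Under these conventions, the separation between two people on the floor plane is just the Euclidean distance between their `centerGroundPoint` values. A minimal illustrative Python sketch (the `ground_distance` helper and the sample points are hypothetical, not part of the container's API):

```python
import math

def ground_distance(p1, p2):
    """Euclidean distance in feet between two centerGroundPoint values,
    both expressed in the camera-origin floor-plane coordinates above."""
    return math.hypot(p1["x"] - p2["x"], p1["y"] - p2["y"])

# Two hypothetical detections on the floor plane.
a = {"x": 4.0, "y": 5.0}
b = {"x": 4.0, "y": 11.0}

d = ground_distance(a, b)
print(d)  # 6.0
# With minimum_distance_threshold = 6.0, a "TooClose" event would fire
# only when the computed distance drops below 6.0 feet.
```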
+
| SourceInfo Field Name | Type| Description|
|---------|---------|---------|
@@ -623,7 +747,7 @@ Sample JSON for detections output by this operation.
| `height` | int | Video frame height|
| `frameId` | int | Frame identifier|
| `cameraCalibrationInfo` | collection | Collection of values|
-| `status` | string | Indicates if camera calibration to ground plane is "Complete"|
+| `status` | string | The status of the calibration in the format `state[;progress description]`. The state can be `Calibrating`, `Recalibrating` (if recalibration is enabled), or `Calibrated`. The progress description part is only valid in the `Calibrating` and `Recalibrating` states, where it shows the progress of the current calibration process.|
| `cameraHeight` | float | The height of the camera above the ground in feet. This is inferred from auto-calibration. |
| `focalLength` | float | The focal length of the camera in pixels. This is inferred from auto-calibration. |
| `tiltUpAngle` | float | The camera tilt angle from vertical. This is inferred from auto-calibration.|
@@ -639,84 +763,190 @@ You may want to integrate spatial analysis detection or events into your applica
## Deploying spatial analysis operations at scale (multiple cameras)
-In order to get the best performance and utilization of the GPUs, you can deploy any spatial analysis operations on multiple cameras using graph instances. Below is a sample for running the `cognitiveservices.vision.spatialanalysis-personcount` operation on five cameras.
+In order to get the best performance and utilization of the GPUs, you can deploy any spatial analysis operations on multiple cameras using graph instances. Below is a sample for running the `cognitiveservices.vision.spatialanalysis-personcrossingline` operation on fifteen cameras.
```json
- "properties.desired": {
+ "properties.desired": {
"globalSettings": { "PlatformTelemetryEnabled": false, "CustomerTelemetryEnabled": true }, "graphs": {
- "personcount": {
- "operationId": "cognitiveservices.vision.spatialanalysis-personcount",
- "version": 1,
- "enabled": true,
- "sharedNodes": {
- "shared_detector1": {
- "node": "PersonCountGraph.detector",
- "parameters": {
- "DETECTOR_NODE_CONFIG": "{ \"gpu_index\": 0, \"batch_size\": 5}",
- }
- }
- },
- "parameters": {
- "VIDEO_DECODE_GPU_INDEX": 0,
- "VIDEO_IS_LIVE": true
- },
- "instances": {
- "1": {
- "sharedNodeMap": {
- "PersonCountGraph/detector": "shared_detector1"
- },
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL for camera 1>",
- "VIDEO_SOURCE_ID": "camera 1",
- "SPACEANALYTICS_CONFIG": "{\"zones\":[{\"name\":\"zone5\",\"polygon\":[[0,0],[1,0],[0,1],[1,1],[0,0]],\"threshold\":50.0, \"events\":[{\"type\":\"count\", \"output_frequency\": 1}]}]}"
- }
- },
- "2": {
- "sharedNodeMap": {
- "PersonCountGraph/detector": "shared_detector1"
- },
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL for camera 2>",
- "VIDEO_SOURCE_ID": "camera 2",
- "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
- }
- },
- "3": {
- "sharedNodeMap": {
- "PersonCountGraph/detector": "shared_detector1"
- },
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL for camera 3>",
- "VIDEO_SOURCE_ID": "camera 3",
- "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
- }
- },
- "4": {
- "sharedNodeMap": {
- "PersonCountGraph/detector": "shared_detector1"
- },
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL for camera 4>",
- "VIDEO_SOURCE_ID": "camera 4",
- "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
- }
- },
- "5": {
- "sharedNodeMap": {
- "PersonCountGraph/detector": "shared_detector1"
- },
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL for camera 5>",
- "VIDEO_SOURCE_ID": "camera 5",
- "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
- }
- }
- }
+ "personzonelinecrossing": {
+ "operationId": "cognitiveservices.vision.spatialanalysis-personcrossingline",
+ "version": 1,
+ "enabled": true,
+ "sharedNodes": {
+ "shared_detector0": {
+ "node": "PersonCrossingLineGraph.detector",
+ "parameters": {
+ "DETECTOR_NODE_CONFIG": "{ \"gpu_index\": 0, \"batch_size\": 7, \"do_calibration\": true}"
+ }
+ },
+ "shared_detector1": {
+ "node": "PersonCrossingLineGraph.detector",
+ "parameters": {
+ "DETECTOR_NODE_CONFIG": "{ \"gpu_index\": 0, \"batch_size\": 8, \"do_calibration\": true}"
+ }
+ }
+ },
+ "parameters": {
+ "VIDEO_DECODE_GPU_INDEX": 0,
+ "VIDEO_IS_LIVE": true
+ },
+ "instances": {
+ "1": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0"
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 1>",
+ "VIDEO_SOURCE_ID": "camera 1",
+ "SPACEANALYTICS_CONFIG": "{\"zones\":[{\"name\":\"queue\",\"polygon\":[[0,0],[1,0],[0,1],[1,1],[0,0]]}]}"
+ }
+ },
+ "2": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0"
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 2>",
+ "VIDEO_SOURCE_ID": "camera 2",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "3": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0"
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 3>",
+ "VIDEO_SOURCE_ID": "camera 3",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "4": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0"
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 4>",
+ "VIDEO_SOURCE_ID": "camera 4",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "5": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0"
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 5>",
+ "VIDEO_SOURCE_ID": "camera 5",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "6": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0"
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 6>",
+ "VIDEO_SOURCE_ID": "camera 6",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "7": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0"
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 7>",
+ "VIDEO_SOURCE_ID": "camera 7",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "8": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector1"
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 8>",
+ "VIDEO_SOURCE_ID": "camera 8",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "9": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector1"
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 9>",
+ "VIDEO_SOURCE_ID": "camera 9",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "10": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector1"
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 10>",
+ "VIDEO_SOURCE_ID": "camera 10",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "11": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector1"
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 11>",
+ "VIDEO_SOURCE_ID": "camera 11",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "12": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector1"
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 12>",
+ "VIDEO_SOURCE_ID": "camera 12",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "13": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector1"
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 13>",
+ "VIDEO_SOURCE_ID": "camera 13",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "14": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector1"
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 14>",
+ "VIDEO_SOURCE_ID": "camera 14",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "15": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector1"
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 15>",
+ "VIDEO_SOURCE_ID": "camera 15",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ }
}
+ }
} } ```
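The fifteen `instances` entries in the sample above differ only in the camera number and in which shared detector they map to, so they can be generated rather than hand-written. The sketch below is a minimal illustration under the sample's assumptions (cameras 1-7 on `shared_detector0` with batch size 7, cameras 8-15 on `shared_detector1` with batch size 8); the RTSP URLs and zone configs remain placeholders to fill in.

```python
import json

def build_instances(num_cameras=15, split=7):
    """Generate the "instances" section of the module twin shown above.

    Cameras 1..split map to shared_detector0; the rest to shared_detector1,
    mirroring the 7/8 batch-size split in the sample. URLs are placeholders.
    """
    instances = {}
    for i in range(1, num_cameras + 1):
        detector = "shared_detector0" if i <= split else "shared_detector1"
        instances[str(i)] = {
            "sharedNodeMap": {"PersonCrossingLineGraph/detector": detector},
            "parameters": {
                "VIDEO_URL": f"<Replace RTSP URL for camera {i}>",
                "VIDEO_SOURCE_ID": f"camera {i}",
                "SPACEANALYTICS_CONFIG": "<Replace the zone config value>",
            },
        }
    return instances

# Print the first part of the generated JSON for inspection.
print(json.dumps(build_instances(), indent=2)[:300])
```

The generated dictionary can then be merged into the `graphs` section of the deployment manifest before pushing the module twin.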
@@ -729,4 +959,4 @@ In order to get the best performance and utilization of the GPUs, you can deploy
* [Deploy a People Counting web application](spatial-analysis-web-app.md) * [Logging and troubleshooting](spatial-analysis-logging.md) * [Camera placement guide](spatial-analysis-camera-placement.md)
-* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
\ No newline at end of file
+* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/spatial-analysis-web-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-web-app.md
@@ -8,7 +8,7 @@ manager: nitinme
ms.service: cognitive-services ms.subservice: computer-vision ms.topic: conceptual
-ms.date: 11/06/2020
+ms.date: 01/12/2021
ms.author: aahi ---
@@ -58,12 +58,12 @@ az iot hub device-identity create --hub-name "<IoT Hub Name>" --device-id "<Edge
### Deploy the container on Azure IoT Edge on the host computer
-Deploy the spatial analysis container as an IoT Module on the host computer, using the Azure CLI. The deployment process requires a deployment manifest file which outlines the required containers, variables, and configurations for your deployment. You can find a sample [Azure Stack Edge specific deployment manifest](https://github.com/Azure-Samples/cognitive-services-rest-api-samples/) as well as a [non-Azure Stack Edge specific deployment manifest](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest_for_non_ASE_devices.json) on GitHub, which include a basic deployment configuration for the *spatial-analysis* container.
+Deploy the spatial analysis container as an IoT Module on the host computer, using the Azure CLI. The deployment process requires a deployment manifest file which outlines the required containers, variables, and configurations for your deployment. You can find a sample [Azure Stack Edge specific deployment manifest](https://go.microsoft.com/fwlink/?linkid=2142179), [non-Azure Stack Edge specific deployment manifest](https://go.microsoft.com/fwlink/?linkid=2152189), and [Azure VM with GPU specific deployment manifest](https://go.microsoft.com/fwlink/?linkid=2152189) on GitHub, which include a basic deployment configuration for the *spatial-analysis* container.
Alternatively, you can use the Azure IoT extensions for Visual Studio Code to perform operations with your IoT hub. Go to [Deploy Azure IoT Edge Modules from Visual Studio Code](../../iot-edge/how-to-deploy-modules-vscode.md) to learn more. > [!NOTE]
-> The *spatial-analysis-telegraf* and *spatial-analysis-diagnostics* containers are optional. You may decide to remove them from the *DeploymentManifest.json* file. For more information see the [telemetry and troubleshooting](./spatial-analysis-logging.md) article. You can find two sample *DeploymentManifest.json* files on Github, for either a [Azure Stack Edge devices](https://go.microsoft.com/fwlink/?linkid=2142179) or another [Desktop machine](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest_for_non_ASE_devices.json)
+> The *spatial-analysis-telegraf* and *spatial-analysis-diagnostics* containers are optional. You may decide to remove them from the *DeploymentManifest.json* file. For more information see the [telemetry and troubleshooting](./spatial-analysis-logging.md) article. You can find three sample *DeploymentManifest.json* files on GitHub, for [Azure Stack Edge devices](https://go.microsoft.com/fwlink/?linkid=2142179), a [Desktop machine](https://go.microsoft.com/fwlink/?linkid=2152189), or an [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189).
### Set environment variables
@@ -181,7 +181,7 @@ Wait for setup to complete, and navigate to your resource in the Azure portal. G
* `IotHubConnectionString` ΓÇô The connection string to your Azure IoT Hub, this can be retrieved from the keys section of your Azure IoT Hub resource ![Configure Parameters](./media/spatial-analysis/solution-app-config-page.png)
-Once these 2 settings are added, click **Save**. Then click **Authentication/Authorization** in the left navigation menu, and update it with the desired level of authentication. We recommend Azure Active Director (Azure AD) express.
+Once these 2 settings are added, click **Save**. Then click **Authentication/Authorization** in the left navigation menu, and update it with the desired level of authentication. We recommend Azure Active Directory (Azure AD) express.
### Test the app
@@ -190,11 +190,11 @@ Go to the Azure Web App and verify the deployment was successful, and the web ap
![Test the deployment](./media/spatial-analysis/solution-app-output.png) ## Get the PersonCount source code
-If you'd like to view or modify the source code for this application, you can find it [on Github](https://github.com/Azure-Samples/cognitive-services-spatial-analysis).
+If you'd like to view or modify the source code for this application, you can find it [on GitHub](https://github.com/Azure-Samples/cognitive-services-spatial-analysis).
## Next steps * [Configure spatial analysis operations](./spatial-analysis-operations.md) * [Logging and troubleshooting](spatial-analysis-logging.md) * [Camera placement guide](spatial-analysis-camera-placement.md)
-* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
\ No newline at end of file
+* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/whats-new.md
@@ -8,7 +8,7 @@ manager: nitinme
ms.service: cognitive-services ms.subservice: computer-vision ms.topic: overview
-ms.date: 12/15/2020
+ms.date: 01/13/2021
ms.author: pafarley ---
@@ -16,6 +16,17 @@ ms.author: pafarley
Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with the service.
+## January 2021
+
+### Spatial analysis container update
+
+A new version of the [spatial analysis container](spatial-analysis-container.md) has been released with a new feature set. This Docker container lets you analyze real-time streaming video to understand spatial relationships between people and their movement through physical environments.
+
+* [Spatial analysis operations](spatial-analysis-operations.md) can now be configured to detect if a person is wearing a protective face covering such as a mask.
+ * A mask classifier can be enabled for the `personcount`, `personcrossingline`, and `personcrossingpolygon` operations by configuring the `ENABLE_FACE_MASK_CLASSIFIER` parameter.
+ * The attributes `face_mask` and `face_noMask` will be returned as metadata with a confidence score for each person detected in the video stream.
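A consumer of these events would compare the two attribute scores per detected person. The sketch below only illustrates reading the `face_mask`/`face_noMask` confidences; the payload shape (a `detections` list with an `attributes` array per person) is an assumption for illustration — the real schema is defined in the spatial analysis operations article.

```python
import json

# Hypothetical event payload -- the actual schema is documented in the
# spatial analysis operations article; this shape is assumed for illustration.
event = json.loads("""
{
  "detections": [
    {"id": "person1",
     "attributes": [
       {"label": "face_mask", "confidence": 0.92},
       {"label": "face_noMask", "confidence": 0.08}
     ]}
  ]
}
""")

def mask_confidences(detection):
    # Map attribute label -> confidence score for one detected person.
    return {a["label"]: a["confidence"] for a in detection["attributes"]}

for d in event["detections"]:
    scores = mask_confidences(d)
    # Treat the person as masked when face_mask outscores face_noMask.
    wearing_mask = scores.get("face_mask", 0) > scores.get("face_noMask", 0)
```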
+
## October 2020 ### Computer Vision API v3.1 GA
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/luis-how-to-publish-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-how-to-publish-app.md
@@ -3,13 +3,14 @@ title: Publish app - LUIS
titleSuffix: Azure Cognitive Services description: When you finish building and testing your active LUIS app, make it available to your client application by publishing it to the endpoint. services: cognitive-services-
+author: aahill
manager: nitinme
+ms.author: aahi
ms.custom: seodec18 ms.service: cognitive-services ms.subservice: language-understanding ms.topic: how-to
-ms.date: 05/17/2020
+ms.date: 01/12/2021
---
@@ -52,7 +53,7 @@ For example, for an app created on [www.luis.ai](https://www.luis.ai), if you cr
After you select the slot, configure the publish settings for: * Sentiment analysis
-* [Spelling correction](luis-tutorial-bing-spellcheck.md) - v2 prediction endpoint only
+* [Spelling correction](luis-tutorial-bing-spellcheck.md)
* Speech priming After you publish, these settings are available for review from the **Manage** section's **Publish settings** page. You can change the settings with every publish. If you cancel a publish, any changes you made during the publish are also canceled.
@@ -77,7 +78,32 @@ For more information about the JSON endpoint response with sentiment analysis, s
## Spelling correction
-[!INCLUDE [Not supported in V3 API prediction endpoint](./includes/v2-support-only.md)]
+The V3 prediction API now supports the Bing Spell Check API. You can add spell checking to your application by including the key for your Bing Search resource in the header of your requests. You can use an existing Bing resource if you already own one, or [create a new one](https://portal.azure.com/#create/Microsoft.BingSearch) to use this feature.
+
+|Header Key|Header Value|
+|--|--|
+|`mkt-bing-spell-check-key`|A key found in the **Keys and Endpoint** blade of your resource|
+
+Prediction output example for a misspelled query:
+
+```json
+{
+ "query": "bouk me a fliht to kayro",
+ "prediction": {
+ "alteredQuery": "book me a flight to cairo",
+ "topIntent": "book a flight",
+ "intents": {
+ "book a flight": {
+ "score": 0.9480589
+ },
+ "None": {
+ "score": 0.0332136229
+ }
+ },
+ "entities": {}
+ }
+}
+```
Corrections to spelling are made before the LUIS user utterance prediction. You can see any changes to the original utterance, including spelling, in the response.
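As a sketch of how this header fits into a prediction call: the snippet below assembles the request URL and headers. The region, app ID, and keys are placeholders, and the `slots/production/predict` URL shape is assumed from the standard V3 endpoint pattern; only the `mkt-bing-spell-check-key` header name is taken from the table above.

```python
from urllib.parse import urlencode

# Placeholders -- replace with your own values.
region = "westus"
app_id = "<your-app-id>"
luis_key = "<luis-prediction-key>"
bing_key = "<bing-resource-key>"

# Standard V3 prediction endpoint shape (assumed): .../slots/production/predict
query = urlencode({"query": "bouk me a fliht to kayro", "show-all-intents": "true"})
url = (f"https://{region}.api.cognitive.microsoft.com/luis/prediction/v3.0/"
       f"apps/{app_id}/slots/production/predict?{query}")

headers = {
    "Ocp-Apim-Subscription-Key": luis_key,
    "mkt-bing-spell-check-key": bing_key,  # enables spelling correction
}
```

The request itself would then be sent with any HTTP client; the response for a misspelled query should include the `alteredQuery` field shown above.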
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/luis-tutorial-bing-spellcheck https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-tutorial-bing-spellcheck.md
@@ -9,65 +9,48 @@ ms.custom: seodec18
ms.service: cognitive-services ms.subservice: language-understanding ms.topic: how-to
-ms.date: 11/19/2019
+ms.date: 01/12/2021
---
-# Correct misspelled words with Bing Spell Check
+# Correct misspelled words with a Bing Search resource
-You can integrate your LUIS app with [Bing Spell Check API V7](https://azure.microsoft.com/services/cognitive-services/spell-check/) to correct misspelled words in utterances before LUIS predicts the score and entities of the utterance.
-
-[!INCLUDE [Not supported in V3 API prediction endpoint](./includes/v2-support-only.md)]
+You can integrate your LUIS app with [Bing Search](https://ms.portal.azure.com/#create/Microsoft.BingSearch) to correct misspelled words in utterances before LUIS predicts the score and entities of the utterance.
## Create Endpoint key
-To create a Bing Spell Check resource in the Azure portal, follow these instructions:
+To create a Bing Search resource in the Azure portal, follow these instructions:
1. Log in to the [Azure portal](https://portal.azure.com). 2. Select **Create a resource** in the top left corner.
-3. In the search box, enter `Bing Spell Check API V7`.
-
- ![Search for Bing Spell Check API V7](./media/luis-tutorial-bing-spellcheck/portal-search.png)
-
-4. Select the service.
-
-5. An information panel appears to the right containing information including the Legal Notice. Select **Create** to begin the subscription creation process.
+3. In the search box, enter `Bing Search V7` and select the service.
-6. In the next panel, enter your service settings. Wait for service creation process to finish.
+4. An information panel appears to the right containing information including the Legal Notice. Select **Create** to begin the subscription creation process.
- ![Enter service settings](./media/luis-tutorial-bing-spellcheck/subscription-settings.png)
+ :::image type="content" source="./media/luis-tutorial-bing-spellcheck/bing-search-resource-portal.png" alt-text="Bing Spell Check API V7 resource":::
-7. Select **All resources** under the **Favorites** title on the left side navigation.
+5. In the next panel, enter your service settings. Wait for service creation process to finish.
-8. Select the new service. Its type is **Cognitive Services** and the location is **global**.
+6. After the resource is created, go to the **Keys and Endpoint** blade on the left.
-9. In the main panel, select **Keys** to see your new keys.
+7. Copy one of the keys to be added to the header of your prediction request. You will only need one of the two keys.
- ![Grab keys](./media/luis-tutorial-bing-spellcheck/grab-keys.png)
-
-10. Copy the first key. You only need one of the two keys.
+8. Add the key to `mkt-bing-spell-check-key` in the prediction request header.
<!-- ## Using the key in LUIS test panel There are two places in LUIS to use the key. The first is in the [test panel](luis-interactive-test.md#view-bing-spell-check-corrections-in-test-panel). The key isn't saved into LUIS but instead is a session variable. You need to set the key every time you want the test panel to apply the Bing Spell Check API v7 service to the utterance. See [instructions](luis-interactive-test.md#view-bing-spell-check-corrections-in-test-panel) in the test panel for setting the key. --> ## Adding the key to the endpoint URL
-The endpoint query needs the key passed in the query string parameters for each query you want to apply spelling correction. You may have a chatbot that calls LUIS or you may call the LUIS endpoint API directly. Regardless of how the endpoint is called, each and every call must include the required information for spelling corrections to work properly.
-
-The endpoint URL has several values that need to be passed correctly. The Bing Spell Check API v7 key is just another one of these. You must set the **spellCheck** parameter to true and you must set the value of **bing-spell-check-subscription-key** to the key value:
+For each query you want spelling correction applied to, the endpoint request needs the Bing Search resource key passed in the request header. You may have a chatbot that calls LUIS or you may call the LUIS endpoint API directly. Regardless of how the endpoint is called, each and every call must include the required information in the request header for spelling corrections to work properly. You must set the value of **mkt-bing-spell-check-key** to the key value.
-`https://{region}.api.cognitive.microsoft.com/luis/v2.0/apps/{appID}?subscription-key={luisKey}&spellCheck=true&bing-spell-check-subscription-key={bingKey}&verbose=true&timezoneOffset=0&q={utterance}`
## Send misspelled utterance to LUIS
-1. In a web browser, copy the preceding string and replace the `region`, `appId`, `luisKey`, and `bingKey` with your own values. Make sure to use the endpoint region, if it is different from your publishing [region](luis-reference-regions.md).
-
-2. Add a misspelled utterance such as "How far is the mountainn?". In English, `mountain`, with one `n`, is the correct spelling.
-
-3. Select enter to send the query to LUIS.
+1. Add a misspelled utterance in the prediction query you will be sending, such as "How far is the mountainn?". In English, `mountain`, with one `n`, is the correct spelling.
-4. LUIS responds with a JSON result for `How far is the mountain?`. If Bing Spell Check API v7 detects a misspelling, the `query` field in the LUIS app's JSON response contains the original query, and the `alteredQuery` field contains the corrected query sent to LUIS.
+2. LUIS responds with a JSON result for `How far is the mountain?`. If Bing Spell Check API v7 detects a misspelling, the `query` field in the LUIS app's JSON response contains the original query, and the `alteredQuery` field contains the corrected query sent to LUIS.
```json {
@@ -83,15 +66,13 @@ The endpoint URL has several values that need to be passed correctly. The Bing S
## Ignore spelling mistakes
-If you don't want to use the Bing Spell Check API v7 service, you need to add the correct and incorrect spelling.
+If you don't want to use the Bing Search service, you need to account for both the correct and incorrect spellings in your LUIS app.
Two solutions are: * Label example utterances that have all the different spellings so that LUIS can learn proper spelling as well as typos. This option requires more labeling effort than using a spell checker. * Create a phrase list with all variations of the word. With this solution, you do not need to label the word variations in the example utterances.
-## Publishing page
-The [publishing](luis-how-to-publish-app.md) page has an **Enable Bing spell checker** checkbox. This is a convenience to create the key and understand how the endpoint URL changes. You still have to use the correct endpoint parameters in order to have spelling corrected for each utterance.
> [!div class="nextstepaction"]
-> [Learn more about example utterances](./luis-how-to-add-entities.md)
\ No newline at end of file
+> [Learn more about example utterances](./luis-how-to-add-entities.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/whats-new.md
@@ -4,7 +4,7 @@ description: This article is regularly updated with news about the Azure Cogniti
ms.service: cognitive-services ms.subservice: language-understanding ms.topic: overview
-ms.date: 01/05/2021
+ms.date: 01/12/2021
--- # What's new in Language Understanding
@@ -13,10 +13,14 @@ Learn what's new in the service. These items include release notes, videos, blog
## Release notes
+### January 2021
+
+* The V3 prediction API now supports the [Bing Spell Check API](luis-how-to-publish-app.md#spelling-correction).
+ ### December 2020 * All LUIS users are required to [migrate to a LUIS authoring resource](luis-migration-authoring.md)
-* New [evaluation endpoints](luis-how-to-batch-test.md#batch-testing-using-the-rest-api) which allow you to submit batch tests usting the REST API, and get accuracy results for your intents and entities. Available starting with the v3.0-preview LUIS Endpoint.
+* New [evaluation endpoints](luis-how-to-batch-test.md#batch-testing-using-the-rest-api) that allow you to submit batch tests using the REST API, and get accuracy results for your intents and entities. Available starting with the v3.0-preview LUIS Endpoint.
### June 2020
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/How-To/improve-knowledge-base https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/improve-knowledge-base.md
@@ -22,7 +22,7 @@ In order to see suggested questions, you must [turn on active learning](../conce
## View suggested questions
-1. In order to see the suggested questions, on the **Edit** knowledge base page, select **View Options**, then select **Show active learning suggestions**.
+1. In order to see the suggested questions, on the **Edit** knowledge base page, select **View Options**, then select **Show active learning suggestions**. This option will be disabled if there are no suggestions present for any of the question and answer pairs.
[![On the Edit section of the portal, select Show Suggestions in order to see the active learning's new question alternatives.](../media/improve-knowledge-base/show-suggestions-button.png)](../media/improve-knowledge-base/show-suggestions-button.png#lightbox)
@@ -340,4 +340,4 @@ For best practices when using active learning, see [Best practices](../Concepts/
## Next steps > [!div class="nextstepaction"]
-> [Use metadata with GenerateAnswer API](metadata-generateanswer-usage.md)
\ No newline at end of file
+> [Use metadata with GenerateAnswer API](metadata-generateanswer-usage.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/limits.md
@@ -94,6 +94,7 @@ These represent the limits for each create knowledge base action; that is, click
* Recommended maximum number of alternate questions per answer: 300 * Maximum number of URLs: 10 * Maximum number of files: 10
+* Maximum number of QnAs permitted per call: 1000
## Update Knowledge base call limits These represent the limits for each update action; that is, clicking *Save and train* or calling the UpdateKnowledgeBase API.
@@ -101,6 +102,7 @@ These represent the limits for each update action; that is, clicking *Save and t
* Recommended maximum number of alternate questions added or deleted: 300 * Maximum number of metadata fields added or deleted: 10 * Maximum number of URLs that can be refreshed: 5
+* Maximum number of QnAs permitted per call: 1000
## Next steps
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-automatic-language-detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-automatic-language-detection.md
@@ -249,11 +249,11 @@ var autoDetectConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fromSourceLangua
::: zone-end ::: zone pivot="programming-language-python"
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_synthesis_sample.py#L434) on GitHub for automatic language detection
+* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py#L458) on GitHub for automatic language detection
::: zone-end ::: zone pivot="programming-language-objectivec"
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L494) on GitHub for automatic language detection
+* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L525) on GitHub for automatic language detection
::: zone-end * [Speech SDK reference documentation](speech-sdk.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-pronunciation-assessment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md
@@ -8,10 +8,10 @@ manager: nitinme
ms.service: cognitive-services ms.subservice: speech-service ms.topic: conceptual
-ms.date: 09/29/2020
+ms.date: 01/12/2021
ms.author: yulili ms.custom: references_regions
-zone_pivot_groups: programming-languages-set-nineteen
+zone_pivot_groups: programming-languages-speech-services-nomore-variant
--- # Pronunciation assessment
@@ -123,6 +123,26 @@ pronunciation_score = pronunciation_assessment_result.pronunciation_score
::: zone-end
+::: zone pivot="programming-language-javascript"
+
+```javascript
+var pronunciationAssessmentConfig = new SpeechSDK.PronunciationAssessmentConfig("reference text",
+    SpeechSDK.PronunciationAssessmentGradingSystem.HundredMark,
+    SpeechSDK.PronunciationAssessmentGranularity.Word, true);
+var speechRecognizer = new SpeechSDK.SpeechRecognizer(speechConfig, audioConfig);
+// apply the pronunciation assessment configuration to the speech recognizer
+pronunciationAssessmentConfig.applyTo(speechRecognizer);
+
+speechRecognizer.recognizeOnceAsync(function (result) {
+    var pronunciationAssessmentResult = SpeechSDK.PronunciationAssessmentResult.fromResult(result);
+    var pronunciationScore = pronunciationAssessmentResult.pronunciationScore;
+    var wordLevelResult = pronunciationAssessmentResult.detailResult.Words;
+},
+function (error) {});
+```
+
+::: zone-end
+ ::: zone pivot="programming-language-objectivec" ```Objective-C
@@ -171,26 +191,26 @@ This table lists the result parameters of pronunciation assessment.
## Next steps
-<!-- TODO: update the sample links after release -->
+<!-- TODO: update JavaScript sample links after release -->
-<!-- ::: zone pivot="programming-language-csharp"
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#L741) on GitHub for automatic language detection
+::: zone pivot="programming-language-csharp"
+* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#L949) on GitHub for pronunciation assessment.
::: zone-end ::: zone pivot="programming-language-cpp"
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp#L507) on GitHub for automatic language detection
+* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp#L633) on GitHub for pronunciation assessment.
::: zone-end ::: zone pivot="programming-language-java"
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/jre/console/src/com/microsoft/cognitiveservices/speech/samples/console/SpeechRecognitionSamples.java#L521) on GitHub for automatic language detection
+* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/jre/console/src/com/microsoft/cognitiveservices/speech/samples/console/SpeechRecognitionSamples.java#L697) on GitHub for pronunciation assessment.
::: zone-end ::: zone pivot="programming-language-python"
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_synthesis_sample.py#L434) on GitHub for automatic language detection
+* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py#L576) on GitHub for pronunciation assessment.
::: zone-end ::: zone pivot="programming-language-objectivec"
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L494) on GitHub for automatic language detection
-::: zone-end -->
+* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L642) on GitHub for pronunciation assessment.
+::: zone-end
* [Speech SDK reference documentation](speech-sdk.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/language-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
@@ -31,94 +31,94 @@ To get pronunciation bits:
https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronunciation Datasets" -> Click on Import -> Locale: the list of locales there correspond to the supported locales -->
-| Language | Locale (BCP-47) | Customizations |
-|------------------------------------|--------|--------------------------------------------------|
-| Arabic (Bahrain), modern standard | `ar-BH` | Language model |
-| Arabic (Egypt) | `ar-EG` | Language model |
-| Arabic (Iraq) | `ar-IQ` | Language model |
-| Arabic (Israel) | `ar-IL` | Language model |
-| Arabic (Jordan) | `ar-JO` | Language model |
-| Arabic (Kuwait) | `ar-KW` | Language model |
-| Arabic (Lebanon) | `ar-LB` | Language model |
-| Arabic (Oman) | `ar-OM` | Language model |
-| Arabic (Qatar) | `ar-QA` | Language model |
-| Arabic (Saudi Arabia) | `ar-SA` | Language model |
-| Arabic (State of Palestine) | `ar-PS` | Language model |
-| Arabic (Syria) | `ar-SY` | Language model |
-| Arabic (United Arab Emirates) | `ar-AE` | Language model |
-| Bulgarian (Bulgaria) | `bg-BG` | Language model |
-| Catalan (Spain) | `ca-ES` | Language model |
-| Chinese (Cantonese, Traditional) | `zh-HK` | Acoustic model<br>Language model |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Acoustic model<br>Language model |
-| Chinese (Taiwanese Mandarin) | `zh-TW` | Acoustic model<br>Language model |
-| Croatian (Croatia) | `hr-HR` | Language model |
-| Czech (Czech Republic) | `cs-CZ` | Language Model |
-| Danish (Denmark) | `da-DK` | Language model |
-| Dutch (Netherlands) | `nl-NL` | Language model |
-| English (Australia) | `en-AU` | Acoustic model<br>Language model |
-| English (Canada) | `en-CA` | Acoustic model<br>Language model |
-| English (Hong Kong) | `en-HK` | Language Model |
-| English (India) | `en-IN` | Acoustic model<br>Language model |
-| English (Ireland) | `en-IE` | Language Model |
-| English (New Zealand) | `en-NZ` | Acoustic model<br>Language model |
-| English (Nigeria) | `en-NG` | Language Model |
-| English (Philippines) | `en-PH` | Language Model |
-| English (Singapore) | `en-SG` | Language Model |
-| English (South Africa) | `en-ZA` | Language Model |
-| English (United Kingdom) | `en-GB` | Acoustic model<br>Language model<br>Pronunciation|
-| English (United States) | `en-US` | Acoustic model<br>Language model<br>Pronunciation|
-| Estonian(Estonia) | `et-EE` | Language Model |
-| Finnish (Finland) | `fi-FI` | Language model |
-| French (Canada) | `fr-CA` | Acoustic model<br>Language model |
-| French (France) | `fr-FR` | Acoustic model<br>Language model<br>Pronunciation|
-| German (Germany) | `de-DE` | Acoustic model<br>Language model<br>Pronunciation|
-| Greek (Greece) | `el-GR` | Language model |
-| Gujarati (Indian) | `gu-IN` | Language model |
-| Hindi (India) | `hi-IN` | Acoustic model<br>Language model |
-| Hungarian (Hungary) | `hu-HU` | Language Model |
-| Irish(Ireland) | `ga-IE` | Language model |
-| Italian (Italy) | `it-IT` | Acoustic model<br>Language model<br>Pronunciation|
-| Japanese (Japan) | `ja-JP` | Acoustic model<br>Language model |
-| Korean (Korea) | `ko-KR` | Acoustic model<br>Language model |
-| Latvian (Latvia) | `lv-LV` | Language model |
-| Lithuanian (Lithuania) | `lt-LT` | Language model |
-| Maltese(Malta) | `mt-MT` | Language model |
-| Marathi (India) | `mr-IN` | Language model |
-| Norwegian (Bokmål, Norway) | `nb-NO` | Language model |
-| Polish (Poland) | `pl-PL` | Language model |
-| Portuguese (Brazil) | `pt-BR` | Acoustic model<br>Language model<br>Pronunciation|
-| Portuguese (Portugal) | `pt-PT` | Language model |
-| Romanian (Romania) | `ro-RO` | Language model |
-| Russian (Russia) | `ru-RU` | Acoustic model<br>Language model |
-| Slovak (Slovakia) | `sk-SK` | Language model |
-| Slovenian (Slovenia) | `sl-SI` | Language model |
-| Spanish (Argentina) | `es-AR` | Language Model |
-| Spanish (Bolivia) | `es-BO` | Language Model |
-| Spanish (Chile) | `es-CL` | Language Model |
-| Spanish (Colombia) | `es-CO` | Language Model |
-| Spanish (Costa Rica) | `es-CR` | Language Model |
-| Spanish (Cuba) | `es-CU` | Language Model |
-| Spanish (Dominican Republic) | `es-DO` | Language Model |
-| Spanish (Ecuador) | `es-EC` | Language Model |
-| Spanish (El Salvador) | `es-SV` | Language Model |
-| Spanish (Equatorial Guinea) | `es-GQ` | Language Model |
-| Spanish (Guatemala) | `es-GT` | Language Model |
-| Spanish (Honduras) | `es-HN` | Language Model |
-| Spanish (Mexico) | `es-MX` | Acoustic model<br>Language model |
-| Spanish (Nicaragua) | `es-NI` | Language Model |
-| Spanish (Panama) | `es-PA` | Language Model |
-| Spanish (Paraguay) | `es-PY` | Language Model |
-| Spanish (Peru) | `es-PE` | Language Model |
-| Spanish (Puerto Rico) | `es-PR` | Language Model |
-| Spanish (Spain) | `es-ES` | Acoustic model<br>Language model |
-| Spanish (Uruguay) | `es-UY` | Language Model |
-| Spanish (USA) | `es-US` | Language Model |
-| Spanish (Venezuela) | `es-VE` | Language Model |
-| Swedish (Sweden) | `sv-SE` | Language model |
-| Tamil (India) | `ta-IN` | Language model |
-| Telugu (India) | `te-IN` | Language model |
-| Thai (Thailand) | `th-TH` | Language model |
-| Turkish (Turkey) | `tr-TR` | Language model |
+| Language | Locale (BCP-47) | Customizations | [Automatic language detection?](how-to-automatic-language-detection.md) |
+|------------------------------------|--------|---------------------------------------------------|-------------------------------|
+| Arabic (Bahrain), modern standard | `ar-BH` | Language model | Yes |
+| Arabic (Egypt) | `ar-EG` | Language model | Yes |
+| Arabic (Iraq) | `ar-IQ` | Language model | |
+| Arabic (Israel) | `ar-IL` | Language model | |
+| Arabic (Jordan) | `ar-JO` | Language model | |
+| Arabic (Kuwait) | `ar-KW` | Language model | |
+| Arabic (Lebanon) | `ar-LB` | Language model | |
+| Arabic (Oman) | `ar-OM` | Language model | |
+| Arabic (Qatar) | `ar-QA` | Language model | |
+| Arabic (Saudi Arabia) | `ar-SA` | Language model | Yes |
+| Arabic (State of Palestine) | `ar-PS` | Language model | |
+| Arabic (Syria) | `ar-SY` | Language model | Yes |
+| Arabic (United Arab Emirates) | `ar-AE` | Language model | |
+| Bulgarian (Bulgaria) | `bg-BG` | Language model | |
+| Catalan (Spain) | `ca-ES` | Language model | Yes |
+| Chinese (Cantonese, Traditional) | `zh-HK` | Acoustic model<br>Language model | Yes |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Acoustic model<br>Language model | Yes |
+| Chinese (Taiwanese Mandarin) | `zh-TW` | Acoustic model<br>Language model | Yes |
+| Croatian (Croatia) | `hr-HR` | Language model | |
+| Czech (Czech Republic) | `cs-CZ` | Language Model | |
+| Danish (Denmark) | `da-DK` | Language model | Yes |
+| Dutch (Netherlands) | `nl-NL` | Language model | Yes |
+| English (Australia) | `en-AU` | Acoustic model<br>Language model | Yes |
+| English (Canada) | `en-CA` | Acoustic model<br>Language model | Yes |
+| English (Hong Kong) | `en-HK` | Language Model | |
+| English (India) | `en-IN` | Acoustic model<br>Language model | Yes |
+| English (Ireland) | `en-IE` | Language Model | |
+| English (New Zealand) | `en-NZ` | Acoustic model<br>Language model | Yes |
+| English (Nigeria) | `en-NG` | Language Model | |
+| English (Philippines) | `en-PH` | Language Model | |
+| English (Singapore) | `en-SG` | Language Model | |
+| English (South Africa) | `en-ZA` | Language Model | |
+| English (United Kingdom) | `en-GB` | Acoustic model<br>Language model<br>Pronunciation| Yes |
+| English (United States) | `en-US` | Acoustic model<br>Language model<br>Pronunciation| Yes |
+| Estonian (Estonia)                 | `et-EE` | Language Model                                    | |
+| Finnish (Finland) | `fi-FI` | Language model | Yes |
+| French (Canada) | `fr-CA` | Acoustic model<br>Language model | Yes |
+| French (France) | `fr-FR` | Acoustic model<br>Language model<br>Pronunciation| Yes |
+| German (Germany) | `de-DE` | Acoustic model<br>Language model<br>Pronunciation| Yes |
+| Greek (Greece) | `el-GR` | Language model | |
+| Gujarati (India)                   | `gu-IN` | Language model                                    | |
+| Hindi (India) | `hi-IN` | Acoustic model<br>Language model | Yes |
+| Hungarian (Hungary) | `hu-HU` | Language Model | |
+| Irish (Ireland)                    | `ga-IE` | Language model                                    | |
+| Italian (Italy) | `it-IT` | Acoustic model<br>Language model<br>Pronunciation| Yes |
+| Japanese (Japan) | `ja-JP` | Acoustic model<br>Language model | Yes |
+| Korean (Korea) | `ko-KR` | Acoustic model<br>Language model | Yes |
+| Latvian (Latvia) | `lv-LV` | Language model | |
+| Lithuanian (Lithuania) | `lt-LT` | Language model | |
+| Maltese (Malta)                    | `mt-MT` | Language model                                    | |
+| Marathi (India) | `mr-IN` | Language model | |
+| Norwegian (Bokmål, Norway) | `nb-NO` | Language model | Yes |
+| Polish (Poland) | `pl-PL` | Language model | Yes |
+| Portuguese (Brazil) | `pt-BR` | Acoustic model<br>Language model<br>Pronunciation| Yes |
+| Portuguese (Portugal) | `pt-PT` | Language model | Yes |
+| Romanian (Romania) | `ro-RO` | Language model | |
+| Russian (Russia) | `ru-RU` | Acoustic model<br>Language model | Yes |
+| Slovak (Slovakia) | `sk-SK` | Language model | |
+| Slovenian (Slovenia) | `sl-SI` | Language model | |
+| Spanish (Argentina) | `es-AR` | Language Model | |
+| Spanish (Bolivia) | `es-BO` | Language Model | |
+| Spanish (Chile) | `es-CL` | Language Model | |
+| Spanish (Colombia) | `es-CO` | Language Model | |
+| Spanish (Costa Rica) | `es-CR` | Language Model | |
+| Spanish (Cuba) | `es-CU` | Language Model | |
+| Spanish (Dominican Republic) | `es-DO` | Language Model | |
+| Spanish (Ecuador) | `es-EC` | Language Model | |
+| Spanish (El Salvador) | `es-SV` | Language Model | |
+| Spanish (Equatorial Guinea) | `es-GQ` | Language Model | |
+| Spanish (Guatemala) | `es-GT` | Language Model | |
+| Spanish (Honduras) | `es-HN` | Language Model | |
+| Spanish (Mexico) | `es-MX` | Acoustic model<br>Language model | Yes |
+| Spanish (Nicaragua) | `es-NI` | Language Model | |
+| Spanish (Panama) | `es-PA` | Language Model | |
+| Spanish (Paraguay) | `es-PY` | Language Model | |
+| Spanish (Peru) | `es-PE` | Language Model | |
+| Spanish (Puerto Rico) | `es-PR` | Language Model | |
+| Spanish (Spain) | `es-ES` | Acoustic model<br>Language model | Yes |
+| Spanish (Uruguay) | `es-UY` | Language Model | |
+| Spanish (USA) | `es-US` | Language Model | |
+| Spanish (Venezuela) | `es-VE` | Language Model | |
+| Swedish (Sweden) | `sv-SE` | Language model | Yes |
+| Tamil (India) | `ta-IN` | Language model | |
+| Telugu (India) | `te-IN` | Language model | |
+| Thai (Thailand) | `th-TH` | Language model | Yes |
+| Turkish (Turkey) | `tr-TR` | Language model | |
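The new column in the table above marks which locales can be offered as candidates for automatic language detection. As a rough illustration of how you might use that information (the locale excerpt below is abbreviated from the table, and the `AutoDetectSourceLanguageConfig` name mentioned in the comment is the Speech SDK for Python type described in the automatic language detection article — treat both as assumptions, not a definitive implementation):

```python
# Illustrative excerpt of the table above: locale -> supports automatic
# language detection ("Yes" column).
AUTO_DETECT_SUPPORT = {
    "en-US": True,
    "de-DE": True,
    "fr-FR": True,
    "ar-IQ": False,
    "hr-HR": False,
}

def auto_detect_candidates(locales):
    """Keep only the locales the table marks as supporting automatic
    language detection."""
    return [loc for loc in locales if AUTO_DETECT_SUPPORT.get(loc, False)]

# The surviving candidates could then be passed to the Speech SDK's
# AutoDetectSourceLanguageConfig when building a recognizer.
print(auto_detect_candidates(["en-US", "ar-IQ", "de-DE"]))  # ['en-US', 'de-DE']
```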
## Text-to-speech
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/speech-services-private-link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-services-private-link.md
@@ -1,7 +1,7 @@
---
-title: How to use private endpoints with Speech service
+title: How to use private endpoints with Speech Services
titleSuffix: Azure Cognitive Services
-description: Learn how to use Speech service with private endpoints provided by Azure Private Link
+description: Learn how to use Speech Services with private endpoints provided by Azure Private Link
services: cognitive-services
author: alexeyo26
manager: nitinme
@@ -12,93 +12,84 @@ ms.date: 12/15/2020
ms.author: alexeyo
---
-# Use Speech service through a private endpoint
+# Use Speech Services through a private endpoint
-[Azure Private Link](../../private-link/private-link-overview.md) lets you connect to services in Azure using a [private endpoint](../../private-link/private-endpoint-overview.md).
-A private endpoint is a private IP address only accessible within a specific [virtual network](../../virtual-network/virtual-networks-overview.md) and subnet.
+[Azure Private Link](../../private-link/private-link-overview.md) lets you connect to services in Azure by using a [private endpoint](../../private-link/private-endpoint-overview.md). A private endpoint is a private IP address that's accessible only within a specific [virtual network](../../virtual-network/virtual-networks-overview.md) and subnet.
-This article explains how to set up and use Private Link and private endpoints with Azure Cognitive Speech Services.
+This article explains how to set up and use Private Link and private endpoints with Speech Services in Azure Cognitive Services.
> [!NOTE]
-> This article explains the specifics of setting up and using Private Link with Azure Cognitive Speech Services.
-> Before proceeding, review how to [use virtual networks with Cognitive Services](../cognitive-services-virtual-networks.md).
+> Before you proceed, review [how to use virtual networks with Cognitive Services](../cognitive-services-virtual-networks.md).
-Perform the following tasks to use a Speech service through a private endpoint:
-
-1. [Create Speech resource custom domain name](#create-a-custom-domain-name)
-2. [Create and configure private endpoint(s)](#enable-private-endpoints)
-3. [Adjust existing applications and solutions](#use-speech-resource-with-custom-domain-name-and-private-endpoint-enabled)
-
-To remove private endpoints later, but still use the Speech resource, you will perform the tasks found in [this section](#use-speech-resource-with-custom-domain-name-without-private-endpoints).
+This article also describes [how to remove private endpoints later, but still use the Speech resource](#use-a-speech-resource-with-a-custom-domain-name-and-without-private-endpoints).
## Create a custom domain name
-Private endpoints require a [Cognitive Services custom subdomain name](../cognitive-services-custom-subdomains.md). Follow the instructions below to create one for your Speech resource.
+Private endpoints require a [custom subdomain name for Cognitive Services](../cognitive-services-custom-subdomains.md). Use the following instructions to create one for your Speech resource.
> [!WARNING]
-> A Speech resource with custom domain name enabled uses a different way to interact with the Speech service.
-> You probably must adjust your application code for both [private endpoint enabled](#use-speech-resource-with-custom-domain-name-and-private-endpoint-enabled) and [**not** private endpoint enabled](#use-speech-resource-with-custom-domain-name-without-private-endpoints) scenarios.
+> A Speech resource with a custom domain name enabled uses a different way to interact with Speech Services. You might have to adjust your application code for both of these scenarios: [private endpoint enabled](#use-a-speech-resource-with-a-custom-domain-name-and-a-private-endpoint-enabled) and [*not* private endpoint enabled](#use-a-speech-resource-with-a-custom-domain-name-and-without-private-endpoints).
>
-> When you enable a custom domain name, the operation is [**not reversible**](../cognitive-services-custom-subdomains.md#can-i-change-a-custom-domain-name). The only way to go back to the [regional name](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) is to create a new Speech resource.
+> When you enable a custom domain name, the operation is [not reversible](../cognitive-services-custom-subdomains.md#can-i-change-a-custom-domain-name). The only way to go back to the [regional name](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) is to create a new Speech resource.
>
-> If your Speech resource has a lot of associated custom models and projects created via [Speech Studio](https://speech.microsoft.com/) we **strongly** recommend trying the configuration with a test resource before modifying the resource used in production.
+> If your Speech resource has a lot of associated custom models and projects created via [Speech Studio](https://speech.microsoft.com/), we strongly recommend trying the configuration with a test resource before you modify the resource used in production.
# [Azure portal](#tab/portal)
-To create a custom domain name using Azure portal, follow these steps:
+To create a custom domain name by using the Azure portal, follow these steps:
-1. Go to [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
-1. Select the required Speech Resource.
-1. In the **Resource Management** group in the left navigation pane, click **Networking**.
-1. In **Firewalls and virtual networks** tab, click **Generate Custom Domain Name**. A new right panel appears with instructions to create a unique custom subdomain for your resource.
-1. In the Generate Custom Domain Name panel, enter a custom domain name portion. Your full custom domain will look like:
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select the required Speech resource.
+1. In the **Resource Management** group on the left pane, select **Networking**.
+1. On the **Firewalls and virtual networks** tab, select **Generate Custom Domain Name**. A new right panel appears with instructions to create a unique custom subdomain for your resource.
+1. In the **Generate Custom Domain Name** panel, enter a custom domain name. Your full custom domain will look like:
`https://{your custom name}.cognitiveservices.azure.com`.
- **After you create a custom domain name, it _cannot_ be changed! Re-read the warning alert above.** After you've entered your custom domain name, click **Save**.
-1. After the operation completes, in the **Resource management** group, click **Keys and Endpoint**. Confirm the new endpoint name of your resource starts this way:
-
- `https://{your custom name}.cognitiveservices.azure.com`
+
+ Remember that after you create a custom domain name, it _cannot_ be changed.
+
+ After you've entered your custom domain name, select **Save**.
+1. After the operation finishes, in the **Resource management** group, select **Keys and Endpoint**. Confirm that the new endpoint name of your resource starts this way: `https://{your custom name}.cognitiveservices.azure.com`.
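The step above can be sanity-checked in code. The following sketch (illustrative only; the endpoint strings are examples, using the `my-private-link-speech` name from later in this article) verifies that an endpoint uses the custom-domain format rather than a regional endpoint:

```python
def uses_custom_domain(endpoint: str, custom_name: str) -> bool:
    """Return True if the endpoint string starts with the custom-domain
    format https://{custom name}.cognitiveservices.azure.com."""
    return endpoint.startswith(f"https://{custom_name}.cognitiveservices.azure.com")

# Custom-domain endpoint, as shown on the Keys and Endpoint page:
print(uses_custom_domain(
    "https://my-private-link-speech.cognitiveservices.azure.com/",
    "my-private-link-speech"))  # True

# A regional-style endpoint (example form, for contrast):
print(uses_custom_domain(
    "https://westeurope.api.cognitive.microsoft.com/",
    "my-private-link-speech"))  # False
```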
# [PowerShell](#tab/powershell)
-To create a custom domain name using PowerShell, confirm that your computer has PowerShell version 7.x or later with the Azure PowerShell module version 5.1.0 or later. to see the versions of these tools, follow these steps:
+To create a custom domain name by using PowerShell, confirm that your computer has PowerShell version 7.x or later with the Azure PowerShell module version 5.1.0 or later. To see the versions of these tools, follow these steps:
-1. In a PowerShell window, type:
+1. In a PowerShell window, enter:
`$PSVersionTable`
- Confirm the PSVersion value is greater than 7.x. To upgrade PowerShell, follow instructions at [Installing various versions of PowerShell](/powershell/scripting/install/installing-powershell) to upgrade.
+ Confirm that the `PSVersion` value is 7.x or later. To upgrade PowerShell, follow the instructions at [Installing various versions of PowerShell](/powershell/scripting/install/installing-powershell).
-1. In a PowerShell window, type:
+1. In a PowerShell window, enter:
`Get-Module -ListAvailable Az`
- If nothing appears, or if Azure PowerShell module version is lower than 5.1.0,
- follow instructions at [Install Azure PowerShell module](/powershell/azure/install-Az-ps) to upgrade.
+ If nothing appears, or if that version of the Azure PowerShell module is earlier than 5.1.0, follow the instructions at [Install the Azure PowerShell module](/powershell/azure/install-Az-ps) to upgrade.
-Before proceeding, run `Connect-AzAccount` to create a connection with Azure.
+Before you proceed, run `Connect-AzAccount` to create a connection with Azure.
-## Verify custom domain name is available
+## Verify that a custom domain name is available
-Check whether the custom domain you would like to use is available.
-Follow these steps to confirm the domain is available using the [Check Domain Availability](/rest/api/cognitiveservices/accountmanagement/checkdomainavailability/checkdomainavailability) operation in the Cognitive Services REST API.
+Check whether the custom domain that you want to use is available.
+The following code confirms that the domain is available by using the [Check Domain Availability](/rest/api/cognitiveservices/accountmanagement/checkdomainavailability/checkdomainavailability) operation in the Cognitive Services REST API.
> [!TIP]
-> The code below will **NOT** work in Azure Cloud Shell.
+> The following code will *not* work in Azure Cloud Shell.
```azurepowershell
$subId = "Your Azure subscription Id"
$subdomainName = "custom domain name"
-# Select the Azure subscription that contains Speech resource.
+# Select the Azure subscription that contains the Speech resource.
# You can skip this step if your Azure account has only one active subscription.
Set-AzContext -SubscriptionId $subId
-# Prepare OAuth token to use in request to Cognitive Services REST API.
+# Prepare the OAuth token to use in the request to the Cognitive Services REST API.
$Context = Get-AzContext
$AccessToken = (Get-AzAccessToken -TenantId $Context.Tenant.Id).Token
$token = ConvertTo-SecureString -String $AccessToken -AsPlainText -Force
-# Prepare and send the request to Cognitive Services REST API.
+# Prepare and send the request to the Cognitive Services REST API.
$uri = "https://management.azure.com/subscriptions/" + $subId + `
"/providers/Microsoft.CognitiveServices/checkDomainAvailability?api-version=2017-04-18"
$body = @{
@@ -109,14 +100,14 @@ $jsonBody = $body | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri $uri -ContentType "application/json" -Authentication Bearer `
-Token $token -Body $jsonBody | Format-List
```
-If the desired name is available, you will see a response like this:
+If the desired name is available, you'll see a response like this:
```azurepowershell
isSubdomainAvailable : True
reason :
type :
subdomainName : my-custom-name
```
-If the name is already taken, then you will see the following response:
+If the name is already taken, then you'll see the following response:
```azurepowershell
isSubdomainAvailable : False
reason : Sub domain name 'my-custom-name' is already used. Please pick a different name.
@@ -125,18 +116,17 @@ subdomainName : my-custom-name
```

## Create your custom domain name
-To enable custom domain name for the selected Speech Resource, we use [Set-AzCognitiveServicesAccount](/powershell/module/az.cognitiveservices/set-azcognitiveservicesaccount) cmdlet.
+To enable a custom domain name for the selected Speech resource, use the [Set-AzCognitiveServicesAccount](/powershell/module/az.cognitiveservices/set-azcognitiveservicesaccount) cmdlet.
> [!WARNING]
-> After the code below runs successfully, you will create a custom domain name for your Speech resource.
-> This name **cannot** be changed. See more information in the **Warning** alert above.
+> After the following code runs successfully, you'll create a custom domain name for your Speech resource. Remember that this name *cannot* be changed.
```azurepowershell
$resourceGroup = "Resource group name where Speech resource is located"
$speechResourceName = "Your Speech resource name"
$subdomainName = "custom domain name"
-# Select the Azure subscription that contains Speech resource.
+# Select the Azure subscription that contains the Speech resource.
# You can skip this step if your Azure account has only one active subscription.
$subId = "Your Azure subscription Id"
Set-AzContext -SubscriptionId $subId
@@ -151,13 +141,13 @@ Set-AzCognitiveServicesAccount -ResourceGroupName $resourceGroup `
[!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)]
-- This section requires the latest version of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+This section requires the latest version of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed.
-## Verify the custom domain name is available
+## Verify that the custom domain name is available
-Check whether the custom domain you would like to use is free. We will use [Check Domain Availability](/rest/api/cognitiveservices/accountmanagement/checkdomainavailability/checkdomainavailability) method from Cognitive Services REST API.
+Check whether the custom domain that you want to use is free. Use the [Check Domain Availability](/rest/api/cognitiveservices/accountmanagement/checkdomainavailability/checkdomainavailability) method from the Cognitive Services REST API.
-Copy the code block below, insert your preferred custom domain name, and save to the file `subdomain.json`.
+Copy the following code block, insert your preferred custom domain name, and save to the file `subdomain.json`.
```json
{
@@ -166,12 +156,12 @@ Copy the code block below, insert your preferred custom domain name, and save to
}
```
-Copy the file to your current folder or upload it to Azure Cloud Shell and run the following command. (Replace `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` with your Azure subscription ID).
+Copy the file to your current folder or upload it to Azure Cloud Shell and run the following command. Replace `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` with your Azure subscription ID.
```azurecli-interactive
az rest --method post --url "https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/providers/Microsoft.CognitiveServices/checkDomainAvailability?api-version=2017-04-18" --body @subdomain.json
```
-If the desired name is available, you will see a response like this:
+If the desired name is available, you'll see a response like this:
```azurecli
{
"isSubdomainAvailable": true,
@@ -181,7 +171,7 @@ If the desired name is available, you will see a response like this:
}
```
-If the name is already taken, then you will see the following response:
+If the name is already taken, then you'll see the following response:
```azurecli
{
"isSubdomainAvailable": false,
@@ -190,18 +180,18 @@ If the name is already taken, then you will see the following response:
"type": null } ```
-## Enable custom domain name
+## Enable a custom domain name
-To enable custom domain name for the selected Speech Resource, we use [az cognitiveservices account update](/cli/azure/cognitiveservices/account#az_cognitiveservices_account_update) command.
+To enable a custom domain name for the selected Speech resource, use the [az cognitiveservices account update](/cli/azure/cognitiveservices/account#az_cognitiveservices_account_update) command.
-Select the Azure subscription containing Speech resource. If your Azure account has only one active subscription, you can skip this step. (Replace `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` with your Azure subscription ID).
+Select the Azure subscription that contains the Speech resource. If your Azure account has only one active subscription, you can skip this step. Replace `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` with your Azure subscription ID.
```azurecli-interactive
az account set --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```
-Set the custom domain name to the selected resource. Replace the sample parameter values with the actual ones and run the command below.
+Set the custom domain name to the selected resource. Replace the sample parameter values with the actual ones and run the following command.
> [!WARNING]
-> After successful execution of the command below you will create a custom domain name for your Speech resource. This name **cannot** be changed. See more information in the caution alert above.
+> After successful execution of the following command, you'll create a custom domain name for your Speech resource. Remember that this name *cannot* be changed.
```azurecli
az cognitiveservices account update --name my-speech-resource-name --resource-group my-resource-group-name --custom-domain my-custom-name
@@ -211,9 +201,17 @@ az cognitiveservices account update --name my-speech-resource-name --resource-gr
## Enable private endpoints
-We recommend using the [private DNS zone](../../dns/private-dns-overview.md) attached to the virtual network with the necessary updates for the private endpoints, which we create by default during the provisioning process. However, if you are using your own DNS server, you may also need to change your DNS configuration, as shown in _DNS for private endpoints_, below. Decide on DNS strategy **before** provisioning private endpoint(s) for a production Speech resource, and test your DNS changes, especially if you use your own DNS server.
+We recommend using the [private DNS zone](../../dns/private-dns-overview.md) attached to the virtual network with the necessary updates for the private endpoints. You create a private DNS zone by default during the provisioning process. If you're using your own DNS server, you might also need to change your DNS configuration.
+
+Decide on a DNS strategy *before* you provision private endpoints for a production Speech resource. And test your DNS changes, especially if you use your own DNS server.
+
+Use one of the following articles to create private endpoints. These articles use a web app as a sample resource to enable with private endpoints.
+
+- [Create a private endpoint by using the Azure portal](../../private-link/create-private-endpoint-portal.md)
+- [Create a private endpoint by using Azure PowerShell](../../private-link/create-private-endpoint-powershell.md)
+- [Create a private endpoint by using Azure CLI](../../private-link/create-private-endpoint-cli.md)
-Use one of the following articles to create private endpoint(s). The articles use a Web app as a sample resource to enable with private endpoints. You will use these parameters instead of those in the article:
+Use these parameters instead of the parameters in the article that you chose:
| Setting             | Value                                    |
|---------------------|------------------------------------------|
@@ -221,199 +219,197 @@ Use one of the following articles to create private endpoint(s). The articles us
| Resource            | **\<your-speech-resource-name>**         |
| Target sub-resource | **account**                              |
-- [Create a Private Endpoint using the Azure portal](../../private-link/create-private-endpoint-portal.md)
-- [Create a Private Endpoint using Azure PowerShell](../../private-link/create-private-endpoint-powershell.md)
-- [Create a Private Endpoint using Azure CLI](../../private-link/create-private-endpoint-cli.md)
-
-**DNS for private endpoints:** Review the general principles of [DNS for private endpoints in Cognitive Services resources](../cognitive-services-virtual-networks.md#dns-changes-for-private-endpoints). Then confirm that your DNS configuration is working correctly by performing these checks:
+**DNS for private endpoints:** Review the general principles of [DNS for private endpoints in Cognitive Services resources](../cognitive-services-virtual-networks.md#dns-changes-for-private-endpoints). Then confirm that your DNS configuration is working correctly by performing the checks described in the following sections.
### Resolve DNS from the virtual network
-This check is **required**.
+This check is *required*.
-Follow these steps to test the custom DNS entry from your virtual network.
+Follow these steps to test the custom DNS entry from your virtual network:
-1. Log in to a virtual machine located in the virtual network to which you have attached your private endpoint.
-1. Open Windows Command Prompt or Bash shell, run `nslookup` and confirm it successfully resolves your resource custom domain name.
+1. Log in to a virtual machine located in the virtual network to which you've attached your private endpoint.
+1. Open a Windows command prompt or a Bash shell, run `nslookup`, and confirm that it successfully resolves your resource's custom domain name.
-```dos
-C:\>nslookup my-private-link-speech.cognitiveservices.azure.com
-Server: UnKnown
-Address: 168.63.129.16
+ ```dos
+ C:\>nslookup my-private-link-speech.cognitiveservices.azure.com
+ Server: UnKnown
+ Address: 168.63.129.16
-Non-authoritative answer:
-Name: my-private-link-speech.privatelink.cognitiveservices.azure.com
-Address: 172.28.0.10
-Aliases: my-private-link-speech.cognitiveservices.azure.com
-```
+ Non-authoritative answer:
+ Name: my-private-link-speech.privatelink.cognitiveservices.azure.com
+ Address: 172.28.0.10
+ Aliases: my-private-link-speech.cognitiveservices.azure.com
+ ```
-3. Confirm that the IP address matches the IP address of your private endpoint.
+1. Confirm that the IP address matches the IP address of your private endpoint.
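If you want to automate this check, the confirmation step can be scripted. The following is a minimal sketch (not part of the article's guidance); `parse_nslookup` is a hypothetical helper of my own, and the sample output is the one shown above:

```python
def parse_nslookup(output: str) -> dict:
    """Extract name, address, and aliases from the answer section of nslookup output."""
    result = {"name": None, "address": None, "aliases": []}
    in_answer = False
    for raw in output.splitlines():
        line = raw.strip()
        if line.lower().startswith("non-authoritative answer"):
            in_answer = True
            continue
        if not in_answer or not line:
            continue
        if ":" in line:
            key, value = (part.strip() for part in line.split(":", 1))
            if key.lower() == "name":
                result["name"] = value
            elif key.lower() == "address":
                result["address"] = value
            elif key.lower() == "aliases":
                result["aliases"].append(value)
        else:
            # Indented continuation lines list additional aliases.
            result["aliases"].append(line)
    return result

# Sample nslookup output from the check above.
sample = """\
Server:  UnKnown
Address:  168.63.129.16

Non-authoritative answer:
Name:    my-private-link-speech.privatelink.cognitiveservices.azure.com
Address:  172.28.0.10
Aliases:  my-private-link-speech.cognitiveservices.azure.com
"""
answer = parse_nslookup(sample)
print(answer["address"])  # 172.28.0.10
```

You can then compare `answer["address"]` against the IP address of your private endpoint.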
### Resolve DNS from other networks
-Only perform this check if you plan to use your private endpoint enabled Speech resource in "hybrid" mode, where you have enabled either **All networks** or **Selected Networks and Private Endpoints** access option in the **Networking** section of your resource. If you plan to access the resource using only a private endpoint, you can skip this section.
+Perform this check only if you've enabled either the **All networks** option or the **Selected Networks and Private Endpoints** access option in the **Networking** section of your resource.
-1. Log in to a computer attached to a network allowed to access the resource.
-2. Open Windows Command Prompt or Bash shell, run `nslookup` and confirm it successfully resolves your resource custom domain name.
+If you plan to access the resource by using only a private endpoint, you can skip this section.
-```dos
-C:\>nslookup my-private-link-speech.cognitiveservices.azure.com
-Server: UnKnown
-Address: fe80::1
+1. Log in to a computer attached to a network that's allowed to access the resource.
+2. Open a Windows command prompt or Bash shell, run `nslookup`, and confirm that it successfully resolves your resource's custom domain name.
-Non-authoritative answer:
-Name: vnetproxyv1-weu-prod.westeurope.cloudapp.azure.com
-Address: 13.69.67.71
-Aliases: my-private-link-speech.cognitiveservices.azure.com
- my-private-link-speech.privatelink.cognitiveservices.azure.com
- westeurope.prod.vnet.cog.trafficmanager.net
-```
+ ```dos
+ C:\>nslookup my-private-link-speech.cognitiveservices.azure.com
+ Server: UnKnown
+ Address: fe80::1
+
+ Non-authoritative answer:
+ Name: vnetproxyv1-weu-prod.westeurope.cloudapp.azure.com
+ Address: 13.69.67.71
+ Aliases: my-private-link-speech.cognitiveservices.azure.com
+ my-private-link-speech.privatelink.cognitiveservices.azure.com
+ westeurope.prod.vnet.cog.trafficmanager.net
+ ```
3. Confirm that the IP address matches the IP address of your private endpoint.

> [!NOTE]
-> The resolved IP address points to a virtual network proxy endpoint,
-> which dispatches the network traffic to the private endpoint for the Cognitive Services resource.
-> The behavior will be different for a resource with a custom domain name but *without* private endpoints.
-> See [this section](#dns-configuration) for details.
+> The resolved IP address points to a virtual network proxy endpoint, which dispatches the network traffic to the private endpoint for the Cognitive Services resource. The behavior will be different for a resource with a custom domain name but *without* private endpoints. See [this section](#dns-configuration) for details.
## Adjust existing applications and solutions
-A Speech resource with a custom domain enabled uses a different way to interact with Speech Services. This is true for a custom domain enabled Speech resource both with and without private endpoints. Information in this section applies to both scenarios.
+A Speech resource with a custom domain enabled uses a different way to interact with Speech Services. This is true for a custom-domain-enabled Speech resource both with and without private endpoints. Information in this section applies to both scenarios.
-### Use Speech resource with custom domain name and private endpoint enabled
+### Use a Speech resource with a custom domain name and a private endpoint enabled
-A Speech resource with custom domain name and private endpoint enabled uses a different way to interact with Speech Services. This section explains how to use such resource with Speech Services REST API and [Speech SDK](speech-sdk.md).
+A Speech resource with a custom domain name and a private endpoint enabled uses a different way to interact with Speech Services. This section explains how to use such a resource with the Speech Services REST APIs and the [Speech SDK](speech-sdk.md).
> [!NOTE]
-> Please note, that a Speech Resource without private endpoints, but with **custom domain name** enabled also has a special way of interacting with Speech Services, but this way differs from scenario of a private endpoint enabled Speech Resource. If you have such resource (say, you had a resource with private endpoints, but then decided to remove them) ensure to get familiar with the [correspondent section](#use-speech-resource-with-custom-domain-name-without-private-endpoints).
-
-#### Speech resource with custom domain name and private endpoint. Usage with REST API
-
-We will use `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name (custom domain) for this section.
+> A Speech resource without private endpoints but with a custom domain name enabled also has a special way of interacting with Speech Services. This way differs from the scenario of a private-endpoint-enabled Speech resource. If you have such a resource (for example, you had a resource with private endpoints but then decided to remove them), see the section [Use a Speech resource with a custom domain name and without private endpoints](#use-a-speech-resource-with-a-custom-domain-name-and-without-private-endpoints).
-##### Note on Speech Services REST API
+#### Speech resource with a custom domain name and a private endpoint: Usage with the REST APIs
-Speech Services has REST API for [Speech-to-text](rest-speech-to-text.md) and [Text-to-speech](rest-text-to-speech.md). The following needs to be considered for the private endpoint enabled scenario.
+We'll use `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name (custom domain) for this section.
-Speech-to-text has two different REST APIs. Each API serves a different purpose, uses different endpoints, and requires a different approach when used sing in private endpoint enabled scenario.
+Speech Services has REST APIs for [Speech-to-Text](rest-speech-to-text.md) and [Text-to-Speech](rest-text-to-speech.md). Consider the following information for the private-endpoint-enabled scenario.
-The Speech-to-text REST APIs are:
-- [Speech-to-text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30) is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). v3.0 is a [successor of v2.0](/azure/cognitive-services/speech-service/migrate-v2-to-v3).-- [Speech-to-text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio) is used for OnLine transcription.
+Speech-to-Text has two REST APIs. Each API serves a different purpose, uses different endpoints, and requires a different approach when you're using it in the private-endpoint-enabled scenario.
-Usage of Speech-to-text REST API for short audio and Text-to-speech REST API in the private endpoint scenario is the same and equivalent to [Speech SDK case](#speech-resource-with-custom-domain-name-and-private-endpoint-usage-with-speech-sdk) described later in this article.
+The Speech-to-Text REST APIs are:
+- [Speech-to-Text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30), which is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). v3.0 is a [successor of v2.0](/azure/cognitive-services/speech-service/migrate-v2-to-v3)
+- [Speech-to-Text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio), which is used for online transcription
-Speech-to-text REST API v3.0 is using a different set of endpoints and thus requires a different approach for the private endpoint enabled scenario.
+Usage of the Speech-to-Text REST API for short audio and the text-to-speech REST API in the private endpoint scenario is the same. It's equivalent to the [Speech SDK case](#speech-resource-with-a-custom-domain-name-and-a-private-endpoint-usage-with-the-speech-sdk) described later in this article.
-Both cases are described in the next subsections.
+Speech-to-Text REST API v3.0 uses a different set of endpoints, so it requires a different approach for the private-endpoint-enabled scenario.
+The next subsections describe both cases.
-##### Speech-to-text REST API v3.0
+##### Speech-to-Text REST API v3.0
-Usually Speech resources use [Cognitive Services regional endpoints](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) for communicating with the [Speech-to-text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30). These resources have the following naming format: <p/>`{region}.api.cognitive.microsoft.com`
+Usually, Speech resources use [Cognitive Services regional endpoints](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) for communicating with the [Speech-to-Text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30). These resources have the following naming format: <p/>`{region}.api.cognitive.microsoft.com`.
This is a sample request URL:

```http
https://westeurope.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions
```
-After enabling custom domain for a Speech resource (which is necessary for private endpoints) such resource will use the following DNS name pattern for the basic REST API endpoint: <p/>`{your custom name}.cognitiveservices.azure.com`
+After you enable a custom domain for a Speech resource (which is necessary for private endpoints), that resource will use the following DNS name pattern for the basic REST API endpoint: <p/>`{your custom name}.cognitiveservices.azure.com`.
-That means that in our example the REST API endpoint name will be: <p/>`my-private-link-speech.cognitiveservices.azure.com`
+That means that in our example, the REST API endpoint name will be: <p/>`my-private-link-speech.cognitiveservices.azure.com`.
-And the sample request URL above needs to be converted to:
+And the sample request URL needs to be converted to:
```http
https://my-private-link-speech.cognitiveservices.azure.com/speechtotext/v3.0/transcriptions
```
-This URL should be reachable from the virtual network with the private endpoint attached (provided the [correct DNS resolution](#resolve-dns-from-the virtual-network)).
+This URL should be reachable from the virtual network with the private endpoint attached (provided the [correct DNS resolution](#resolve-dns-from-the-virtual-network)).
-Typically after enabling custom domain name for a Speech resource, you will replace hostname in all request URLs with the new custom domain hostname. All other parts of the request (like the path `/speechtotext/v3.0/transcriptions` in the example above) remain the same.
+After you enable a custom domain name for a Speech resource, you typically replace the host name in all request URLs with the new custom domain host name. All other parts of the request (like the path `/speechtotext/v3.0/transcriptions` in the earlier example) remain the same.
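The host-name swap described above can be sketched with the standard library. This is an illustrative helper of my own (`to_custom_domain` is not part of the Speech SDK), shown with the article's sample resource name:

```python
from urllib.parse import urlparse, urlunparse

def to_custom_domain(url: str, custom_host: str) -> str:
    """Swap the regional host for the custom domain host; the path and query stay the same."""
    return urlunparse(urlparse(url)._replace(netloc=custom_host))

# Regional request URL from the article, rewritten for the custom-domain resource.
regional = "https://westeurope.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions"
print(to_custom_domain(regional, "my-private-link-speech.cognitiveservices.azure.com"))
# https://my-private-link-speech.cognitiveservices.azure.com/speechtotext/v3.0/transcriptions
```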
> [!TIP]
-> Some customers developed applications that use the region part of the regional endpoint DNS name (for example to send the request to the Speech resource deployed in the particular Azure Region).
+> Some customers develop applications that use the region part of the regional endpoint's DNS name (for example, to send the request to the Speech resource deployed in the particular Azure region).
>
-> Speech resource custom domain name contains **no** information about the region where the resource is deployed. So the application logic described above will **not** work and needs to be altered.
+> A custom domain for a Speech resource contains *no* information about the region where the resource is deployed. So the application logic described earlier will *not* work and needs to be altered.
-##### Speech-to-text REST API for short audio and Text-to-speech REST API
+##### Speech-to-Text REST API for short audio and text-to-speech REST API
-[Speech-to-text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio) and [Text-to-speech REST API](rest-text-to-speech.md) use two types of endpoints:
+The [Speech-to-Text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio) and the [text-to-speech REST API](rest-text-to-speech.md) use two types of endpoints:
- [Cognitive Services regional endpoints](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) for communicating with the Cognitive Services REST API to obtain an authorization token
- Special endpoints for all other operations
-The detailed description of the special endpoints and how their URL should be transformed for a private endpoint enabled Speech resource is provided in [this subsection](#general-principle) of "Usage with Speech SDK" section below. The same principle described for SDK applies for the Speech-to-text v1.0 and Text-to-speech REST API.
+The detailed description of the special endpoints and how their URL should be transformed for a private-endpoint-enabled Speech resource is provided in [this subsection](#general-principles) about usage with the Speech SDK. The same principle described for the SDK applies for the Speech-to-Text REST API v1.0 and the text-to-speech REST API.
-Get familiar with the material in the subsection mentioned in the previous paragraph and see the following example. (The example describes Text-to-speech REST API; usage of Speech-to-text REST API for short audio is fully equivalent)
+Get familiar with the material in the subsection mentioned in the previous paragraph and see the following example. The example describes the text-to-speech REST API. Usage of the Speech-to-Text REST API for short audio is fully equivalent.
> [!NOTE]
-> When using **Speech-to-text REST API for short audio** in private endpoint scenarios, use an Authorization token [passed through](rest-speech-to-text.md#request-headers) `Authorization` [header](rest-speech-to-text.md#request-headers). Passing Speech subscription key to the special endpoint via `Ocp-Apim-Subscription-Key` header will **not** work and will generate Error 401.
+> When you're using the Speech-to-Text REST API for short audio in private endpoint scenarios, use an authorization token [passed through](rest-speech-to-text.md#request-headers) the `Authorization` [header](rest-speech-to-text.md#request-headers). Passing a speech subscription key to the special endpoint via the `Ocp-Apim-Subscription-Key` header will *not* work and will generate Error 401.
-**Text-to-speech REST API usage example.**
+**Text-to-Speech REST API usage example**
-We will use West Europe as a sample Azure Region and `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name (custom domain). Custom domain name `my-private-link-speech.cognitiveservices.azure.com` in our example belongs to the Speech resource created in West Europe region.
+We'll use West Europe as a sample Azure region and `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name (custom domain). The custom domain name `my-private-link-speech.cognitiveservices.azure.com` in our example belongs to the Speech resource created in the West Europe region.
-To get the list of the voices supported in the region one needs to do the following two operations:
+To get the list of the voices supported in the region, do the following two operations:
-- Obtain authorization token:
-```http
-https://westeurope.api.cognitive.microsoft.com/sts/v1.0/issuetoken
-```
-- Using the token, get the list of voices:
-```http
-https://westeurope.tts.speech.microsoft.com/cognitiveservices/voices/list
-```
-(See more details on the steps above in [Text-to-speech REST API documentation](rest-text-to-speech.md))
+- Obtain an authorization token:
+ ```http
+ https://westeurope.api.cognitive.microsoft.com/sts/v1.0/issuetoken
+ ```
+- By using the token, get the list of voices:
+ ```http
+ https://westeurope.tts.speech.microsoft.com/cognitiveservices/voices/list
+ ```
+See more details on the preceding steps in the [text-to-speech REST API documentation](rest-text-to-speech.md).
-For the private endpoint enabled Speech resource the endpoint URLs for the same operation sequence need to be modified. The same sequence will look like this:
-- Obtain authorization token via
-```http
-https://my-private-link-speech.cognitiveservices.azure.com/v1.0/issuetoken
-```
-(see detailed explanation in [Speech-to-text REST API v3.0](#speech-to-text-rest-api-v30) subsection above)
-- Using the obtained token get the list of voices via
-```http
-https://my-private-link-speech.cognitiveservices.azure.com/tts/cognitiveservices/voices/list
-```
-(see detailed explanation in [General principle](#general-principle) subsection of "Usage with Speech SDK" section below)
+For the private-endpoint-enabled Speech resource, the endpoint URLs for the same operation sequence need to be modified. The same sequence will look like this:
-#### Speech resource with custom domain name and private endpoint. Usage with Speech SDK
+- Obtain an authorization token:
+ ```http
+ https://my-private-link-speech.cognitiveservices.azure.com/v1.0/issuetoken
+ ```
+ See the detailed explanation in the earlier [Speech-to-Text REST API v3.0](#speech-to-text-rest-api-v30) subsection.
-Using Speech SDK with custom domain name and private endpoint enabled Speech resources requires the review and likely changes of your application code.
+- By using the obtained token, get the list of voices:
+ ```http
+ https://my-private-link-speech.cognitiveservices.azure.com/tts/cognitiveservices/voices/list
+ ```
+ See a detailed explanation in the [General principles](#general-principles) subsection for the Speech SDK.
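To summarize the sequence, here's a small illustrative helper that builds both request URLs from the custom domain. The helper is a sketch of my own, not an official API; the URL shapes are the ones shown above:

```python
def private_endpoint_urls(custom_domain: str) -> dict:
    """Build the token and voice-list URLs for a private-endpoint-enabled resource."""
    return {
        # Basic REST API endpoint: the host changes, the path stays the same.
        "token": f"https://{custom_domain}/v1.0/issuetoken",
        # Special endpoint: the offering ("tts") moves from the host into the path.
        "voices": f"https://{custom_domain}/tts/cognitiveservices/voices/list",
    }

urls = private_endpoint_urls("my-private-link-speech.cognitiveservices.azure.com")
print(urls["token"])
print(urls["voices"])
```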
-We will use `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name (custom domain) for this section.
+#### Speech resource with a custom domain name and a private endpoint: Usage with the Speech SDK
-##### General principle
+Using the Speech SDK with a custom domain name and private-endpoint-enabled Speech resources requires you to review and likely change your application code.
-Usually in SDK scenarios (as well as in the Text-to-speech REST API scenarios) Speech resources use the dedicated regional endpoints for different service offerings. The DNS name format for these endpoints is: </p>`{region}.{speech service offering}.speech.microsoft.com`
+We'll use `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name (custom domain) for this section.
-Example: </p>`westeurope.stt.speech.microsoft.com`
+##### General principles
-All possible values for the region (first element of the DNS name) are listed [here](regions.md) This table below presents the possible value for the Speech Services offering (second element of the DNS name):
+Usually in SDK scenarios (as well as in the text-to-speech REST API scenarios), Speech resources use the dedicated regional endpoints for different service offerings. The DNS name format for these endpoints is:
-| DNS name value | Speech Services offering |
+`{region}.{speech service offering}.speech.microsoft.com`
+
+An example DNS name is:
+
+`westeurope.stt.speech.microsoft.com`
+
+All possible values for the region (first element of the DNS name) are listed in [Speech service supported regions](regions.md). The following table presents the possible values for the Speech Services offering (second element of the DNS name):
+
+| DNS name value | Speech service offering |
|----------------|-------------------------------------------------------------|
| `commands`     | [Custom Commands](custom-commands.md)                       |
| `convai`       | [Conversation Transcription](conversation-transcription.md) |
| `s2s`          | [Speech Translation](speech-translation.md)                 |
-| `stt` | [Speech-to-text](speech-to-text.md) |
-| `tts` | [Text-to-speech](text-to-speech.md) |
+| `stt` | [Speech-to-Text](speech-to-text.md) |
+| `tts` | [Text-to-Speech](text-to-speech.md) |
| `voice` | [Custom Voice](how-to-custom-voice.md) |
-So the example above (`westeurope.stt.speech.microsoft.com`) stands for Speech-to-text endpoint in West Europe.
+So the earlier example (`westeurope.stt.speech.microsoft.com`) stands for a Speech-to-Text endpoint in West Europe.
-Private endpoint enabled endpoints communicate with Speech Services via a special proxy and because of that **you must change the endpoint connection URLs**.
+Private-endpoint-enabled endpoints communicate with Speech Services via a special proxy. Because of that, *you must change the endpoint connection URLs*.
A "standard" endpoint URL looks like: <p/>`{region}.{speech service offering}.speech.microsoft.com/{URL path}`

A private endpoint URL looks like: <p/>`{your custom name}.cognitiveservices.azure.com/{speech service offering}/{URL path}`
-**Example 1.** Application is communicating using the following URL (speech recognition using base model for US English in West Europe):
+**Example 1.** An application is communicating by using the following URL (speech recognition using the base model for US English in West Europe):
```
wss://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US
```
-To use it in the private endpoint enabled scenario when custom domain name of the Speech resource is `my-private-link-speech.cognitiveservices.azure.com` you must modify the URL like this:
+To use it in the private-endpoint-enabled scenario when the custom domain name of the Speech resource is `my-private-link-speech.cognitiveservices.azure.com`, you must modify the URL like this:
```
wss://my-private-link-speech.cognitiveservices.azure.com/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US
```
@@ -421,97 +417,96 @@
Notice the details:
-- Hostname `westeurope.stt.speech.microsoft.com` is replaced by the custom domain hostname `my-private-link-speech.cognitiveservices.azure.com`.
-- Second element of the original DNS name (`stt`) becomes the first element of the URL path and precedes the original path. So the original URL `/speech/recognition/conversation/cognitiveservices/v1?language=en-US` becomes `/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US`.
+- The host name `westeurope.stt.speech.microsoft.com` is replaced by the custom domain host name `my-private-link-speech.cognitiveservices.azure.com`.
+- The second element of the original DNS name (`stt`) becomes the first element of the URL path and precedes the original path. So the original URL `/speech/recognition/conversation/cognitiveservices/v1?language=en-US` becomes `/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US`.
-**Example 2.** Application uses the following URL to synthesize speech in West Europe using a custom voice model):
+**Example 2.** An application uses the following URL to synthesize speech in West Europe by using a custom voice model:
```http
https://westeurope.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId=974481cc-b769-4b29-af70-2fb557b897c4
```
-Following is an equivalent URL that uses a private endpoint enabled where the custom domain name of the Speech resource is `my-private-link-speech.cognitiveservices.azure.com`:
+The following equivalent URL uses a private-endpoint-enabled resource, where the custom domain name of the Speech resource is `my-private-link-speech.cognitiveservices.azure.com`:
```http
https://my-private-link-speech.cognitiveservices.azure.com/voice/cognitiveservices/v1?deploymentId=974481cc-b769-4b29-af70-2fb557b897c4
```
-The same principle as in Example 1 is applied, but the key element this time is `voice`.
+The same principle as in Example 1 applies, but the key element this time is `voice`.
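The rewrite rule behind both examples can be expressed as a short helper. This is an illustrative sketch under the assumption that the regional host always has the form `{region}.{offering}.speech.microsoft.com`; it is not part of the Speech SDK:

```python
from urllib.parse import urlparse, urlunparse

def to_private_endpoint_url(url: str, custom_host: str) -> str:
    """Rewrite {region}.{offering}.speech.microsoft.com/{path} as {custom_host}/{offering}/{path}."""
    parts = urlparse(url)
    if not parts.netloc.endswith(".speech.microsoft.com"):
        raise ValueError(f"not a regional Speech endpoint: {parts.netloc}")
    offering = parts.netloc.split(".")[1]  # e.g. "stt", "tts", "voice"
    # Move the offering into the URL path and swap in the custom domain host.
    return urlunparse(parts._replace(netloc=custom_host, path=f"/{offering}{parts.path}"))

# Example 1 from this article:
print(to_private_endpoint_url(
    "wss://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US",
    "my-private-link-speech.cognitiveservices.azure.com",
))
```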
-##### Modify applications
+##### Modifying applications
Follow these steps to modify your code:
-**1. Determine application endpoint URL**
-- [Enable logging for your application](how-to-use-logging.md) and run it to log activity.
-- In the log file, search for `SPEECH-ConnectionUrl`. In matching lines, the `value` parameter contains the full URL your application used to reach the Speech service.
-
-Example:
-
-```
-(114917): 41ms SPX_DBG_TRACE_VERBOSE: property_bag_impl.cpp:138 ISpxPropertyBagImpl::LogPropertyAndValue: this=0x0000028FE4809D78; name='SPEECH-ConnectionUrl'; value='wss://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?traffictype=spx&language=en-US'
-```
-
-So the URL used by the application in this example is:
-
-```
-wss://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US
-```
-
-**2. Create `SpeechConfig` instance using full endpoint URL**
-
-Modify the endpoint you determined in the previous section as described in [General principle](#general-principle) above.
-
-Now modify how you create the instance of `SpeechConfig`. Most likely your today's application is using something like this:
-```csharp
-var config = SpeechConfig.FromSubscription(subscriptionKey, azureRegion);
-```
-This will not work for private endpoint enabled Speech resource because of the hostname and URL changes we described in the previous sections. If you try to run your existing application without any modifications using the Key of a private endpoint enabled resource, you will get Authentication error (401).
-
-To make it work, modify how you instantiate `SpeechConfig` class and use "from endpoint" / "with endpoint" initialization. Suppose we have the following two variables defined:
-- `subscriptionKey` containing the Key of the private endpoint enabled Speech resource
-- `endPoint` containing the full **modified** endpoint URL (using the type required by the correspondent programming language). In our example this variable should contain
-```
-wss://my-private-link-speech.cognitiveservices.azure.com/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US
-```
-
-Next, create a `SpeechConfig` instance:
-```csharp
-var config = SpeechConfig.FromEndpoint(endPoint, subscriptionKey);
-```
-```cpp
-auto config = SpeechConfig::FromEndpoint(endPoint, subscriptionKey);
-```
-```java
-SpeechConfig config = SpeechConfig.fromEndpoint(endPoint, subscriptionKey);
-```
-```python
-import azure.cognitiveservices.speech as speechsdk
-speech_config = speechsdk.SpeechConfig(endpoint=endPoint, subscription=subscriptionKey)
-```
-```objectivec
-SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithEndpoint:endPoint subscription:subscriptionKey];
-```
+1. Determine the application endpoint URL:
+
+ - [Enable logging for your application](how-to-use-logging.md) and run it to log activity.
+ - In the log file, search for `SPEECH-ConnectionUrl`. In matching lines, the `value` parameter contains the full URL that your application used to reach Speech Services.
+
+ Example:
+
+ ```
+ (114917): 41ms SPX_DBG_TRACE_VERBOSE: property_bag_impl.cpp:138 ISpxPropertyBagImpl::LogPropertyAndValue: this=0x0000028FE4809D78; name='SPEECH-ConnectionUrl'; value='wss://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?traffictype=spx&language=en-US'
+ ```
+
+ So the URL that the application used in this example is:
+
+ ```
+ wss://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US
+ ```
+
+2. Create a `SpeechConfig` instance by using a full endpoint URL:
+
+ 1. Modify the endpoint that you just determined, as described in the earlier [General principles](#general-principles) section.
+
+ 1. Modify how you create the instance of `SpeechConfig`. Most likely, your application is using something like this:
+ ```csharp
+ var config = SpeechConfig.FromSubscription(subscriptionKey, azureRegion);
+ ```
+ This won't work for a private-endpoint-enabled Speech resource because of the host name and URL changes that we described in the previous sections. If you try to run your existing application without any modifications by using the key of a private-endpoint-enabled resource, you'll get an authentication error (401).
+
+ To make it work, modify how you instantiate the `SpeechConfig` class and use "from endpoint"/"with endpoint" initialization. Suppose we have the following two variables defined:
+ - `subscriptionKey` contains the key of the private-endpoint-enabled Speech resource.
+ - `endPoint` contains the full *modified* endpoint URL (using the type required by the corresponding programming language). In our example, this variable should contain:
+ ```
+ wss://my-private-link-speech.cognitiveservices.azure.com/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US
+ ```
+
+ Create a `SpeechConfig` instance:
+ ```csharp
+ var config = SpeechConfig.FromEndpoint(endPoint, subscriptionKey);
+ ```
+ ```cpp
+ auto config = SpeechConfig::FromEndpoint(endPoint, subscriptionKey);
+ ```
+ ```java
+ SpeechConfig config = SpeechConfig.fromEndpoint(endPoint, subscriptionKey);
+ ```
+ ```python
+ import azure.cognitiveservices.speech as speechsdk
+ speech_config = speechsdk.SpeechConfig(endpoint=endPoint, subscription=subscriptionKey)
+ ```
+ ```objectivec
+ SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithEndpoint:endPoint subscription:subscriptionKey];
+ ```
> [!TIP]
-> The query parameters specified in the endpoint URI are not changed, even if they are set by any other APIs. For example, if the
-> recognition language is defined in the URI as query parameter "language=en-US", and is also set to "ru-RU" via the correspondent
-> property, the language setting in the URI is used, and the effective language is "en-US". Parameters set in the endpoint URI always
-> take precidence. Only parameters that are not specified in the endpoint URI can be overridden by other APIs.
+> The query parameters specified in the endpoint URI are not changed, even if they're set by other APIs. For example, if the recognition language is defined in the URI as query parameter `language=en-US`, and is also set to `ru-RU` via the corresponding property, the language setting in the URI is used. The effective language is then `en-US`.
+>
+> Parameters set in the endpoint URI always take precedence. Other APIs can override only parameters that are not specified in the endpoint URI.
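The precedence rule in this tip can be modeled as a simple merge. This sketch only illustrates the stated rule; it is not the SDK's actual implementation, and `effective_params` is a hypothetical name:

```python
from urllib.parse import urlparse, parse_qs

def effective_params(endpoint_url: str, property_params: dict) -> dict:
    """Parameters in the endpoint URI win; properties only fill in what the URI omits."""
    uri_params = {k: v[0] for k, v in parse_qs(urlparse(endpoint_url).query).items()}
    return {**property_params, **uri_params}

url = ("wss://my-private-link-speech.cognitiveservices.azure.com"
       "/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US")
# "ru-RU" set via a property is overridden by "en-US" from the URI:
print(effective_params(url, {"language": "ru-RU"}))  # {'language': 'en-US'}
```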
-After this modification your application should work with the private enabled Speech resources. We are working on more seamless support of private endpoint scenario.
+After this modification, your application should work with the private-endpoint-enabled Speech resources. We're working on more seamless support of private endpoint scenarios.
-### Use Speech resource with custom domain name without private endpoints
+### Use a Speech resource with a custom domain name and without private endpoints
-In this article we have pointed out several times, that enabling custom domain for a Speech resource is **irreversible** and such resource will use a different way of communicating with Speech services comparing to the "usual" ones (that is the ones, that are using [regional endpoint names](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints)).
+In this article, we've pointed out several times that enabling a custom domain for a Speech resource is *irreversible*. Such a resource will use a different way of communicating with Speech Services, compared to the ones that are using [regional endpoint names](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints).
-This section explains how to use a Speech resource with enabled custom domain name but **without** any private endpoints with Speech Services REST API and [Speech SDK](speech-sdk.md). This may be a resource that was once used in a private endpoint scenario, but then had its private endpoint(s) deleted.
+This section explains how to use a Speech resource with an enabled custom domain name but *without* any private endpoints with the Speech Services REST APIs and [Speech SDK](speech-sdk.md). This might be a resource that was once used in a private endpoint scenario, but then had its private endpoints deleted.
#### DNS configuration
-Remember how a custom domain DNS name of the private endpoint enabled Speech resource is [resolved from public networks](#resolve-dns-from-other-networks). In this case IP address resolved points to a VNet Proxy endpoint, which is used for dispatching the network traffic to the private endpoint enabled Cognitive Services resource.
+Remember how a custom domain DNS name of the private-endpoint-enabled Speech resource is [resolved from public networks](#resolve-dns-from-other-networks). In this case, the IP address resolved points to a proxy endpoint for a virtual network. That endpoint is used for dispatching the network traffic to the private-endpoint-enabled Cognitive Services resource.
-However when **all** resource private endpoints are removed (or right after the enabling of the custom domain name) CNAME record of the Speech resource is reprovisioned and now points to the IP address of the correspondent [Cognitive Services regional endpoint](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints).
+However, when *all* resource private endpoints are removed (or right after the enabling of the custom domain name), the CNAME record of the Speech resource is reprovisioned. It now points to the IP address of the corresponding [Cognitive Services regional endpoint](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints).
So the output of the `nslookup` command will look like this:

```dos
@@ -531,84 +526,78 @@ Aliases: my-private-link-speech.cognitiveservices.azure.com
```

Compare it with the output from [this section](#resolve-dns-from-other-networks).
-#### Speech resource with custom domain name without private endpoints. Usage with REST API
+#### Speech resource with a custom domain name and without private endpoints: Usage with the REST APIs
-##### Speech-to-text REST API v3.0
+##### Speech-to-Text REST API v3.0
-Speech-to-text REST API v3.0 usage is fully equivalent to the case of [private endpoint enabled Speech resources](#speech-to-text-rest-api-v30).
+Speech-to-Text REST API v3.0 usage is fully equivalent to the case of [private-endpoint-enabled Speech resources](#speech-to-text-rest-api-v30).
-##### Speech-to-text REST API for short audio and Text-to-speech REST API
+##### Speech-to-Text REST API for short audio and text-to-speech REST API
-In this case Speech-to-text REST API for short audio and Text-to-speech REST API usage has no differences to the general case with one exception for Speech-to-text REST API for short audio (see Note below). Both APIs should be used as described in [Speech-to-text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio) and [Text-to-speech REST API](rest-text-to-speech.md) documentation.
+In this case, usage of the Speech-to-Text REST API for short audio and usage of the text-to-speech REST API have no differences from the general case, with one exception for the Speech-to-Text REST API for short audio. (See the following note.) You should use both APIs as described in the [speech-to-text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio) and [text-to-speech REST API](rest-text-to-speech.md) documentation.
> [!NOTE]
-> When using **Speech-to-text REST API for short audio** in custom domain scenarios, use an Authorization token [passed through](rest-speech-to-text.md#request-headers) `Authorization` [header](rest-speech-to-text.md#request-headers). Passing Speech subscription key to the special endpoint via `Ocp-Apim-Subscription-Key` header will **not** work and will generate Error 401.
+> When you're using the Speech-to-Text REST API for short audio in custom domain scenarios, use an authorization token [passed through](rest-speech-to-text.md#request-headers) an `Authorization` [header](rest-speech-to-text.md#request-headers). Passing a speech subscription key to the special endpoint via the `Ocp-Apim-Subscription-Key` header will *not* work and will generate Error 401.
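The distinction in the note can be sketched as follows (plain Python; the helper name `short_audio_headers` is ours, and the `Content-Type` value is the one documented for 16-kHz PCM WAV audio):

```python
def short_audio_headers(authorization_token):
    """Build request headers for the speech-to-text REST API for short audio
    in a custom domain scenario: the token goes in the Authorization header.
    Sending the subscription key via Ocp-Apim-Subscription-Key instead would
    produce a 401 error against the custom domain endpoint.
    """
    return {
        "Authorization": f"Bearer {authorization_token}",
        "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
        "Accept": "application/json",
    }

headers = short_audio_headers("<authorization-token>")
print(headers["Authorization"])
```

Note that the headers deliberately contain no `Ocp-Apim-Subscription-Key` entry.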
+
+#### Speech resource with a custom domain name and without private endpoints: Usage with the Speech SDK
-#### Speech resource with custom domain name without private endpoints. Usage with Speech SDK
+Using the Speech SDK with custom-domain-enabled Speech resources *without* private endpoints requires the review of, and likely changes to, your application code. Note that these changes are different from the case of a [private-endpoint-enabled Speech resource](#speech-resource-with-a-custom-domain-name-and-a-private-endpoint-usage-with-the-speech-sdk). We're working on more seamless support of private endpoint and custom domain scenarios.
-Using Speech SDK with custom domain name enabled Speech resources **without** private endpoints requires the review and likely changes of your application code. Note that these changes are **different** comparing to the case of a [private endpoint enabled Speech resource](#speech-resource-with-custom-domain-name-and-private-endpoint-usage-with-speech-sdk). We are working on more seamless support of private endpoint / custom domain scenario.
+We'll use `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name (custom domain) for this section.
-We will use `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name (custom domain) for this section.
+In the section on [private-endpoint-enabled Speech resources](#speech-resource-with-a-custom-domain-name-and-a-private-endpoint-usage-with-the-speech-sdk), we explained how to determine the endpoint URL, modify it, and make it work through "from endpoint"/"with endpoint" initialization of the `SpeechConfig` class instance.
-In the section on [private endpoint enabled Speech resource](#speech-resource-with-custom-domain-name-and-private-endpoint-usage-with-speech-sdk) we explained how to determine the endpoint URL used, modify it and make it work through "from endpoint"/"with endpoint" initialization of the `SpeechConfig` class instance.
+However, if you try to run the same application after having all private endpoints removed (allowing some time for the corresponding DNS record reprovisioning), you'll get an internal service error (404). The reason is that the [DNS record](#dns-configuration) now points to the regional Cognitive Services endpoint instead of the virtual network proxy, and the URL paths like `/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US` won't be found there.
-However if you try to run the same application after having all private endpoints removed (allowing some time to the correspondent DNS record reprovisioning) you will get Internal service error (404). The reason is the [DNS record](#dns-configuration) that now points to the regional Cognitive Services endpoint instead of the VNet proxy, and the URL paths like `/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US` will not be found there, hence the "Not found" error (404).
+If you roll back your application to the standard instantiation of `SpeechConfig` in the style of the following code, your application will terminate with the authentication error (401):
-If you "roll-back" your application to the "standard" instantiation of `SpeechConfig` in the style of
```csharp
var config = SpeechConfig.FromSubscription(subscriptionKey, azureRegion);
```
-your application will terminate with the Authentication error (401).
##### Modifying applications

To let your application use a Speech resource with a custom domain name and without private endpoints, follow these steps:
-**1. Request Authorization Token from the Cognitive Services REST API**
-
-[This article](../authentication.md#authenticate-with-an-authentication-token) shows how to get the token using the Cognitive Services REST API.
-
-Use your custom domain name in the endpoint URL, that is in our example this URL is:
-```http
-https://my-private-link-speech.cognitiveservices.azure.com/sts/v1.0/issueToken
-```
-> [!TIP]
-> You can find this URL in Azure portal. On your Speech resource page, under the under the **Resource management** group, select **Keys and Endpoint**.
-
-**2. Create a `SpeechConfig` instance using "from authorization token" / "with authorization token" method.**
-
-Create a `SpeechConfig` instance using the authorization token you obtained in the previous section. Suppose we have the following variables defined:
-
-- `token`: the authorization token obtained in the previous section
-- `azureRegion`: the name of the Speech resource [region](regions.md) (example: `westeurope`)
-- `outError`: (only for [Objective C](/objectivec/cognitive-services/speech/spxspeechconfiguration#initwithauthorizationtokenregionerror) case)
-
-Next, create a `SpeechConfig` instance:
-
-```csharp
-var config = SpeechConfig.FromAuthorizationToken(token, azureRegion);
-```
-```cpp
-auto config = SpeechConfig::FromAuthorizationToken(token, azureRegion);
-```
-```java
-SpeechConfig config = SpeechConfig.fromAuthorizationToken(token, azureRegion);
-```
-```python
-import azure.cognitiveservices.speech as speechsdk
-speech_config = speechsdk.SpeechConfig(auth_token=token, region=azureRegion)
-```
-```objectivec
-SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithAuthorizationToken:token region:azureRegion error:outError];
-```
+1. Request an authorization token from the Cognitive Services REST API. [This article](../authentication.md#authenticate-with-an-authentication-token) shows how to get the token.
+
+ Use your custom domain name in the endpoint URL. In our example, this URL is:
+ ```http
+ https://my-private-link-speech.cognitiveservices.azure.com/sts/v1.0/issueToken
+ ```
+ > [!TIP]
+ > You can find this URL in the Azure portal. On your Speech resource page, under the **Resource management** group, select **Keys and Endpoint**.
+
+1. Create a `SpeechConfig` instance by using the authorization token that you obtained in the previous section. Suppose we have the following variables defined:
+
+ - `token`: the authorization token obtained in the previous section
+ - `azureRegion`: the name of the Speech resource [region](regions.md) (example: `westeurope`)
+ - `outError`: (only for the [Objective C](/objectivec/cognitive-services/speech/spxspeechconfiguration#initwithauthorizationtokenregionerror) case)
+
+ Create a `SpeechConfig` instance like this:
+
+ ```csharp
+ var config = SpeechConfig.FromAuthorizationToken(token, azureRegion);
+ ```
+ ```cpp
+ auto config = SpeechConfig::FromAuthorizationToken(token, azureRegion);
+ ```
+ ```java
+ SpeechConfig config = SpeechConfig.fromAuthorizationToken(token, azureRegion);
+ ```
+ ```python
+ import azure.cognitiveservices.speech as speechsdk
+ speech_config = speechsdk.SpeechConfig(auth_token=token, region=azureRegion)
+ ```
+ ```objectivec
+ SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithAuthorizationToken:token region:azureRegion error:outError];
+ ```
> [!NOTE]
-> The caller needs to ensure that the authorization token is valid.
-> Before the authorization token expires, the caller needs to refresh it by calling this setter with a new valid token.
-> As configuration values are copied when creating a new recognizer or synthesizer, the new token value will not apply to recognizers or synthesizers that have already been created.
-> For these, set the authorization token of the corresponding recognizer or synthesizer to refresh the token.
-> If you don't refresh the token, the the recognizer or synthesizer will encounter errors while operating.
+> The caller needs to ensure that the authorization token is valid. Before the authorization token expires, the caller needs to refresh it by calling this setter with a new valid token. Because configuration values are copied when you're creating a new recognizer or synthesizer, the new token value will not apply to recognizers or synthesizers that have already been created.
+>
+> For these, set the authorization token of the corresponding recognizer or synthesizer to refresh the token. If you don't refresh the token, the recognizer or synthesizer will encounter errors while operating.
-After this modification your application should work with Speech resources that use a custom domain name without private endpoints.
+After this modification, your application should work with Speech resources that use a custom domain name without private endpoints.
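The two steps above, plus the token-refresh requirement from the note, can be sketched in plain Python (standard library only; the names `issue_token_url` and `TokenCache` are ours, and the 10-minute lifetime is an assumption you should verify against the token your resource actually issues):

```python
import time

def issue_token_url(custom_domain):
    """Token endpoint for a Speech resource with a custom domain name."""
    return f"https://{custom_domain}/sts/v1.0/issueToken"

class TokenCache:
    """Caches an authorization token and refreshes it before it expires.

    `fetch` is a callable that actually requests a new token (for example,
    an HTTP POST to the issueToken URL with the Ocp-Apim-Subscription-Key
    header); it's injected so this sketch stays runnable without network access.
    """
    def __init__(self, fetch, lifetime_seconds=600, margin_seconds=60):
        self._fetch = fetch
        self._lifetime = lifetime_seconds
        self._margin = margin_seconds
        self._token = None
        self._expires_at = 0.0

    def token(self, now=None):
        now = time.time() if now is None else now
        if self._token is None or now >= self._expires_at - self._margin:
            self._token = self._fetch()          # refresh before expiry
            self._expires_at = now + self._lifetime
        return self._token

cache = TokenCache(fetch=lambda: "fresh-token")
print(issue_token_url("my-private-link-speech.cognitiveservices.azure.com"))
print(cache.token())
```

In a real application, the refreshed token would also be pushed to any already-created recognizer or synthesizer, as the note above explains.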
## Pricing
@@ -618,5 +607,5 @@ For pricing details, see [Azure Private Link pricing](https://azure.microsoft.co
* [Azure Private Link](../../private-link/private-link-overview.md)
* [Speech SDK](speech-sdk.md)
-* [Speech-to-text REST API](rest-speech-to-text.md)
-* [Text-to-speech REST API](rest-text-to-speech.md)
+* [Speech-to-Text REST API](rest-speech-to-text.md)
+* [Text-to-Speech REST API](rest-text-to-speech.md)
container-instances https://docs.microsoft.com/en-us/azure/container-instances/container-instances-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-faq.md
@@ -50,6 +50,10 @@ See more [detailed guidance](container-instances-troubleshooting.md#container-ta
Use the smallest image that satisfies your requirements. For Linux, you could use a *runtime-alpine* .NET Core image, which has been supported since the release of .NET Core 2.1. For Windows, if you are using the full .NET Framework, then you need to use a Windows Server Core image (runtime-only image, such as *4.7.2-windowsservercore-ltsc2016*). Runtime-only images are smaller but do not support workloads that require the .NET SDK.
+### What types of container registries are compatible with ACI?
+
+ACI supports image pulls from ACR and other third-party container registries such as Docker Hub. ACI also supports image pulls from on-premises registries, as long as they are OCI-compatible and have an endpoint that is publicly exposed to the internet.
+
## Availability and quotas

### How many cores and memory should I allocate for my containers or the container group?
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/cosmosdb-monitor-resource-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cosmosdb-monitor-resource-logs.md
@@ -30,27 +30,55 @@ Platform metrics and the Activity logs are collected automatically, whereas you
* **DataPlaneRequests**: Select this option to log back-end requests to all APIs, which include SQL, Graph, MongoDB, Cassandra, and Table API accounts in Azure Cosmos DB. Key properties to note are: `Requestcharge`, `statusCode`, `clientIPaddress`, `partitionID`, `resourceTokenPermissionId`, and `resourceTokenPermissionMode`.
- ```json
+ ```json
{ "time": "2019-04-23T23:12:52.3814846Z", "resourceId": "/SUBSCRIPTIONS/<your_subscription_ID>/RESOURCEGROUPS/<your_resource_group>/PROVIDERS/MICROSOFT.DOCUMENTDB/DATABASEACCOUNTS/<your_database_account>", "category": "DataPlaneRequests", "operationName": "ReadFeed", "properties": {"activityId": "66a0c647-af38-4b8d-a92a-c48a805d6460","requestResourceType": "Database","requestResourceId": "","collectionRid": "","statusCode": "200","duration": "0","userAgent": "Microsoft.Azure.Documents.Common/2.2.0.0","clientIpAddress": "10.0.0.24","requestCharge": "1.000000","requestLength": "0","responseLength": "372", "resourceTokenPermissionId": "perm-prescriber-app","resourceTokenPermissionMode": "all", "resourceTokenUserRid": "","region": "East US","partitionId": "062abe3e-de63-4aa5-b9de-4a77119c59f8","keyType": "PrimaryReadOnlyMasterKey","databaseName": "","collectionName": ""}}
- ```
+ ```
+
+ Use the following query to get logs corresponding to data plane requests:
+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="DataPlaneRequests"
+ ```
* **MongoRequests**: Select this option to log user-initiated requests from the front end to serve requests to Azure Cosmos DB's API for MongoDB. This log type is not available for other API accounts. Key properties to note are: `Requestcharge`, `opCode`. When you enable MongoRequests in diagnostics logs, make sure to turn off the DataPlaneRequests. You would see one log for every request made on the API.

    ```json
    { "time": "2019-04-10T15:10:46.7820998Z", "resourceId": "/SUBSCRIPTIONS/<your_subscription_ID>/RESOURCEGROUPS/<your_resource_group>/PROVIDERS/MICROSOFT.DOCUMENTDB/DATABASEACCOUNTS/<your_database_account>", "category": "MongoRequests", "operationName": "ping", "properties": {"activityId": "823cae64-0000-0000-0000-000000000000","opCode": "MongoOpCode_OP_QUERY","errorCode": "0","duration": "0","requestCharge": "0.000000","databaseName": "admin","collectionName": "$cmd","retryCount": "0"}}
    ```
+
+ Use the following query to get logs corresponding to MongoDB requests:
+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="MongoRequests"
+ ```
* **CassandraRequests**: Select this option to log user-initiated requests from the front end to serve requests to Azure Cosmos DB's API for Cassandra. This log type is not available for other API accounts. The key properties to note are `operationName`, `requestCharge`, `piiCommandText`. When you enable CassandraRequests in diagnostics logs, make sure to turn off the DataPlaneRequests. You would see one log for every request made on the API.

    ```json
    { "time": "2020-03-30T23:55:10.9579593Z", "resourceId": "/SUBSCRIPTIONS/<your_subscription_ID>/RESOURCEGROUPS/<your_resource_group>/PROVIDERS/MICROSOFT.DOCUMENTDB/DATABASEACCOUNTS/<your_database_account>", "category": "CassandraRequests", "operationName": "QuerySelect", "properties": {"activityId": "6b33771c-baec-408a-b305-3127c17465b6","opCode": "<empty>","errorCode": "-1","duration": "0.311900","requestCharge": "1.589237","databaseName": "system","collectionName": "local","retryCount": "<empty>","authorizationTokenType": "PrimaryMasterKey","address": "104.42.195.92","piiCommandText": "{"request":"SELECT key from system.local"}","userAgent": """"}}
    ```
+
+ Use the following query to get logs corresponding to Cassandra requests:
+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="CassandraRequests"
+ ```
* **GremlinRequests**: Select this option to log user-initiated requests from the front end to serve requests to Azure Cosmos DB's API for Gremlin. This log type is not available for other API accounts. The key properties to note are `operationName` and `requestCharge`. When you enable GremlinRequests in diagnostics logs, make sure to turn off the DataPlaneRequests. You would see one log for every request made on the API.

    ```json
    { "time": "2021-01-06T19:36:58.2554534Z", "resourceId": "/SUBSCRIPTIONS/<your_subscription_ID>/RESOURCEGROUPS/<your_resource_group>/PROVIDERS/MICROSOFT.DOCUMENTDB/DATABASEACCOUNTS/<your_database_account>", "category": "GremlinRequests", "operationName": "eval", "properties": {"activityId": "b16bd876-0e5c-4448-90d1-7f3134c6b5ff", "errorCode": "200", "duration": "9.6036", "requestCharge": "9.059999999999999", "databaseName": "GraphDemoDatabase", "collectionName": "GraphDemoContainer", "authorizationTokenType": "PrimaryMasterKey", "address": "98.225.2.189", "estimatedDelayFromRateLimitingInMilliseconds": "0", "retriedDueToRateLimiting": "False", "region": "Australia East", "requestLength": "266", "responseLength": "364", "userAgent": "<empty>"}}
    ```
+
+ Use the following query to get logs corresponding to Gremlin requests:
+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="GremlinRequests"
+ ```
* **QueryRuntimeStatistics**: Select this option to log the query text that was executed. This log type is available for SQL API accounts only.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/create-mongodb-go https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-mongodb-go.md
@@ -273,7 +273,7 @@ The `--kind MongoDB` parameter enables MongoDB client connections.
When the Azure Cosmos DB account is created, the Azure CLI shows information similar to the following example.

> [!NOTE]
-> This example uses JSON as the Azure CLI output format, which is the default. To use another output format, see [Output formats for Azure CLI commands](/cli/azure/format-output-azure-cli).
+> This example uses JSON as the Azure CLI output format, which is the default. To use another output format, see [Output formats for Azure CLI commands](/cli/azure/format-output-azure-cli).
```json
{
@@ -327,7 +327,7 @@ The Azure CLI outputs information similar to the following example.
### Export the connection string, MongoDB database and collection names as environment variables

```bash
-export MONGODB_CONNECTION_STRING="mongodb://<COSMOSDB_ACCOUNT_NAME>:<COSMOSDB_PASSWORD>@<COSMOSDB_ACCOUNT_NAME>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&maxIdleTimeMS=120000&appName=@<COSMOSDB_ACCOUNT_NAME>@"
+export MONGODB_CONNECTION_STRING="mongodb://<COSMOSDB_ACCOUNT_NAME>:<COSMOSDB_PASSWORD>@<COSMOSDB_ACCOUNT_NAME>.documents.azure.com:10255/?ssl=true&replicaSet=globaldb&maxIdleTimeMS=120000&appName=@<COSMOSDB_ACCOUNT_NAME>@"
```

> [!NOTE]
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-model-partition-example https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-model-partition-example.md
@@ -55,7 +55,7 @@ Here is the list of requests that our platform will have to expose:
At this stage, we haven't thought about the details of what each entity (user, post etc.) will contain. This step is usually among the first ones to be tackled when designing against a relational store, because we have to figure out how those entities will translate in terms of tables, columns, foreign keys etc. It is much less of a concern with a document database that doesn't enforce any schema at write.
-The main reason why it is important to identify our access patterns from the beginning, is because this list of requests is going to be our test suite. Every time we iterate over our data model, we will go through each of the requests and check its performance and scalability.
+The main reason it is important to identify our access patterns from the beginning is that this list of requests is going to be our test suite. Every time we iterate over our data model, we will go through each of the requests and check its performance and scalability. We calculate the request units consumed in each model and optimize them. All these models use the default indexing policy; you can override it by indexing specific properties, which can further improve RU consumption and latency.
## V1: A first version
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/use-python-notebook-features-and-commands https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/use-python-notebook-features-and-commands.md
@@ -122,6 +122,8 @@ Total RUs consumed : 25022.58
```

With the output statistics, you can calculate the effective RU/s used to upload the items. For example, if 25,000 RUs were consumed over 38 seconds, the effective RU/s is 25,000 RUs / 38 seconds = 658 RU/s.
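The arithmetic in that example can be expressed directly (the function name here is ours):

```python
def effective_rus_per_second(total_rus_consumed, elapsed_seconds):
    """Effective throughput used for an upload: total RUs over elapsed time."""
    return total_rus_consumed / elapsed_seconds

print(round(effective_rus_per_second(25_000, 38)))  # 658
```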
+You can save files (such as CSV or JSON files) to the local notebook workspace. We recommend that you add a cell in your notebook to save files. You can view these files from the integrated terminal in the notebook environment, for example with the `ls` command. However, these files are removed if you reset the workspace, so it's best to use persistent storage such as GitHub or a storage account instead of the local workspace.
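For example, a cell like the following (plain Python; the file name `results.csv` is arbitrary) writes a CSV file to the local workspace, where `ls` in the integrated terminal will list it:

```python
import csv

rows = [["id", "name"], [1, "alpha"], [2, "beta"]]
with open("results.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# Read it back to confirm the file landed in the local workspace.
with open("results.csv", newline="") as f:
    saved = list(csv.reader(f))
print(saved[0])  # ['id', 'name']
```

Remember that a file saved this way disappears if you reset the workspace, hence the recommendation to copy it to persistent storage.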
## Run another notebook in current notebook

You can use the ``%%run`` magic command to run another notebook in your workspace from your current notebook. Use the syntax:
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/grant-access-to-create-subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/grant-access-to-create-subscription.md
@@ -4,9 +4,9 @@ description: Learn how to give a user or service principal the ability to progra
author: bandersmsft ms.service: cost-management-billing ms.subservice: billing
-ms.reviewer: amberb
+ms.reviewer: andalmia
ms.topic: conceptual
-ms.date: 08/26/2020
+ms.date: 01/13/2021
ms.author: banders ---
@@ -14,6 +14,9 @@ ms.author: banders
As an Azure customer on [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/), you can give another user or service principal permission to create subscriptions billed to your account. In this article, you learn how to use [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md) to share the ability to create subscriptions, and how to audit subscription creations. You must have the Owner role on the account you wish to share.
+> [!NOTE]
+> This API only works with the [preview APIs for subscription creation](programmatically-create-subscription-preview.md). If you want to use the [GA version](programmatically-create-subscription-enterprise-agreement.md), use the latest API version at [2019-10-01-preview](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put). If you're migrating to use the newer APIs, you must grant owner permissions again by using [2019-10-01-preview](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put). Your previous configuration that uses the following APIs doesn't automatically convert for use with the newer APIs.
+
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]

## Grant access
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
@@ -5,7 +5,7 @@ author: bandersmsft
ms.service: cost-management-billing ms.subservice: billing ms.topic: how-to
-ms.date: 11/17/2020
+ms.date: 01/13/2021
ms.reviewer: andalmia ms.author: banders ms.custom: devx-track-azurepowershell, devx-track-azurecli
@@ -26,7 +26,9 @@ When you create an Azure subscription programmatically, that subscription is gov
You must have an Owner role on an Enrollment Account to create a subscription. There are two ways to get the role:

* The Enterprise Administrator of your enrollment can [make you an Account Owner](https://ea.azure.com/helpdocs/addNewAccount) (sign in required), which makes you an Owner of the Enrollment Account.
-* An existing Owner of the Enrollment Account can [grant you access](grant-access-to-create-subscription.md). Similarly, to use a service principal to create an EA subscription, you must [grant that service principal the ability to create subscriptions](grant-access-to-create-subscription.md).
+* An existing Owner of the Enrollment Account can [grant you access](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put). Similarly, to use a service principal to create an EA subscription, you must [grant that service principal the ability to create subscriptions](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put).
+ > [!NOTE]
+ > Ensure that you use the correct API version to give the enrollment account owner permissions. For this article and for the APIs documented in it, use the [2019-10-01-preview](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put) API. If you're migrating to use the newer APIs, you must grant owner permission again using [2019-10-01-preview](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put). Your previous configuration made with the [2015-07-01 version](grant-access-to-create-subscription.md) doesn't automatically convert for use with the newer APIs.
## Find accounts you have access to
databox https://docs.microsoft.com/en-us/azure/databox/data-box-customer-managed-encryption-key-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-customer-managed-encryption-key-portal.md
@@ -98,7 +98,7 @@ To enable a customer-managed key for your existing Data Box order in the Azure p
![Select an identity to use](./media/data-box-customer-managed-encryption-key-portal/customer-managed-key-14.png)
- You can't create a new user identity here. To find out how to create one, see [Create, list, delete or assign a role to a user-assigned managed identity using the Azure portal](/azure/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal).
+ You can't create a new user identity here. To find out how to create one, see [Create, list, delete, or assign a role to a user-assigned managed identity using the Azure portal](/azure/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal).
The selected user identity is shown in the **Encryption type** settings.
databox https://docs.microsoft.com/en-us/azure/databox/data-box-deploy-export-ordered https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-deploy-export-ordered.md
@@ -177,7 +177,7 @@ Perform the following steps in the Azure portal to order a device.
A user-assigned managed identity is a stand-alone Azure resource that can be used to manage multiple resources. For more information, see [Managed identity types](/azure/active-directory/managed-identities-azure-resources/overview).
- If you need to create a new managed identity, follow the guidance in [Create, list, delete or assign a role to a user-assigned managed identity using the Azure portal](../../articles/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md).
+ If you need to create a new managed identity, follow the guidance in [Create, list, delete, or assign a role to a user-assigned managed identity using the Azure portal](../../articles/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md).
![Select a user identity](./media/data-box-deploy-export-ordered/customer-managed-key-10.png)
databox https://docs.microsoft.com/en-us/azure/databox/data-box-deploy-ordered https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-deploy-ordered.md
@@ -332,7 +332,7 @@ Do the following steps in the Azure portal to order a device.
A user-assigned managed identity is a stand-alone Azure resource that can be used to manage multiple resources. For more information, see [Managed identity types](/azure/active-directory/managed-identities-azure-resources/overview).
- If you need to create a new managed identity, follow the guidance in [Create, list, delete or assign a role to a user-assigned managed identity using the Azure portal](/azure/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal).
+ If you need to create a new managed identity, follow the guidance in [Create, list, delete, or assign a role to a user-assigned managed identity using the Azure portal](/azure/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal).
![Select a user identity](./media/data-box-deploy-ordered/customer-managed-key-10.png)
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/ddos-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-faq.md
@@ -23,6 +23,13 @@ Distributed denial of service, or DDoS, is a type of attack where an attacker se
## What is Azure DDoS Protection Standard service? Azure DDoS Protection Standard, combined with application design best practices, provides enhanced DDoS mitigation features to defend against DDoS attacks. It is automatically tuned to help protect your specific Azure resources in a virtual network. Protection is simple to enable on any new or existing virtual network, and it requires no application or resource changes. It has several advantages over the basic service, including logging, alerting, and telemetry. See [Azure DDoS Protection Standard overview](ddos-protection-overview.md) for more details. 
+## How does pricing work?
+DDoS protection plans have a fixed charge of $2,944 per month, which covers up to 100 public IP addresses. Protection for additional resources costs an additional $30 per resource per month.
+
+Under a tenant, a single DDoS protection plan can be used across multiple subscriptions, so there is no need to create more than one DDoS protection plan.
+
+See [Azure DDoS Protection Standard pricing](https://azure.microsoft.com/pricing/details/ddos-protection/) for more details.
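The plan charge described above can be sketched as a quick cost estimate; a minimal illustration of the stated pricing, not an official calculator (see the pricing page for authoritative figures):

```python
def ddos_plan_monthly_cost(public_ips: int) -> float:
    """Estimate the monthly DDoS Protection Standard charge:
    a $2,944 base fee covers up to 100 public IP addresses,
    plus $30 per additional protected resource per month."""
    base_fee, included, overage = 2944.0, 100, 30.0
    extra = max(0, public_ips - included)
    return base_fee + extra * overage

print(ddos_plan_monthly_cost(80))   # within the included 100 IPs
print(ddos_plan_monthly_cost(120))  # 20 extra resources at $30 each
```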
+ ## What about protection at the service layer (layer 7)? Customers can use Azure DDoS Protection service in combination with a Web Application Firewall (WAF) for protection at both the network layer (Layer 3 and 4, offered by Azure DDoS Protection Standard) and the application layer (Layer 7, offered by a WAF). WAF offerings include Azure [Application Gateway WAF SKU](../web-application-firewall/ag/ag-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) as well as third-party web application firewall offerings available in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=web%20application%20firewall).
@@ -61,4 +68,4 @@ See [testing through simulations](test-through-simulations.md).
## How long does it take for the metrics to load on portal? The metrics should be visible on portal within 5 minutes. If your resource is under attack, other metrics will start showing up on portal within 5-7 minutes.
-
\ No newline at end of file
+
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/ddos-protection-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-protection-overview.md
@@ -41,6 +41,10 @@ Azure DDoS protection does not store customer data.
## Pricing
+DDoS protection plans have a fixed charge of $2,944 per month, which covers up to 100 public IP addresses. Protection for additional resources costs an additional $30 per resource per month.
+
+Under a tenant, a single DDoS protection plan can be used across multiple subscriptions, so there is no need to create more than one DDoS protection plan.
+ To learn about Azure DDoS Protection Standard pricing, see [Azure DDoS Protection Standard pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). ## Next steps
devtest-labs https://docs.microsoft.com/en-us/azure/devtest-labs/use-managed-identities-environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/use-managed-identities-environments.md
@@ -14,7 +14,7 @@ As a lab owner, you can use a managed identity to deploy environments in a lab.
## Prerequisites -- [Create, list, delete or assign a role to a user-assigned managed identity using the Azure portal](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md).
+- [Create, list, delete, or assign a role to a user-assigned managed identity using the Azure portal](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md).
Make sure your managed identity was created in the same region and subscription as your lab. The managed identity does not need to be in the same resource group.
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-manage-model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-model.md
@@ -8,6 +8,7 @@ ms.author: baanders # Microsoft employees only
ms.date: 3/12/2020 ms.topic: how-to ms.service: digital-twins
+ms.custom: contperf-fy21q3
# Optional fields. Don't forget to remove # if you need a field. # ms.custom: can-be-multiple-comma-separated
@@ -54,9 +55,6 @@ Following this method, you can go on to define models for the hospital's wards,
Once models are created, you can upload them to the Azure Digital Twins instance.
-> [!TIP]
-> It's recommended to validate your models offline before uploading them to your Azure Digital Twins instance. You can use the [DTDL client-side parser library](https://nuget.org/packages/Microsoft.Azure.DigitalTwins.Parser/) and [DTDL Validator sample](/samples/azure-samples/dtdl-validator/dtdl-validator) described in [*How-to: Parse and validate models*](how-to-parse-models.md) to check your models before you upload them to the service.
- When you're ready to upload a model, you can use the following code snippet: :::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/model_operations.cs" id="CreateModel":::
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/troubleshoot-known-issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/troubleshoot-known-issues.md
@@ -6,7 +6,7 @@ ms.author: baanders
ms.topic: troubleshooting ms.service: digital-twins ms.date: 07/14/2020
-ms.custom: contperf-fy21q3
+ms.custom: contperf-fy21q2
--- # Known issues in Azure Digital Twins
event-grid https://docs.microsoft.com/en-us/azure/event-grid/event-schema-iot-hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/event-schema-iot-hub.md
@@ -2,7 +2,7 @@
title: Azure IoT Hub as Event Grid source description: This article provides the properties and schema for Azure IoT Hub events. It lists the available event types, an example event, and event properties. ms.topic: conceptual
-ms.date: 07/07/2020
+ms.date: 01/13/2021
--- # Azure IoT Hub as an Event Grid source
@@ -22,8 +22,6 @@ Azure IoT Hub emits the following event types:
| Microsoft.Devices.DeviceDisconnected | Published when a device is disconnected from an IoT hub. | | Microsoft.Devices.DeviceTelemetry | Published when a telemetry message is sent to an IoT hub. |
-All device events except device telemetry events are generally available in all regions supported by Event Grid. Device telemetry event is in public preview and is available in all regions except East US, West US, West Europe, [Azure Government](../azure-government/documentation-government-welcome.md), [Azure China 21Vianet](/azure/china/china-welcome), and [Azure Germany](https://azure.microsoft.com/global-infrastructure/germany/).
- ### Example event The schema for DeviceConnected and DeviceDisconnected events have the same structure. This sample event shows the schema of an event raised when a device is connected to an IoT hub:
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-metrics-azure-monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-metrics-azure-monitor.md
@@ -15,7 +15,7 @@ Azure Monitor provides unified user interfaces for monitoring across various Azu
Azure Monitor provides multiple ways to access metrics. You can either access metrics through the [Azure portal](https://portal.azure.com), or use the Azure Monitor APIs (REST and .NET) and analysis solutions such as Log Analytics and Event Hubs. For more information, see [Monitoring data collected by Azure Monitor](../azure-monitor/platform/data-platform.md).
-Metrics are enabled by default, and you can access the most recent 30 days of data. If you need to retain data for a longer period of time, you can archive metrics data to an Azure Storage account. This is configured in [diagnostic settings](../azure-monitor/platform/diagnostic-settings.md) in Azure Monitor.
+Metrics are enabled by default, and you can access the most recent 30 days of data. If you need to keep data for a longer period of time, you can archive metrics data to an Azure Storage account. This setting can be configured in [diagnostic settings](../azure-monitor/platform/diagnostic-settings.md) in Azure Monitor.
## Access metrics in the portal
@@ -24,7 +24,7 @@ You can monitor metrics over time in the [Azure portal](https://portal.azure.com
![View successful metrics][1]
-You can also access metrics directly via the namespace. To do so, select your namespace and then click **Metrics**. To display metrics filtered to the scope of the event hub, select the event hub and then click **Metrics**.
+You can also access metrics directly via the namespace. To do so, select your namespace and then select **Metrics**. To display metrics filtered to the scope of the event hub, select the event hub and then select **Metrics**.
For metrics supporting dimensions, you must filter with the desired dimension value as shown in the following example:
@@ -32,7 +32,7 @@ For metrics supporting dimensions, you must filter with the desired dimension va
## Billing
-Using metrics in Azure Monitor is currently free. However, if you use additional solutions that ingest metrics data, you may be billed by these solutions. For example, you are billed by Azure Storage if you archive metrics data to an Azure Storage account. You are also billed by Azure if you stream metrics data to Azure Monitor logs for advanced analysis.
+Using metrics in Azure Monitor is currently free. However, if you use other solutions that ingest metrics data, you may be billed by these solutions. For example, you are billed by Azure Storage if you archive metrics data to an Azure Storage account. You are also billed by Azure if you stream metrics data to Azure Monitor logs for advanced analysis.
The following metrics give you an overview of the health of your service.
@@ -44,8 +44,11 @@ All metrics values are sent to Azure Monitor every minute. The time granularity
## Azure Event Hubs metrics For a list of metrics supported by the service, see [Azure Event Hubs](../azure-monitor/platform/metrics-supported.md#microsofteventhubnamespaces)
+> [!NOTE]
+> When a user error occurs, Azure Event Hubs updates the **User Errors** metric, but doesn't log any other diagnostic information. Therefore, you need to capture details on user errors in your applications. Alternatively, you can send the telemetry generated when messages are sent or received to Application Insights. For an example, see [Tracking with Application Insights](../service-bus-messaging/service-bus-end-to-end-tracing.md#tracking-with-azure-application-insights).
+ ## Azure Monitor integration with SIEM tools
-Routing your monitoring data (activity logs, diagnostics logs, etc.) to an event hub with Azure Monitor enables you to easily integrate with Security Information and Event Management (SIEM) tools. For more information, see the following articles/blog posts:
+Routing your monitoring data (activity logs, diagnostic logs, and so on) to an event hub with Azure Monitor enables you to easily integrate with Security Information and Event Management (SIEM) tools. For more information, see the following articles and blog posts:
- [Stream Azure monitoring data to an event hub for consumption by an external tool](../azure-monitor/platform/stream-monitoring-data-event-hubs.md) - [Introduction to Azure Log Integration](/previous-versions/azure/security/fundamentals/azure-log-integration-overview)
@@ -53,8 +56,8 @@ Routing your monitoring data (activity logs, diagnostics logs, etc.) to an event
In the scenario where an SIEM tool consumes log data from an event hub, if you see no incoming messages or you see incoming messages but no outgoing messages in the metrics graph, follow these steps: -- If there are **no incoming messages**, it means that the Azure Monitor service is not moving audit/diagnostics logs into the event hub. Open a support ticket with the Azure Monitor team in this scenario. -- if there are incoming messages, but **no outgoing messages**, it means that the SIEM application is not reading the messages. Contact the SIEM provider to determine whether the configuration of the event hub those applications is correct.
+- If there are **no incoming messages**, it means that the Azure Monitor service isn't moving audit/diagnostics logs into the event hub. Open a support ticket with the Azure Monitor team in this scenario.
+- If there are incoming messages, but **no outgoing messages**, it means that the SIEM application isn't reading the messages. Contact the SIEM provider to determine whether those applications are configured correctly to read from the event hub.
## Next steps
expressroute https://docs.microsoft.com/en-us/azure/expressroute/about-fastpath https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/about-fastpath.md
@@ -29,7 +29,7 @@ To configure FastPath, the virtual network gateway must be either:
* Ultra Performance * ErGw3AZ
-## Supported features
+## Limitations
While FastPath supports most configurations, it does not support the following features:
@@ -43,4 +43,4 @@ While FastPath supports most configurations, it does not support the following f
## Next steps
-To enable FastPath, see [Link a virtual network to ExpressRoute](expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath).
\ No newline at end of file
+To enable FastPath, see [Link a virtual network to ExpressRoute](expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath).
guides https://docs.microsoft.com/en-us/azure/guides/developer/azure-developer-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/guides/developer/azure-developer-guide.md
@@ -103,6 +103,21 @@ Service Fabric supports WebAPI with Open Web Interface for .NET (OWIN) and ASP.N
> > **Get started:** [Create your first Azure Service Fabric application](../../service-fabric/service-fabric-tutorial-create-dotnet-app.md).
+#### Azure Spring Cloud
+
+Azure Spring Cloud is a serverless microservices platform that enables you to build, deploy, scale, and monitor your applications in the cloud. Use Spring Cloud to bring modern microservice patterns to Spring Boot apps, eliminating boilerplate code so you can quickly build robust Java apps.
+
+- Leverage managed versions of Spring Cloud Service Discovery and Config Server, while we ensure those critical components are running in optimum conditions.
+- Focus on building your business logic and we will take care of your service runtime with security patches, compliance standards and high availability.
+- Manage the application lifecycle (for example, deploy, start, stop, and scale) on top of Azure Kubernetes Service.
+- Easily bind connections between your apps and Azure services such as Azure Database for MySQL and Azure Cache for Redis.
+- Monitor and troubleshoot microservices and applications using enterprise-grade unified monitoring tools that offer deep insights on application dependencies and operational telemetry.
+
+> **When to use:** As a fully managed service, Azure Spring Cloud is a good choice when you want to minimize the operational cost of running Spring Boot/Spring Cloud-based microservices on Azure.
+>
+> **Get started:** [Deploy your first Azure Spring Cloud application](../../spring-cloud/spring-cloud-quickstart.md).
++ ### Enhance your applications with Azure services Along with application hosting, Azure provides service offerings that can enhance the functionality. Azure can also improve the development and maintenance of your applications, both in the cloud and on-premises.
@@ -333,4 +348,4 @@ Azure provides a set of Billing REST APIs that give access to resource consumpti
Although it's challenging to estimate costs ahead of time, Azure has tools that can help. It has a [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to help estimate the cost of deployed resources. You can also use the Billing resources in the portal and the Billing REST APIs to estimate future costs, based on current consumption.
->**Get started**: See [Azure Billing Usage and RateCard APIs overview](../../cost-management-billing/manage/usage-rate-card-overview.md).
\ No newline at end of file
+>**Get started**: See [Azure Billing Usage and RateCard APIs overview](../../cost-management-billing/manage/usage-rate-card-overview.md).
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-manage-ambari https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-hadoop-manage-ambari.md
@@ -7,7 +7,7 @@ ms.reviewer: jasonh
ms.service: hdinsight ms.topic: how-to ms.custom: hdinsightactive,seoapr2020
-ms.date: 04/16/2020
+ms.date: 01/12/2021
--- # Manage HDInsight clusters by using the Apache Ambari Web UI
@@ -134,7 +134,7 @@ Selecting any of these links opens a new tab in your browser, which displays the
Working with users, groups, and permissions is supported. For local administration, see [Authorize users for Apache Ambari Views](./hdinsight-authorize-users-to-ambari.md). For domain-joined clusters, see [Manage domain-joined HDInsight clusters](./domain-joined/hdinsight-security-overview.md). > [!WARNING]
-> Do not change the password of the Ambari watchdog (hdinsightwatchdog) on your Linux-based HDInsight cluster. Changing the password breaks the ability to use script actions or perform scaling operations with your cluster.
+> Do not delete or change the password of the Ambari watchdog (hdinsightwatchdog) on your Linux-based HDInsight cluster. Changing the password breaks the ability to use script actions or perform scaling operations with your cluster.
### Hosts
@@ -215,4 +215,4 @@ The following Ambari operations aren't supported on HDInsight:
* [Apache Ambari REST API](hdinsight-hadoop-manage-ambari-rest-api.md) with HDInsight. * [Use Apache Ambari to optimize HDInsight cluster configurations](./hdinsight-changing-configs-via-ambari.md)
-* [Scale Azure HDInsight clusters](./hdinsight-scaling-best-practices.md)
\ No newline at end of file
+* [Scale Azure HDInsight clusters](./hdinsight-scaling-best-practices.md)
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/tutorial-nested-iot-edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-nested-iot-edge.md
@@ -59,6 +59,13 @@ To create a hierarchy of IoT Edge devices, you will need:
--admin-password <REPLACE_WITH_PASSWORD> ```
+* Make sure that the following ports are open inbound:
+ * 8000: Used to pull Docker container images through the API proxy.
+ * 443: Used between parent and child edge hubs for REST API calls.
+ * 5671, 8883: Used for AMQP and MQTT.
+
+ For more information, see [How to open ports to a virtual machine with the Azure portal](../virtual-machines/windows/nsg-quickstart-portal.md).
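A quick way to confirm that connectivity from a child device is a short TCP probe of each required port; a minimal sketch, where the parent hostname is a placeholder you'd replace with your parent device's FQDN or IP address:

```python
import socket

PARENT = "parent-gateway.example.com"  # placeholder: your parent device's FQDN or IP

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe the ports the hierarchy relies on (API proxy, REST, AMQP, MQTT).
for port in (8000, 443, 5671, 8883):
    print(port, "open" if port_open(PARENT, port) else "closed")
```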
+ You can also try out this scenario by following the scripted [Azure IoT Edge for Industrial IoT sample](https://aka.ms/iotedge-nested-sample), which deploys Azure virtual machines as preconfigured devices to simulate a factory environment. ## Configure your IoT Edge device hierarchy
@@ -177,6 +184,39 @@ Each device needs a copy of the root CA certificate and a copy of its own device
Install IoT Edge by following these steps on both devices.
+1. Install the repository configuration that matches your device operating system.
+
+ * **Ubuntu Server 16.04**:
+
+ ```bash
+ curl https://packages.microsoft.com/config/ubuntu/16.04/multiarch/prod.list > ./microsoft-prod.list
+ ```
+
+ * **Ubuntu Server 18.04**:
+
+ ```bash
+ curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
+ ```
+
+ * **Raspberry Pi OS Stretch**:
+
+ ```bash
+ curl https://packages.microsoft.com/config/debian/stretch/multiarch/prod.list > ./microsoft-prod.list
+ ```
+
+1. Copy the generated list to the sources.list.d directory.
+
+ ```bash
+ sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
+ ```
+
+1. Install the Microsoft GPG public key.
+
+ ```bash
+ curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
+ sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/
+ ```
+
1. Update package lists on your device. ```bash
@@ -192,7 +232,7 @@ Install IoT Edge by following these steps on both devices.
1. Install the hsmlib and IoT Edge daemon. To see the assets for other Linux distributions, [visit the GitHub release](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0-rc1). <!-- Update with proper image links on release --> ```bash
- curl -L https://github.com/Azure/azure-iotedge/releases/download/1.2.0-rc1/libiothsm-std_1.2.0.rc1-1-1_debian9_amd64.deb -o libiothsm-std.deb
+ curl -L https://github.com/Azure/azure-iotedge/releases/download/1.2.0-rc1/libiothsm-std_1.2.0_rc1-1-1_debian9_amd64.deb -o libiothsm-std.deb
curl -L https://github.com/Azure/azure-iotedge/releases/download/1.2.0-rc1/iotedge_1.2.0_rc1-1_debian9_amd64.deb -o iotedge.deb sudo dpkg -i ./libiothsm-std.deb sudo dpkg -i ./iotedge.deb
@@ -589,6 +629,28 @@ Notice that the image URI that we used for the simulated temperature sensor modu
On the device details page for your lower layer IoT Edge device, you should now see the temperature sensor module listed along the system modules as **Specified in deployment**. It may take a few minutes for the device to receive its new deployment, request the container image, and start the module. Refresh the page until you see the temperature sensor module listed as **Reported by device**.
+## IoT Edge check
+
+Run the `iotedge check` command to verify the configuration and to troubleshoot errors.
+
+You can run `iotedge check` in a nested hierarchy, even if the child machines don't have direct internet access.
+
+When you run `iotedge check` from the lower layer, the program tries to pull the image from the parent through port 443.
+
+In this tutorial, we use port 8000, so we need to specify it:
+
+```bash
+sudo iotedge check --diagnostics-image-name <parent_device_fqdn_or_ip>:8000/azureiotedge-diagnostics:1.2.0-rc2
+```
+
+The `azureiotedge-diagnostics` value is pulled from the container registry that's linked with the registry module. This tutorial has it set by default to https://mcr.microsoft.com:
+
+| Name | Value |
+| - | - |
+| `REGISTRY_PROXY_REMOTEURL` | `https://mcr.microsoft.com` |
+
+If you're using a private container registry, make sure that all the images (for example, IoTEdgeAPIProxy, edgeAgent, edgeHub, and diagnostics) are present in the container registry.
+
## View generated data The **Simulated Temperature Sensor** module that you pushed generates sample environment data. It sends messages that include ambient temperature and humidity, machine temperature and pressure, and a timestamp.
@@ -620,4 +682,4 @@ In this tutorial, you configured two IoT Edge devices as gateways and set one as
To see how Azure IoT Edge can create more solutions for your business, continue on to the other tutorials. > [!div class="nextstepaction"]
-> [Deploy an Azure Machine Learning model as a module](tutorial-deploy-machine-learning.md)
\ No newline at end of file
+> [Deploy an Azure Machine Learning model as a module](tutorial-deploy-machine-learning.md)
lab-services https://docs.microsoft.com/en-us/azure/lab-services/get-started-manage-labs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/get-started-manage-labs.md
@@ -104,56 +104,7 @@ Teachers are able to connect to a student VM as long as it is turned on, and the
## Manage users in a lab
-Teachers are able to add student users to a lab and monitor their hour quotas.
-
-### Add users by email address
-
-1. From the [Azure Lab services website](https://labs.azure.com/) click **Users** from the left-hand side of the window.
-1. At the top of the window, click on **Add users** and select **Add by email address**.
-1. In the **Add users** pane that appears on the right, enter the students' email addresses on separate lines or on a single line, separated by semicolons.
-1. Click **Save**.
-1. Your list of users will now be updated with emails, status, invitation, and quota hours.
-
- After students are registered for a lab, their names will be updated with first and last names from Azure Active Directory.
-
- > [!NOTE]
- > Keep the Restrict access option toggle is turned on for users. This means that only users that you list can register with the lab by using the registration link you send.
-
-### Add users using a spreadsheet
-
-You can also add users by uploading a CSV file that contains their email addresses.
-
-1. In Microsoft Excel, create a CSV file that lists students' email addresses in one column.
-1. From the [Azure Lab Services website](https://labs.azure.com/), at the top of the **Users** page, click the **Add Users** button.
-1. Select **Upload CSV**.
-1. Select the CSV file that contains the students' email addresses and click **Open**.
-
- :::image type="content" source="./media/get-started-manage-labs/add-users-spreadsheet.png" alt-text="Add users using a spreadsheet":::
-1. The emails will now appear in the window on the right. Click **Save**.
-
- :::image type="content" source="./media/get-started-manage-labs/register-users.png" alt-text="Register users":::
-
-### Register users
-
-Once users have been added to the lab, they will need to register in order to access the VMs. This can be done by either inviting users from the portal, which will send an email containing the registration link for the lab. Or by copying and pasting the registration link into an email, or other form of communication with the students.
-
-1. From the **Users** page, select a student or multiple students in the list.
-
- In the row for the student you've selected, select the envelope icon in the list or, clicking **Invite** at the top of the screen.
-
- :::image type="content" source="./media/get-started-manage-labs/send-invitation.png" alt-text="Send an invitation":::
-
- In the **Send invitation** by email window, enter an optional message (like a username and password) to students, and then click **Send**.
-
- :::image type="content" source="./media/get-started-manage-labs/send-invitation-mail.png" alt-text="Send an invitation by mail":::
-
- Alternatively, from the same **Users** page, you can click the **Registration link** button at the top of the screen.
-
- :::image type="content" source="./media/get-started-manage-labs/registration-link.png" alt-text="User registration link":::
-
- Copy the registration link from the text field and paste it into email or your preferred secure messaging tool.
-
- :::image type="content" source="./media/get-started-manage-labs/user-registration.png" alt-text="Send user registration":::
+Teachers are able to add student users to a lab and monitor their hour quotas. For details on how to add users by email address or by uploading a spreadsheet, and how to register users, see [Add and manage lab users](how-to-configure-student-usage.md).
After you have invited users or shared the link, you will be able to monitor which users have registered successfully in the **Users** page in the **Status** column.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/concept-ml-pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-ml-pipelines.md
@@ -1,44 +1,33 @@
---
-title: 'What are Azure Machine Learning Pipelines'
+title: 'What are machine learning pipelines?'
titleSuffix: Azure Machine Learning
-description: Learn how machine learning (ML) pipelines help you build, optimize, and manage machine learning workflows.
+description: Learn how machine learning pipelines help you build, optimize, and manage machine learning workflows.
services: machine-learning ms.service: machine-learning ms.subservice: core ms.topic: conceptual ms.author: laobri author: lobrien
-ms.date: 01/11/2021
+ms.date: 01/12/2021
ms.custom: devx-track-python --- # What are Azure Machine Learning pipelines?
-In this article, you learn how Azure Machine Learning pipelines help you build, optimize, and manage machine learning workflows. These workflows have a number of benefits:
-
-+ Simplicity
-+ Speed
-+ Repeatability
-+ Flexibility
-+ Versioning and tracking
-+ Modularity
-+ Quality assurance
-+ Cost control
-
-These benefits become significant as soon as your machine learning project moves beyond pure exploration and into iteration. Even simple one-step pipelines can be valuable. Machine learning projects are often in a complex state, and it can be a relief to make the precise accomplishment of a single workflow a trivial process.
+In this article, you learn how a machine learning pipeline helps you build, optimize, and manage your machine learning workflow.
<a name="compare"></a>
-### Which Azure pipeline technology should I use?
+## Which Azure pipeline technology should I use?
-The Azure cloud provides several other pipelines, each with a different purpose. The following table lists the different pipelines and what they are used for:
+The Azure cloud provides several types of pipeline, each with a different purpose. The following table lists the different pipelines and what they are used for:
| Scenario | Primary persona | Azure offering | OSS offering | Canonical pipe | Strengths | | -------- | --------------- | -------------- | ------------ | -------------- | --------- | | Model orchestration (Machine learning) | Data scientist | Azure Machine Learning Pipelines | Kubeflow Pipelines | Data -> Model | Distribution, caching, code-first, reuse |
-| Data orchestration (Data prep) | Data engineer | [Azure Data Factory pipelines](../data-factory/concepts-pipelines-activities.md) | Apache Airflow | Data -> Data | Strongly-typed movement, data-centric activities |
+| Data orchestration (Data prep) | Data engineer | [Azure Data Factory pipelines](../data-factory/concepts-pipelines-activities.md) | Apache Airflow | Data -> Data | Strongly typed movement, data-centric activities |
| Code & app orchestration (CI/CD) | App Developer / Ops | [Azure Pipelines](https://azure.microsoft.com/services/devops/pipelines/) | Jenkins | Code + Model -> App/Service | Most open and flexible activity support, approval queues, phases with gating |
-## What can Azure ML pipelines do?
+## What can machine learning pipelines do?
An Azure Machine Learning pipeline is an independently executable workflow of a complete machine learning task. Subtasks are encapsulated as a series of steps within the pipeline. An Azure Machine Learning pipeline can be as simple as one that calls a Python script, so it _may_ do just about anything. Pipelines _should_ focus on machine learning tasks such as:
@@ -59,9 +48,9 @@ In short, all of the complex tasks of the machine learning lifecycle can be help
### Analyzing dependencies
-Many programming ecosystems have tools that orchestrate resource, library, or compilation dependencies. Generally, these tools use file timestamps to calculate dependencies. When a file is changed, only it and its dependents are updated (downloaded, recompiled, or packaged). Azure ML pipelines extend this concept. Like traditional build tools, pipelines calculate dependencies between steps and only perform the necessary recalculations.
+Many programming ecosystems have tools that orchestrate resource, library, or compilation dependencies. Generally, these tools use file timestamps to calculate dependencies. When a file is changed, only it and its dependents are updated (downloaded, recompiled, or packaged). Azure Machine Learning pipelines extend this concept. Like traditional build tools, pipelines calculate dependencies between steps and only perform the necessary recalculations.
-The dependency analysis in Azure ML pipelines is more sophisticated than simple timestamps though. Every step may run in a different hardware and software environment. Data preparation might be a time-consuming process but not need to run on hardware with powerful GPUs, certain steps might require OS-specific software, you might want to use distributed training, and so forth.
+The dependency analysis in Azure Machine Learning pipelines is more sophisticated than simple timestamps though. Every step may run in a different hardware and software environment. Data preparation might be a time-consuming process but not need to run on hardware with powerful GPUs, certain steps might require OS-specific software, you might want to use distributed training, and so forth.
Azure Machine Learning automatically orchestrates all of the dependencies between pipeline steps. This orchestration might include spinning up and down Docker images, attaching and detaching compute resources, and moving data between the steps in a consistent and automatic manner.
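As an illustration of the concept, a build-tool-style dependency check can be sketched in plain Python. This is not Azure Machine Learning SDK code; the step names and content-hash scheme below are invented for the sketch:

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Fingerprint an input; a changed fingerprint means a stale step."""
    return hashlib.sha256(data).hexdigest()

def steps_to_rerun(steps, previous_hashes):
    """Return names of steps whose inputs changed since the last run."""
    stale = []
    for name, inputs in steps.items():
        current = content_hash(b"".join(inputs))
        if previous_hashes.get(name) != current:
            stale.append(name)
    return stale

# Two hypothetical steps: "prepare" has new input data, "train" does not.
steps = {
    "prepare": [b"raw data v2"],
    "train":   [b"features v1"],
}
previous = {
    "prepare": content_hash(b"raw data v1"),
    "train":   content_hash(b"features v1"),
}

print(steps_to_rerun(steps, previous))  # only "prepare" needs recalculation
```

A real pipeline run tracks far more than input bytes (environments, compute targets, step parameters), but the recalculate-only-what-changed idea is the same.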
@@ -87,7 +76,7 @@ When you create and run a `Pipeline` object, the following high-level steps occu
In the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/install?preserve-view=true&view=azure-ml-py), a pipeline is a Python object defined in the `azureml.pipeline.core` module. A [Pipeline](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline%28class%29?preserve-view=true&view=azure-ml-py) object contains an ordered sequence of one or more [PipelineStep](/python/api/azureml-pipeline-core/azureml.pipeline.core.builder.pipelinestep?preserve-view=true&view=azure-ml-py) objects. The `PipelineStep` class is abstract and the actual steps will be of subclasses such as [EstimatorStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.estimatorstep?preserve-view=true&view=azure-ml-py), [PythonScriptStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.pythonscriptstep?preserve-view=true&view=azure-ml-py), or [DataTransferStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.datatransferstep?preserve-view=true&view=azure-ml-py). The [ModuleStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.modulestep?preserve-view=true&view=azure-ml-py) class holds a reusable sequence of steps that can be shared among pipelines. A `Pipeline` runs as part of an `Experiment`.
-An Azure ML pipeline is associated with an Azure Machine Learning workspace and a pipeline step is associated with a compute target available within that workspace. For more information, see [Create and manage Azure Machine Learning workspaces in the Azure portal](./how-to-manage-workspace.md) or [What are compute targets in Azure Machine Learning?](./concept-compute-target.md).
+An Azure Machine Learning pipeline is associated with an Azure Machine Learning workspace and a pipeline step is associated with a compute target available within that workspace. For more information, see [Create and manage Azure Machine Learning workspaces in the Azure portal](./how-to-manage-workspace.md) or [What are compute targets in Azure Machine Learning?](./concept-compute-target.md).
### A simple Python Pipeline
@@ -124,7 +113,7 @@ pipeline_run = experiment.submit(pipeline)
pipeline_run.wait_for_completion() ```
-The snippet starts with common Azure Machine Learning objects, a `Workspace`, a `Datastore`, a [ComputeTarget](/python/api/azureml-core/azureml.core.computetarget?preserve-view=true&view=azure-ml-py), and an `Experiment`. Then, the code creates the objects to hold `input_data` and `output_data`. The `input_data` is an instance of [FileDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.filedataset?view=azure-ml-py&preserve-view=true) and the `output_data` is an instance of [OutputFileDatasetConfig](https://docs.microsoft.com/python/api/azureml-core/azureml.data.output_dataset_config.outputfiledatasetconfig?view=azure-ml-py&preserve-view=true). For `OutputFileDatasetConfig` the default behavior is to copy the output to the `workspaceblobstore` datastore under the path `/dataset/{run-id}/{output-name}`, where `run-id` is the Run's ID and `output-name` is an auto-generated value if not specified by the developer.
+The snippet starts with common Azure Machine Learning objects, a `Workspace`, a `Datastore`, a [ComputeTarget](/python/api/azureml-core/azureml.core.computetarget?preserve-view=true&view=azure-ml-py), and an `Experiment`. Then, the code creates the objects to hold `input_data` and `output_data`. The `input_data` is an instance of [FileDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.filedataset?view=azure-ml-py&preserve-view=true) and the `output_data` is an instance of [OutputFileDatasetConfig](https://docs.microsoft.com/python/api/azureml-core/azureml.data.output_dataset_config.outputfiledatasetconfig?view=azure-ml-py&preserve-view=true). For `OutputFileDatasetConfig` the default behavior is to copy the output to the `workspaceblobstore` datastore under the path `/dataset/{run-id}/{output-name}`, where `run-id` is the Run's ID and `output-name` is an autogenerated value if not specified by the developer.
The array `steps` holds a single element, a `PythonScriptStep` that will use the data objects and run on the `compute_target`. Then, the code instantiates the `Pipeline` object itself, passing in the workspace and steps array. The call to `experiment.submit(pipeline)` begins the Azure ML pipeline run. The call to `wait_for_completion()` blocks until the pipeline is finished.
@@ -153,8 +142,7 @@ The key advantages of using pipelines for your machine learning workflows are:
## Next steps
-Azure ML pipelines are a powerful facility that begins delivering value in the early development stages. The value increases as the team and project grows. This article has explained how pipelines are specified with the Azure Machine Learning Python SDK and orchestrated on Azure. You've seen some simple source code and been introduced to a few of the `PipelineStep` classes that are available. You should have a sense of when to use Azure ML pipelines and how Azure runs them.
-
+Azure Machine Learning pipelines are a powerful facility that begins delivering value in the early development stages. The value increases as the team and project grow. This article has explained how pipelines are specified with the Azure Machine Learning Python SDK and orchestrated on Azure. You've seen some simple source code and been introduced to a few of the `PipelineStep` classes that are available. You should have a sense of when to use Azure Machine Learning pipelines and how Azure runs them.
+ Learn how to [create your first pipeline](how-to-create-your-first-pipeline.md).
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-configure-cross-validation-data-splits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-cross-validation-data-splits.md
@@ -1,7 +1,7 @@
---
-title: Configure cross-validation and data splits in automated machine learning experiments
+title: Data splits and cross-validation in automated machine learning
titleSuffix: Azure Machine Learning
-description: Learn how to configure cross-validation and dataset splits for automated machine learning experiments
+description: Learn how to configure dataset splits and cross-validation for automated machine learning experiments
services: machine-learning ms.service: machine-learning ms.subservice: core
@@ -16,11 +16,11 @@ ms.date: 06/16/2020
# Configure data splits and cross-validation in automated machine learning
-In this article, you learn the different options for configuring training/validation data splits and cross-validation for your automated machine learning, AutoML, experiments.
+In this article, you learn the different options for configuring training/validation data splits and cross-validation for your automated machine learning (automated ML) experiments.
-In Azure Machine Learning, when you use AutoML to build multiple ML models, each child run needs to validate the related model by calculating the quality metrics for that model, such as accuracy or AUC weighted. These metrics are calculated by comparing the predictions made with each model with real labels from past observations in the validation data.
+In Azure Machine Learning, when you use automated ML to build multiple ML models, each child run needs to validate the related model by calculating the quality metrics for that model, such as accuracy or AUC weighted. These metrics are calculated by comparing the predictions made with each model against real labels from past observations in the validation data.
-AutoML experiments perform model validation automatically. The following sections describe how you can further customize validation settings with the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/?preserve-view=true&view=azure-ml-py).
+Automated ML experiments perform model validation automatically. The following sections describe how you can further customize validation settings with the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/?preserve-view=true&view=azure-ml-py).
For a low-code or no-code experience, see [Create your automated machine learning experiments in Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md).
@@ -35,13 +35,13 @@ For this article you need,
* Familiarity with setting up an automated machine learning experiment with the Azure Machine Learning SDK. Follow the [tutorial](tutorial-auto-train-models.md) or [how-to](how-to-configure-auto-train.md) to see the fundamental automated machine learning experiment design patterns.
-* An understanding of cross-validation and train/validation data splits as ML concepts. For a high-level explanation,
+* An understanding of train/validation data splits and cross-validation as machine learning concepts. For a high-level explanation,
* [About Train, Validation and Test Sets in Machine Learning](https://towardsdatascience.com/train-validation-and-test-sets-72cb40cba9e7)
- * [Understanding Cross Validation](https://towardsdatascience.com/understanding-cross-validation-419dbd47e9bd)
+ * [Understand Cross Validation in machine learning](https://towardsdatascience.com/understanding-cross-validation-419dbd47e9bd)
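As a quick refresher on the k-fold concept referenced above, the rotating train/validation split can be sketched in plain Python. This is not AutoML SDK code; the fold logic below is a simplified illustration:

```python
def k_fold_indices(n_rows, k=5):
    """Yield (train_indices, validation_indices) for each of k folds.

    Each fold holds out a different 1/k slice for validation and
    trains on the remaining (k-1)/k of the rows.
    """
    fold_size = n_rows // k
    indices = list(range(n_rows))
    for fold in range(k):
        start, stop = fold * fold_size, (fold + 1) * fold_size
        validation = indices[start:stop]
        train = indices[:start] + indices[stop:]
        yield train, validation

# With 100 rows and k=5: each training uses 80 rows, each validation 20.
fold_metrics = []
for train, validation in k_fold_indices(n_rows=100, k=5):
    assert len(train) == 80 and len(validation) == 20
    fold_metrics.append(0.9)  # placeholder for a real per-fold metric

# The reported metric is the average across the k validation folds.
average_metric = sum(fold_metrics) / len(fold_metrics)
```

AutoML shuffles and stratifies in ways this sketch ignores; it only shows how the holdout slice rotates and how per-fold metrics are averaged.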
-## Default data splits and cross-validation
+## Default data splits and cross-validation
Use the [AutoMLConfig](/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?preserve-view=true&view=azure-ml-py) object to define your experiment and training settings. In the following code snippet, notice that only the required parameters are defined, that is, the parameters for `n_cross_validations` or `validation_data` are **not** included.
@@ -113,7 +113,7 @@ To perform cross-validation, include the `n_cross_validations` parameter and set
In the following code, five folds for cross-validation are defined. Hence, five different trainings occur; each training uses 4/5 of the data, and each validation uses 1/5 of the data with a different holdout fold each time.
-As a result, metrics are calculated with the average of the 5 validation metrics.
+As a result, metrics are calculated with the average of the five validation metrics.
```python data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv"
@@ -131,7 +131,7 @@ automl_config = AutoMLConfig(compute_target = aml_remote_compute,
## Specify custom cross-validation data folds
-You can also provide your own cross-validation (CV) data folds. This is considered a more advanced scenario because you are specifying which columns to split and use for validation. Include custom CV split columns in your training data, and specify which columns by populating the column names in the `cv_split_column_names` parameter. Each column represents one cross-validation split, and is filled with integer values 1 or 0 --where 1 indicates the row should be used for training and 0 indicates the row should be used for validation.
+You can also provide your own cross-validation (CV) data folds. This is considered a more advanced scenario because you are specifying which columns to split and use for validation. Include custom CV split columns in your training data, and specify which columns by populating the column names in the `cv_split_column_names` parameter. Each column represents one cross-validation split, and is filled with integer values 1 or 0, where 1 indicates the row should be used for training and 0 indicates the row should be used for validation.
The following code snippet contains bank marketing data with two CV split columns 'cv1' and 'cv2'.
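The bank marketing snippet itself is not reproduced here. As a plain-Python illustration of the 1/0 split-column convention (the rows and column names below are invented, and this is not AutoML SDK code):

```python
# Each CV split column ('cv1', 'cv2') defines one train/validation split:
# 1 marks a training row, 0 marks a validation row for that split.
rows = [
    {"age": 30, "y": 1, "cv1": 1, "cv2": 0},
    {"age": 41, "y": 0, "cv1": 1, "cv2": 1},
    {"age": 25, "y": 1, "cv1": 0, "cv2": 1},
    {"age": 52, "y": 0, "cv1": 0, "cv2": 1},
]

def split_by_column(rows, column):
    """Partition rows into (train, validation) for one CV split column."""
    train = [r for r in rows if r[column] == 1]
    validation = [r for r in rows if r[column] == 0]
    return train, validation

for column in ["cv1", "cv2"]:
    train, validation = split_by_column(rows, column)
    # Every row lands in exactly one side of each split.
    assert len(train) + len(validation) == len(rows)
```

Passing `cv_split_column_names=['cv1', 'cv2']` would tell AutoML to derive one validation pass per column in this fashion.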
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-and-where https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-and-where.md
@@ -1,21 +1,21 @@
---
-title: How and where to deploy models
+title: How to deploy machine learning models
titleSuffix: Azure Machine Learning
-description: 'Learn how and where to deploy your Azure Machine Learning models, including Azure Container Instances, Azure Kubernetes Service, Azure IoT Edge, and FPGA.'
+description: 'Learn how and where to deploy machine learning models. Deploy to Azure Container Instances, Azure Kubernetes Service, Azure IoT Edge, and FPGA.'
services: machine-learning ms.service: machine-learning ms.subservice: core ms.author: gopalv author: gvashishtha ms.reviewer: larryfr
-ms.date: 12/11/2020
+ms.date: 01/13/2021
ms.topic: conceptual ms.custom: how-to, devx-track-python, deploy, devx-track-azurecli ---
-# Deploy models with Azure Machine Learning
+# Deploy machine learning models to Azure
-Learn how to deploy your machine learning model as a web service in the Azure cloud or to Azure IoT Edge devices.
+Learn how to deploy your machine learning or deep learning model as a web service in the Azure cloud. You can also deploy to Azure IoT Edge devices.
The workflow is similar no matter where you deploy your model:
@@ -26,7 +26,7 @@ The workflow is similar no matter where you deploy your model:
1. Deploy the model to the compute target. 1. Test the resulting web service.
-For more information on the concepts involved in the deployment workflow, see [Manage, deploy, and monitor models with Azure Machine Learning](concept-model-management-and-deployment.md).
+For more information on the concepts involved in the machine learning deployment workflow, see [Manage, deploy, and monitor models with Azure Machine Learning](concept-model-management-and-deployment.md).
## Prerequisites
@@ -192,7 +192,7 @@ A minimal inference configuration can be written as:
} ```
-This specifies that the deployment will use the file `score.py` in the `./working_dir` directory to process incoming requests.
+This specifies that the machine learning deployment will use the file `score.py` in the `./working_dir` directory to process incoming requests.
[See this article](./reference-azure-machine-learning-cli.md#inference-configuration-schema) for a more thorough discussion of inference configurations.
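As a hedged sketch of the general shape of an entry script such as `score.py`: Azure ML calls `init()` once when the service starts and `run()` for each request. The stand-in model below is invented for illustration; a real script would load a registered model artifact in `init()`:

```python
import json

model = None

def init():
    """Called once at service startup; load the model here."""
    global model
    def score(features):
        # Stand-in for a real model's predict(); sums each feature row.
        return [sum(row) for row in features]
    model = score

def run(raw_data):
    """Called per request; parse JSON, score, return a serializable result."""
    data = json.loads(raw_data)["data"]
    return {"result": model(data)}

init()
print(run('{"data": [[1, 2], [3, 4]]}'))  # {'result': [3, 7]}
```

The request payload shape (`{"data": [...]}`) is a common convention but ultimately whatever your `run()` chooses to parse.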
@@ -264,7 +264,7 @@ from azureml.core.webservice import AciWebservice, AksWebservice, LocalWebservic
---
-## Deploy your model
+## Deploy your machine learning model
You are now ready to deploy your model.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-sdk-setup-local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-sdk-setup-local.md
@@ -49,18 +49,20 @@ If this command returns a `conda not found` error, [download and install Minicon
Once you have installed Conda, use a terminal or Anaconda Prompt window to create a new environment: ```bash
-conda create -n tutorial python=3.7
+conda create -n tutorial python=3.8
``` Next, install the Azure Machine Learning SDK into the conda environment you created: ```bash conda activate tutorial
-pip install azureml-sdk
+pip install azureml-core
``` > [!NOTE]
-> It takes approximately 5 minutes for the Azure Machine Learning SDK install to complete.
+> It takes approximately 2 minutes for the Azure Machine Learning SDK install to complete.
+>
> If you get a timeout error, try `pip install --default-timeout=100 azureml-core` instead.
> [!div class="nextstepaction"]
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-sdk-train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-sdk-train.md
@@ -35,7 +35,8 @@ In this tutorial, you:
## Prerequisites
-* Completion of [part 2](tutorial-1st-experiment-hello-world.md) of the series.
+- [Anaconda](https://www.anaconda.com/download/) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html) to manage Python virtual environments and install packages.
+- Completion of [part 1](tutorial-1st-experiment-sdk-setup-local.md) and [part 2](tutorial-1st-experiment-hello-world.md) of the series.
## Create training scripts
@@ -82,7 +83,7 @@ This environment has all the dependencies that your model and training script re
## <a name="test-local"></a> Test locally
-Use the following code to test your script locally in the new environment.
+In a terminal or Anaconda Prompt window, use the following code to test your script locally in the new environment.
```bash conda deactivate # If you are still using the tutorial environment, exit it
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-auto-train-models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-auto-train-models.md
@@ -254,7 +254,7 @@ After starting the experiment, the output shown updates live as the experiment r
```python from azureml.core.experiment import Experiment
-experiment = Experiment(ws, "taxi-experiment")
+experiment = Experiment(ws, "Tutorial-NYCTaxi")
local_run = experiment.submit(automl_config, show_output=True) ```
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-pipeline-batch-scoring-classification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-pipeline-batch-scoring-classification.md
@@ -336,7 +336,7 @@ from azureml.core import Experiment
from azureml.pipeline.core import Pipeline pipeline = Pipeline(workspace=ws, steps=[batch_score_step])
-pipeline_run = Experiment(ws, 'batch_scoring').submit(pipeline)
+pipeline_run = Experiment(ws, 'Tutorial-Batch-Scoring').submit(pipeline)
pipeline_run.wait_for_completion(show_output=True) ```
@@ -400,7 +400,7 @@ import requests
rest_endpoint = published_pipeline.endpoint response = requests.post(rest_endpoint, headers=auth_header,
- json={"ExperimentName": "batch_scoring",
+ json={"ExperimentName": "Tutorial-Batch-Scoring",
"ParameterAssignments": {"process_count_per_node": 6}}) run_id = response.json()["Id"] ```
@@ -413,7 +413,7 @@ The new run will look similar to the pipeline you ran earlier in the tutorial. Y
from azureml.pipeline.core.run import PipelineRun from azureml.widgets import RunDetails
-published_pipeline_run = PipelineRun(ws.experiments["batch_scoring"], run_id)
+published_pipeline_run = PipelineRun(ws.experiments["Tutorial-Batch-Scoring"], run_id)
RunDetails(published_pipeline_run).show() ```
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-train-deploy-image-classification-model-vscode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-train-deploy-image-classification-model-vscode.md
@@ -75,7 +75,7 @@ One or more experiments can be created in your workspace to track and analyze in
> [!div class="mx-imgBorder"] > ![Create an experiment](./media/tutorial-train-deploy-image-classification-model-vscode/create-experiment.png)
-1. Name your experiment "MNIST" and press **Enter** to create the new experiment.
+1. Name your experiment "Tutorial-VSCode-MNIST" and press **Enter** to create the new experiment.
Like workspaces, a request is sent to Azure to create an experiment with the provided configurations. After a few minutes, the new experiment appears in the *Experiments* node of your workspace.
@@ -410,4 +410,4 @@ At this point, a request is sent to Azure to deploy your web service. This proce
## Next steps * For a walkthrough of how to train with Azure Machine Learning outside of Visual Studio Code, see [Tutorial: Train models with Azure Machine Learning](tutorial-train-models-with-aml.md).
-* For a walkthrough of how to edit, run, and debug code locally, see the [Python hello-world tutorial](https://code.visualstudio.com/docs/Python/Python-tutorial).
\ No newline at end of file
+* For a walkthrough of how to edit, run, and debug code locally, see the [Python hello-world tutorial](https://code.visualstudio.com/docs/Python/Python-tutorial).
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-train-models-with-aml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-train-models-with-aml.md
@@ -95,7 +95,7 @@ Create an experiment to track the runs in your workspace. A workspace can have m
```python from azureml.core import Experiment
-experiment_name = 'sklearn-mnist'
+experiment_name = 'Tutorial-sklearn-mnist'
exp = Experiment(workspace=ws, name=experiment_name) ```
marketplace https://docs.microsoft.com/en-us/azure/marketplace/gtm-your-marketplace-benefits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/gtm-your-marketplace-benefits.md
@@ -4,7 +4,7 @@ description: Go-To-Market Services - Microsoft resources that publishers can use
ms.service: marketplace ms.subservice: partnercenter-marketplace-publisher ms.topic: article
-ms.date: 09/24/2020
+ms.date: 01/13/2021
author: keferna ms.author: keferna ---
@@ -99,16 +99,16 @@ All the activities described on this page are covered by the Microsoft [publishe
Microsoft reserves the right to revoke and terminate Marketplace Rewards benefits to publishers who:
-* Engage in illegal activity using their marketplace listing.
-* Receive a purchase that is known or believed to be fraudulent.
-* Are de-listed from the commercial marketplace.
-* Use their offer to show marketing or other content that violates copyright or trademark laws.
-* Violate the policies of the [Azure sponsorship program](https://azure.microsoft.com/offers/ms-azr-0036p/), including, but not limited to, using the Azure sponsorship funding for their own internal operations or Bitcoin mining.
+- Engage in illegal activity using their marketplace listing.
+- Receive a purchase that is known or believed to be fraudulent.
+- Are de-listed from the commercial marketplace.
+- Use their offer to show marketing or other content that violates copyright or trademark laws.
+- Violate the policies of the [Azure sponsorship program](https://azure.microsoft.com/offers/ms-azr-0036p/), including, but not limited to, using the Azure sponsorship funding for their own internal operations or Bitcoin mining.
Microsoft reserves the right to revoke and terminate Marketplace Rewards when:
-* The customer making the purchase did so accidentally and wishes to cancel the purchase.
-* The customer cancels before using the partner's product.
+- The customer making the purchase did so accidentally and wishes to cancel the purchase.
+- The customer cancels before using the partner's product.
### Offer availability
marketplace https://docs.microsoft.com/en-us/azure/marketplace/support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/support.md
@@ -6,7 +6,7 @@ ms.subservice: partnercenter-marketplace-publisher
ms.topic: conceptual author: navits09 ms.author: navits
-ms.date: 09/18/2020
+ms.date: 01/14/2021
--- # Support for the commercial marketplace program in Partner Center
@@ -14,6 +14,7 @@ ms.date: 09/18/2020
Microsoft provides support for a wide variety of products and services. Finding the right support team is important to ensure an appropriate and timely response. Consider the following scenarios, which should help you route your query to the appropriate team: - If you're a publisher and have a question from a customer, ask your customer to request support using the support links in the [Azure portal](https://portal.azure.com/).
+- If you're a publisher and have detected a security issue with an application running on Azure, see [How to log a security event support ticket](/azure/security/fundamentals/event-support-ticket). Publishers must report suspected security events, including security incidents and vulnerabilities of their Azure Marketplace software and service offerings, at the earliest opportunity.
- If you're a publisher and have a question relating to your app or service, review the following support options. ## Support options for publishers
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/migrate-to-connection-monitor-from-network-performance-monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/migrate-to-connection-monitor-from-network-performance-monitor.md
@@ -19,10 +19,6 @@ ms.author: vinigam
You can migrate tests from Network Performance Monitor (NPM) to new, improved Connection Monitor with a single click and with zero downtime. To learn more about the benefits, see [Connection Monitor](./connection-monitor-overview.md).
->[!NOTE]
-> Only tests from Service Connectivity Monitor can be migrated to Connection Monitor.
->
- ## Key points to note The migration helps produce the following results:
@@ -47,7 +43,7 @@ To migrate the tests from Network Performance Monitor to Connection Monitor, do
:::image type="content" source="./media/connection-monitor-2-preview/migrate-npm-to-cm-preview.png" alt-text="Migrate tests from Network Performance Monitor to Connection Monitor" lightbox="./media/connection-monitor-2-preview/migrate-npm-to-cm-preview.png":::
-1. In the drop-down lists, select your subscription and workspace, and then select the NPM feature you want to migrate. Currently, you can migrate tests only from Service Connectivity Monitor.
+1. In the drop-down lists, select your subscription and workspace, and then select the NPM feature you want to migrate.
1. Select **Import** to migrate the tests. After the migration begins, the following changes take place:
@@ -69,5 +65,5 @@ After the migration, be sure to:
## Next steps To learn more about Connection Monitor, see:
-* [Migrate from Connection Monitor to Connection Monitor](./migrate-to-connection-monitor-from-connection-monitor-classic.md)
-* [Create Connection Monitor by using the Azure portal](./connection-monitor-create-using-portal.md)
\ No newline at end of file
+* [Migrate from Connection Monitor (classic) to Connection Monitor](./migrate-to-connection-monitor-from-connection-monitor-classic.md)
+* [Create Connection Monitor by using the Azure portal](./connection-monitor-create-using-portal.md)
networking https://docs.microsoft.com/en-us/azure/networking/edge-zones-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/edge-zones-overview.md
@@ -2,12 +2,12 @@
title: About Azure Edge Zone Preview description: 'Learn about edge computing offerings from Microsoft: Azure Edge Zone.' services: vnf-manager
-author: ganesr
+author: cherylmc
ms.service: vnf-manager ms.topic: article
-ms.date: 07/07/2020
-ms.author: ganesr
+ms.date: 01/13/2021
+ms.author: cherylmc
---
@@ -35,7 +35,7 @@ There are three types of Azure Edge Zones:
![Azure Edge Zones](./media/edge-zones-overview/edge-zones.png "Azure Edge Zones")
-Azure Edge Zones are small-footprint extensions of Azure placed in population centers that are far away from Azure regions. Azure Edge Zones support VMs, containers, and a selected set of Azure services that let you run latency-sensitive and throughput-intensive applications close to end users. Azure Edge Zones are part of the Microsoft global network. They provide secure, reliable, high-bandwidth connectivity between applications that run at the edge zone close to the user. And they offer the full set of Azure services running within Azure regions. Azure Edge Zones are owned and operated by Microsoft. You can use the same set of Azure tools and the same portal to manage and deploy services into Edge Zones.
+Azure Edge Zones are small-footprint extensions of Azure placed in population centers that are far away from Azure regions. Azure Edge Zones support VMs, containers, and a selected set of Azure services that let you run latency-sensitive and throughput-intensive applications close to end users. Azure Edge Zones are part of the Microsoft global network. They provide secure, reliable, high-bandwidth connectivity between applications that run at the edge zone close to the user. Azure Edge Zones are owned and operated by Microsoft. You can use the same set of Azure tools and the same portal to manage and deploy services into Edge Zones.
Typical use cases include:
@@ -58,7 +58,7 @@ Azure Edge Zones will be available in the following metro areas:
Azure Edge Zones with Carrier are small-footprint extensions of Azure that are placed in mobile operators' datacenters in population centers. Azure Edge Zone with Carrier infrastructure is placed one hop away from the mobile operator's 5G network. This placement offers latency of less than 10 milliseconds to applications from mobile devices.
-Azure Edge Zones with Carrier are deployed in mobile operators' datacenters and connected to the Microsoft global network. They provide secure, reliable, high-bandwidth connectivity between applications that run close to the user. And they offer the full set of Azure services running within Azure regions. Developers can use the same set of familiar tools to build and deploy services into the Edge Zones.
+Azure Edge Zones with Carrier are deployed in mobile operators' datacenters and connected to the Microsoft global network. They provide secure, reliable, high-bandwidth connectivity between applications that run close to the user. Developers can use the same set of familiar tools to build and deploy services into the Edge Zones.
Typical use cases include:
postgresql https://docs.microsoft.com/en-us/azure/postgresql/concepts-hyperscale-configuration-options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-hyperscale-configuration-options.md
@@ -6,7 +6,7 @@ ms.author: jonels
ms.service: postgresql ms.subservice: hyperscale-citus ms.topic: conceptual
-ms.date: 7/1/2020
+ms.date: 1/12/2021
--- # Azure Database for PostgreSQL ΓÇô Hyperscale (Citus) configuration options
@@ -140,6 +140,13 @@ Up to 2 TiB of storage is supported on coordinator and worker nodes. See the
available storage options and IOPS calculation [above](#compute-and-storage) for node and cluster sizes.
+### Database creation
+
+The Azure portal provides credentials to connect to exactly one database per
+Hyperscale (Citus) server group, the `citus` database. Creating another
+database is currently not allowed, and the CREATE DATABASE command will fail
+with an error.
+ ## Pricing For the most up-to-date pricing information, see the service [pricing page](https://azure.microsoft.com/pricing/details/postgresql/).
postgresql https://docs.microsoft.com/en-us/azure/postgresql/howto-hyperscale-useful-diagnostic-queries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-hyperscale-useful-diagnostic-queries.md
@@ -294,6 +294,37 @@ Example output:
└───────────┴────────────────┘ ```
+## Cache hit rate
+
+Most applications typically access a small fraction of their total data at
+once. PostgreSQL keeps frequently accessed data in memory to avoid slow reads
+from disk. You can see statistics about it in the
+[pg_statio_user_tables](https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STATIO-ALL-TABLES-VIEW)
+view.
+
+An important measurement is what percentage of data comes from the memory cache
+vs the disk in your workload:
+
+``` postgresql
+SELECT
+ sum(heap_blks_read) AS heap_read,
+ sum(heap_blks_hit) AS heap_hit,
+ sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) AS ratio
+FROM
+ pg_statio_user_tables;
+```
+
+Example output:
+
+```
+ heap_read | heap_hit | ratio
+-----------+----------+------------------------
+ 1 | 132 | 0.99248120300751879699
+```
+
+If the ratio is significantly lower than 99%, consider increasing the cache
+available to your database.
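As a quick sanity check, the ratio in the example output can be reproduced directly from the two counters (a minimal sketch; the 99% threshold mirrors the guidance above):

```python
# Counters taken from the sample pg_statio_user_tables output above.
heap_read = 1    # table blocks read from disk
heap_hit = 132   # table blocks served from the PostgreSQL buffer cache

ratio = heap_hit / (heap_hit + heap_read)
print(f"cache hit ratio: {ratio:.4f}")  # -> cache hit ratio: 0.9925

# Flag workloads that may need more memory available for caching.
print(ratio < 0.99)  # -> False
```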
+
## Next steps

* Learn about other [system tables](reference-hyperscale-metadata.md)
remote-rendering https://docs.microsoft.com/en-us/azure/remote-rendering/quickstarts/deploy-to-hololens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/quickstarts/deploy-to-hololens.md
@@ -43,7 +43,7 @@ Make sure your credentials are saved properly with the scene and you can connect
1. For the project 'Quickstart', go to *Properties > Debugging* 1. Make sure the configuration *Release* is active 1. Set *Debugger to Launch* to **Remote Machine**
- 1. Change *Machine Name* to the **IP of your HoleLens**
+ 1. Change *Machine Name* to the **IP of your HoloLens**
## Launch the sample project
role-based-access-control https://docs.microsoft.com/en-us/azure/role-based-access-control/transfer-subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/transfer-subscription.md
@@ -203,7 +203,7 @@ Managed identities do not get updated when a subscription is transferred to anot
| `alternativeNames` property does not include `isExplicit` | System-assigned | | `alternativeNames` property includes `isExplicit=True` | User-assigned |
- You can also use [az identity list](/cli/azure/identity#az_identity_list) to just list user-assigned managed identities. For more information, see [Create, list or delete a user-assigned managed identity using the Azure CLI](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-cli.md).
+ You can also use [az identity list](/cli/azure/identity#az_identity_list) to just list user-assigned managed identities. For more information, see [Create, list, or delete a user-assigned managed identity using the Azure CLI](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-cli.md).
```azurecli az identity list
@@ -334,7 +334,7 @@ In this step, you transfer the subscription from the source directory to the tar
| --- | --- | | Virtual machines | [Configure managed identities for Azure resources on an Azure VM using Azure CLI](../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md#user-assigned-managed-identity) | | Virtual machine scale sets | [Configure managed identities for Azure resources on a virtual machine scale set using Azure CLI](../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vmss.md#user-assigned-managed-identity) |
- | Other services | [Services that support managed identities for Azure resources](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md)<br/>[Create, list or delete a user-assigned managed identity using the Azure CLI](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-cli.md) |
+ | Other services | [Services that support managed identities for Azure resources](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md)<br/>[Create, list, or delete a user-assigned managed identity using the Azure CLI](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-cli.md) |
1. Use [az role assignment create](/cli/azure/role/assignment#az_role_assignment_create) to create the role assignments for user-assigned managed identities. For more information, see [Assign a managed identity access to a resource using Azure CLI](../active-directory/managed-identities-azure-resources/howto-assign-access-cli.md).
search https://docs.microsoft.com/en-us/azure/search/cognitive-search-quickstart-blob https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-quickstart-blob.md
@@ -18,7 +18,7 @@ In this quickstart, you'll combine services and data in the Azure cloud to creat
## Prerequisites
-Before you begin, create the following services:
+Before you begin, have the following prerequisites in place:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
search https://docs.microsoft.com/en-us/azure/search/search-explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-explorer.md
@@ -8,36 +8,36 @@ author: HeidiSteen
ms.author: heidist ms.service: cognitive-search ms.topic: quickstart
-ms.date: 09/25/2020
+ms.date: 01/12/2021
--- # Quickstart: Use Search explorer to run queries in the portal **Search explorer** is a built-in query tool used for running queries against a search index in Azure Cognitive Search. This tool makes it easy to learn query syntax, test a query or filter expression, or confirm data refresh by checking whether new content exists in the index.
-This quickstart uses an existing index to demonstrate Search explorer. Requests are formulated using the [Search REST API](/rest/api/searchservice/), with responses returned as JSON documents.
+This quickstart uses an existing index to demonstrate Search explorer. Requests are formulated using the [Search REST API](/rest/api/searchservice/search-documents), with responses returned as verbose JSON documents.
## Prerequisites
-Before you begin, you must have the following:
+Before you begin, have the following prerequisites in place:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).

+ An Azure Cognitive Search service. [Create a service](search-create-service-portal.md) or [find an existing service](https://ms.portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) under your current subscription. You can use a free service for this quickstart.
-+ The *realestate-us-sample-index* is used for this quickstart. Use the [**Import data**](search-import-data-portal.md) wizard to create the index. In the first step, when asked for the data source, choose **Samples** and then select the **realestate-us-sample** data source. Accept all of the wizard defaults to create the index.
++ The *realestate-us-sample-index* is used for this quickstart. Use the [Quickstart: Create an index](search-import-data-portal.md) to create the index using default values. A built-in sample data source hosted by Microsoft (**realestate-us-sample**) provides the data.

## Start Search explorer
-1. In the [Azure portal](https://portal.azure.com), open the search service page from the dashboard or [find your service](https://ms.portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices).
+1. In the [Azure portal](https://portal.azure.com), open the search overview page from the dashboard or [find your service](https://ms.portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices).
1. Open Search explorer from the command bar:
- :::image type="content" source="media/search-explorer/search-explorer-cmd2.png" alt-text="Search explorer command in portal" border="false":::
+ :::image type="content" source="media/search-explorer/search-explorer-cmd2.png" alt-text="Search explorer command in portal" border="true":::
Or use the embedded **Search explorer** tab on an open index:
- :::image type="content" source="media/search-explorer/search-explorer-tab.png" alt-text="Search explorer tab" border="false":::
+ :::image type="content" source="media/search-explorer/search-explorer-tab.png" alt-text="Search explorer tab" border="true":::
## Unspecified query
@@ -51,7 +51,7 @@ Equivalent syntax for an empty search is `*` or `search=*`.
**Results**
- :::image type="content" source="media/search-explorer/search-explorer-example-empty.png" alt-text="Unqualified or empty query example" border="false":::
+ :::image type="content" source="media/search-explorer/search-explorer-example-empty.png" alt-text="Unqualified or empty query example" border="true":::
## Free text search
@@ -67,11 +67,11 @@ Notice that when you provide search criteria, such as query terms or expressions
You can use Ctrl-F to search within results for specific terms of interest.
- :::image type="content" source="media/search-explorer/search-explorer-example-freetext.png" alt-text="Free text query example" border="false":::
+ :::image type="content" source="media/search-explorer/search-explorer-example-freetext.png" alt-text="Free text query example" border="true":::
## Count of matching documents
-Add **$count=true** to get the number of matches found in an index. On an empty search, count is the total number of documents in the index. On a qualified search, it's the number of documents matching the query input.
+Add **$count=true** to get the number of matches found in an index. On an empty search, count is the total number of documents in the index. On a qualified search, it's the number of documents matching the query input. Recall that the service returns the top 50 matches by default, so you might have more matches in the index than what's included in the results.
```http $count=true
@@ -79,7 +79,7 @@ Add **$count=true** to get the number of matches found in an index. On an empty
**Results**
- :::image type="content" source="media/search-explorer/search-explorer-example-count.png" alt-text="Count of matching documents in index" border="false":::
+ :::image type="content" source="media/search-explorer/search-explorer-example-count.png" alt-text="Count of matching documents in index" border="true":::
## Limit fields in search results
@@ -91,11 +91,13 @@ Add [**$select**](search-query-odata-select.md) to limit results to the explicit
**Results**
- :::image type="content" source="media/search-explorer/search-explorer-example-selectfield.png" alt-text="Restrict fields in search results" border="false":::
+ :::image type="content" source="media/search-explorer/search-explorer-example-selectfield.png" alt-text="Restrict fields in search results" border="true":::
## Return next batch of results
-Azure Cognitive Search returns the top 50 matches based on the search rank. To get the next set of matching documents, append **$top=100,&$skip=50** to increase the result set to 100 documents (default is 50, maximum is 1000), skipping the first 50 documents. Recall that you need to provide search criteria, such as a query term or expression, to get ranked results. Notice that search scores decrease the deeper you reach into search results.
+Azure Cognitive Search returns the top 50 matches based on the search rank. To get the next set of matching documents, append **$top=100&$skip=50** to increase the result set to 100 documents (default is 50, maximum is 1000), skipping the first 50 documents. You can check the document key (listingID) to identify a document.
+
+Recall that you need to provide search criteria, such as a query term or expression, to get ranked results. Notice that search scores decrease the deeper you reach into search results.
```http search=seattle condo&$select=listingId,beds,baths,description,street,city,price&$count=true&$top=100&$skip=50
@@ -103,7 +105,7 @@ Azure Cognitive Search returns the top 50 matches based on the search rank. To g
**Results**
- :::image type="content" source="media/search-explorer/search-explorer-example-topskip.png" alt-text="Return next batch of search results" border="false":::
+ :::image type="content" source="media/search-explorer/search-explorer-example-topskip.png" alt-text="Return next batch of search results" border="true":::
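The **$top**/**$skip** arithmetic above can be sketched as a small paging helper. This is a hypothetical illustration, not part of the service API; the defaults follow the limits stated above (page size 50, **$top** capped at 1,000):

```python
def page_windows(total_needed, page_size=50, max_top=1000):
    """Yield ($top, $skip) pairs for walking through ranked results.

    Default page size is 50 and $top is capped at 1000, matching the
    service limits described above.
    """
    skip = 0
    while skip < total_needed:
        top = min(page_size, total_needed - skip, max_top)
        yield top, skip
        skip += top

# First three pages of 50 results each:
print(list(page_windows(150)))  # -> [(50, 0), (50, 50), (50, 100)]
```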
## Filter expressions (greater than, less than, equal to)
@@ -115,7 +117,7 @@ Use the [**$filter**](search-query-odata-filter.md) parameter when you want to s
**Results**
- :::image type="content" source="media/search-explorer/search-explorer-example-filter.png" alt-text="Filter by criteria" border="false":::
+ :::image type="content" source="media/search-explorer/search-explorer-example-filter.png" alt-text="Filter by criteria" border="true":::
## Order-by expressions
@@ -127,7 +129,7 @@ Add [**$orderby**](search-query-odata-orderby.md) to sort results by another fie
**Results**
- :::image type="content" source="media/search-explorer/search-explorer-example-ordery.png" alt-text="Change the sort order" border="false":::
+ :::image type="content" source="media/search-explorer/search-explorer-example-ordery.png" alt-text="Change the sort order" border="true":::
Both **$filter** and **$orderby** expressions are OData constructions. For more information, see [Filter OData syntax](/rest/api/searchservice/odata-expression-syntax-for-azure-search).
@@ -137,13 +139,13 @@ Both **$filter** and **$orderby** expressions are OData constructions. For more
In this quickstart, you used **Search explorer** to query an index using the REST API.
-+ Results are returned as verbose JSON documents so that you can view document construction and content, in entirety. You can use query expressions, shown in the examples, to limit which fields are returned.
++ Results are returned as verbose JSON documents so that you can view document construction and content, in entirety. The **$select** parameter in a query expression can limit which fields are returned.

+ Documents are composed of all fields marked as **Retrievable** in the index. To view index attributes in the portal, click *realestate-us-sample* in the **Indexes** list on the search overview page.

+ Free-form queries, similar to what you might enter in a commercial web browser, are useful for testing an end-user experience. For example, assuming the built-in realestate sample index, you could enter "Seattle apartments lake washington", and then you can use Ctrl-F to find terms within the search results.
-+ Query and filter expressions are articulated in a syntax supported by Azure Cognitive Search. The default is a [simple syntax](/rest/api/searchservice/simple-query-syntax-in-azure-search), but you can optionally use [full Lucene](/rest/api/searchservice/lucene-query-syntax-in-azure-search) for more powerful queries. [Filter expressions](/rest/api/searchservice/odata-expression-syntax-for-azure-search) are an OData syntax.
++ Query and filter expressions are articulated in a syntax implemented by Azure Cognitive Search. The default is a [simple syntax](/rest/api/searchservice/simple-query-syntax-in-azure-search), but you can optionally use [full Lucene](/rest/api/searchservice/lucene-query-syntax-in-azure-search) for more powerful queries. [Filter expressions](/rest/api/searchservice/odata-expression-syntax-for-azure-search) are an OData syntax.

## Clean up resources
@@ -155,7 +157,7 @@ If you are using a free service, remember that you are limited to three indexes,
## Next steps
-To learn more about query structures and syntax, use Postman or an equivalent tool to create query expressions that leverage more parts of the API. The [Search REST API](/rest/api/searchservice/) is especially helpful for learning and exploration.
+To learn more about query structures and syntax, use Postman or an equivalent tool to create query expressions that leverage more parts of the API. The [Search REST API](/rest/api/searchservice/search-documents) is especially helpful for learning and exploration.
> [!div class="nextstepaction"]
-> [Create a basic query in Postman](search-query-simple-examples.md)
\ No newline at end of file
+> [Create a basic query in Postman](search-get-started-rest.md)
\ No newline at end of file
search https://docs.microsoft.com/en-us/azure/search/search-get-started-dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-get-started-dotnet.md
@@ -392,7 +392,7 @@ The [SearchResults](/dotnet/api/azure.search.documents.models.searchresults-1) c
response = srchclient.Search<Hotel>("*", options);
WriteDocuments(response);
- ```
+ ```
1. In the second query, search on a term, add a filter that selects documents where Rating is greater than 4, and then sort by Rating in descending order. Filter is a boolean expression that is evaluated over [IsFilterable](/dotnet/api/azure.search.documents.indexes.models.searchfield.isfilterable) fields in an index. Filter queries either include or exclude values. As such, there is no relevance score associated with a filter query.
@@ -511,4 +511,4 @@ In this C# quickstart, you worked through a set of tasks to create an index, loa
Want to optimize and save on your cloud spending? > [!div class="nextstepaction"]
-> [Start analyzing costs with Cost Management](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
\ No newline at end of file
+> [Start analyzing costs with Cost Management](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
search https://docs.microsoft.com/en-us/azure/search/search-indexer-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-indexer-overview.md
@@ -8,29 +8,39 @@ author: HeidiSteen
ms.author: heidist ms.service: cognitive-search ms.topic: conceptual
-ms.date: 09/25/2020
+ms.date: 01/11/2020
ms.custom: fasttrack-edit --- # Indexers in Azure Cognitive Search
-An *indexer* in Azure Cognitive Search is a crawler that extracts searchable data and metadata from an external Azure data source and populates an index based on field-to-field mappings between the index and your data source. This approach is sometimes referred to as a 'pull model' because the service pulls data in without you having to write any code that adds data to an index.
+An *indexer* in Azure Cognitive Search is a crawler that extracts searchable data and metadata from an external Azure data source and populates a search index using field-to-field mappings between source data and your index. This approach is sometimes referred to as a 'pull model' because the service pulls data in without you having to write any code that adds data to an index.
-Indexers are based on data source types or platforms, with individual indexers for SQL Server on Azure, Cosmos DB, Azure Table Storage and Blob Storage. Blob storage indexers have additional properties specific to blob content types.
-
-You can use an indexer as the sole means for data ingestion, or use a combination of techniques that include the use of an indexer for loading just some of the fields in your index.
+Indexers are Azure-only, with individual indexers for Azure SQL, Azure Cosmos DB, Azure Table Storage, and Blob Storage. When you configure an indexer, you'll specify a data source (origin), as well as an index (destination). Several indexers, such as the Blob Storage indexer, have additional properties specific to their content type.
You can run indexers on demand or on a recurring data refresh schedule that runs as often as every five minutes. More frequent updates require a push model that simultaneously updates data in both Azure Cognitive Search and your external data source.
+## Usage scenarios
+
+You can use an indexer as the sole means for data ingestion, or use a combination of techniques that include loading just some of the fields in your index, optionally transforming or enriching content along the way. The following table summarizes the main scenarios.
+
+| Scenario |Strategy |
+|----------|---------|
+| Single source | This pattern is the simplest: one data source is the sole content provider for a search index. From the source, you'll identify one field containing unique values to serve as the document key in the search index. The unique value will be used as an identifier. All other source fields are mapped implicitly or explicitly to corresponding fields in an index. </br></br>An important takeaway is that the value of a document key originates from source data. A search service does not generate key values. On subsequent runs, incoming documents with new keys are added, while incoming documents with existing keys are either merged or overwritten, depending on whether index fields are null or populated. |
+| Multiple sources| An index can accept content from multiple sources, where each run brings new content from a different source. </br></br>One outcome might be an index that gains documents after each indexer run, with entire documents created in full from each source. The challenge for this scenario lies in designing an index schema that works for all incoming data, and a document key that is uniform in the search index. For example, if the values that uniquely identify a document are metadata_storage_path in a blob container and a primary key in a SQL table, you can imagine that one or both sources must be amended to provide key values in a common format, regardless of content origin. For this scenario, you should expect to perform some level of pre-processing to homogenize the data so that it can be pulled into a single index.</br></br>An alternative outcome might be search documents that are partially populated on the first run, and then further populated by subsequent runs to bring in values from other sources. The challenge of this pattern is making sure that each indexing run is targeting the same document. Merging fields into an existing document requires a match on the document key. For a demonstration of this scenario, see [Tutorial: Index from multiple data sources](tutorial-multiple-data-sources.md). |
+| Content transformation | Cognitive Search supports optional [AI enrichment](cognitive-search-concept-intro.md) behaviors that add image analysis and natural language processing to create new searchable content and structure. AI enrichment is defined by a [skillset](cognitive-search-working-with-skillsets.md), attached to an indexer. To perform AI enrichment, the indexer still needs an index and data source, but in this scenario, adds skillset processing to indexer execution. |
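For the multiple-sources pattern above, one way to homogenize document keys from different origins is to prefix the native identifier with its source and encode the result into key-safe characters. This is a hypothetical pre-processing sketch (the blob indexer's own Base64 key encoding of `metadata_storage_path` is analogous), not the service's implementation:

```python
import base64

def to_document_key(source, native_id):
    """Build a search document key that is unique across data sources.

    Prefixes the native identifier with its origin, then applies URL-safe
    Base64 so the result contains only characters valid in a document key.
    """
    raw = f"{source}:{native_id}"
    return base64.urlsafe_b64encode(raw.encode("utf-8")).decode("ascii").rstrip("=")

blob_key = to_document_key("blob", "https://acct.blob.core.windows.net/docs/a.pdf")
sql_key = to_document_key("sql", "10044")
print(sql_key)  # -> c3FsOjEwMDQ0
```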
+
## Approaches for creating and managing indexers

You can create and manage indexers using these approaches:
-* [Portal > Import Data Wizard](search-import-data-portal.md)
-* [Service REST API](/rest/api/searchservice/Indexer-operations)
-* [.NET SDK](/dotnet/api/azure.search.documents.indexes.models.searchindexer)
++ [Portal > Import Data Wizard](search-import-data-portal.md)
++ [Service REST API](/rest/api/searchservice/Indexer-operations)
++ [.NET SDK](/dotnet/api/azure.search.documents.indexes.models.searchindexer)
+
+If you're using an SDK, create a [SearchIndexerClient](/dotnet/api/azure.search.documents.indexes.searchindexerclient) to work with indexers, data sources, and skillsets. The above link is for the .NET SDK, but all SDKs provide a SearchIndexerClient and similar APIs.
-Initially, a new indexer is announced as a preview feature. Preview features are introduced in APIs (REST and .NET) and then integrated into the portal after graduating to general availability. If you're evaluating a new indexer, you should plan on writing code.
+Initially, new data sources are announced as preview features and are REST-only. After graduating to general availability, full support is built into the portal and into the various SDKs, each of which is on its own release schedule.
## Permissions
@@ -42,15 +52,15 @@ All operations related to indexers, including GET requests for status or definit
Indexers crawl data stores on Azure.
-* [Azure Blob Storage](search-howto-indexing-azure-blob-storage.md)
-* [Azure Data Lake Storage Gen2](search-howto-index-azure-data-lake-storage.md) (in preview)
-* [Azure Table Storage](search-howto-indexing-azure-tables.md)
-* [Azure Cosmos DB](search-howto-index-cosmosdb.md)
-* [Azure SQL Database](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md)
-* [SQL Managed Instance](search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md)
-* [SQL Server on Azure Virtual Machines](search-howto-connecting-azure-sql-iaas-to-azure-search-using-indexers.md)
++ [Azure Blob Storage](search-howto-indexing-azure-blob-storage.md)
++ [Azure Data Lake Storage Gen2](search-howto-index-azure-data-lake-storage.md) (in preview)
++ [Azure Table Storage](search-howto-indexing-azure-tables.md)
++ [Azure Cosmos DB](search-howto-index-cosmosdb.md)
++ [Azure SQL Database](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md)
++ [SQL Managed Instance](search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md)
++ [SQL Server on Azure Virtual Machines](search-howto-connecting-azure-sql-iaas-to-azure-search-using-indexers.md)
-## Indexer Stages
+## Stages of indexing
On an initial run, when the index is empty, an indexer will read in all of the data provided in the table or container. On subsequent runs, the indexer can usually detect and retrieve just the data that has changed. For blob data, change detection is automatic. For other data sources like Azure SQL or Cosmos DB, change detection must be enabled.
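The change-detection idea for sources like Azure SQL can be sketched as a high-water-mark comparison: a subsequent run picks up only rows modified after the last successful run. This is a minimal, hypothetical illustration (the row shape and column names are invented; the service's integrated change tracking is the real mechanism):

```python
from datetime import datetime, timezone

def changed_rows(rows, high_water_mark):
    """Return only the rows modified after the last successful run."""
    return [row for row in rows if row["modified"] > high_water_mark]

rows = [
    {"id": 1, "modified": datetime(2021, 1, 10, tzinfo=timezone.utc)},
    {"id": 2, "modified": datetime(2021, 1, 13, tzinfo=timezone.utc)},
]
last_successful_run = datetime(2021, 1, 12, tzinfo=timezone.utc)
print([row["id"] for row in changed_rows(rows, last_successful_run)])  # -> [2]
```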
@@ -64,9 +74,9 @@ Document cracking is the process of opening files and extracting content. Depend
Examples:
-* When the document is a record in an [Azure SQL data source](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md), the indexer will extract each of the fields for the record.
-* When the document is a PDF file in an [Azure Blob Storage data source](search-howto-indexing-azure-blob-storage.md), the indexer will extract the text, images and metadata for the file.
-* When the document is a record in a [Cosmos DB data source](search-howto-index-cosmosdb.md), the indexer will extract the fields and subfields from the Cosmos DB document.
++ When the document is a record in an [Azure SQL data source](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md), the indexer will extract each of the fields for the record.
++ When the document is a PDF file in an [Azure Blob Storage data source](search-howto-indexing-azure-blob-storage.md), the indexer will extract text, images, and metadata.
++ When the document is a record in a [Cosmos DB data source](search-howto-index-cosmosdb.md), the indexer will extract the fields and subfields from the Cosmos DB document.

### Stage 2: Field mappings
@@ -91,18 +101,21 @@ The next image shows a sample indexer [debug session](cognitive-search-debug-ses
Indexers can offer features that are unique to the data source. In this respect, some aspects of indexer or data source configuration will vary by indexer type. However, all indexers share the same basic composition and requirements. Steps that are common to all indexers are covered below.

### Step 1: Create a data source
+
An indexer obtains data source connection from a *data source* object. The data source definition provides a connection string and possibly credentials. Call the [Create Datasource](/rest/api/searchservice/create-data-source) REST API or [SearchIndexerDataSourceConnection class](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourceconnection) to create the resource.

Data sources are configured and managed independently of the indexers that use them, which means a data source can be used by multiple indexers to load more than one index at a time.

### Step 2: Create an index
+
An indexer will automate some tasks related to data ingestion, but creating an index is generally not one of them. As a prerequisite, you must have a predefined index with fields that match those in your external data source. Fields need to match by name and data type. For more information about structuring an index, see [Create an Index (Azure Cognitive Search REST API)](/rest/api/searchservice/Create-Index) or [SearchIndex class](/dotnet/api/azure.search.documents.indexes.models.searchindex). For help with field associations, see [Field mappings in Azure Cognitive Search indexers](search-indexer-field-mappings.md).

> [!Tip]
> Although indexers cannot generate an index for you, the **Import data** wizard in the portal can help. In most cases, the wizard can infer an index schema from existing metadata in the source, presenting a preliminary index schema which you can edit in-line while the wizard is active. Once the index is created on the service, further edits in the portal are mostly limited to adding new fields.

Consider the wizard for creating, but not revising, an index. For hands-on learning, step through the [portal walkthrough](search-get-started-portal.md).

### Step 3: Create and schedule the indexer
-The indexer definition is a construct that brings together all of the elements related to data ingestion. Required elements include a data source and index. Optional elements include a schedule and field mappings. Field mapping are only optional if source fields and index fields clearly correspond. For more information about structuring an indexer, see [Create Indexer (Azure Cognitive Search REST API)](/rest/api/searchservice/Create-Indexer).
+
+The indexer definition is a construct that brings together all of the elements related to data ingestion. Required elements include a data source and index. Optional elements include a schedule and field mappings. Field mappings are only optional if source fields and index fields clearly correspond. For more information about structuring an indexer, see [Create Indexer (Azure Cognitive Search REST API)](/rest/api/searchservice/Create-Indexer).
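As an illustration of that composition, a hypothetical indexer definition might look like the following (all names are invented; `schedule` and `fieldMappings` are the optional elements described above):

```python
import json

# Sketch of an indexer definition. name, dataSourceName, and
# targetIndexName are the required elements; schedule and fieldMappings
# are optional, as described above.
indexer = {
    "name": "hotels-indexer",
    "dataSourceName": "hotels-datasource",
    "targetIndexName": "hotels-index",
    "schedule": {"interval": "PT2H"},  # ISO 8601 duration: every two hours
    "fieldMappings": [
        {"sourceFieldName": "Id", "targetFieldName": "HotelId"}
    ],
}

required = {"name", "dataSourceName", "targetIndexName"}
print(required.issubset(indexer))  # -> True
print(json.dumps(indexer, indent=2))
```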
<a id="RunIndexer"></a>
@@ -116,9 +129,9 @@ api-key: [Search service admin key]
``` > [!NOTE]
-> When Run API returns successfully, the indexer invocation has been scheduled, but the actual processing happens asynchronously.
+> When Run API returns a success code, the indexer invocation has been scheduled, but the actual processing happens asynchronously.
-You can monitor the indexer status in the portal or through Get Indexer Status API.
+You can monitor the indexer status in the portal or through [Get Indexer Status API](/rest/api/searchservice/get-indexer-status).
<a name="GetIndexerStatus"></a>
@@ -164,11 +177,12 @@ The response contains overall indexer status, the last (or in-progress) indexer
Execution history contains up to the 50 most recent completed executions, which are sorted in reverse chronological order (so the latest execution comes first in the response).

## Next steps
+
Now that you have the basic idea, the next step is to review requirements and tasks specific to each data source type.
-* [Azure SQL Database, SQL Managed Instance, or SQL Server on an Azure virtual machine](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md)
-* [Azure Cosmos DB](search-howto-index-cosmosdb.md)
-* [Azure Blob Storage](search-howto-indexing-azure-blob-storage.md)
-* [Azure Table Storage](search-howto-indexing-azure-tables.md)
-* [Indexing CSV blobs using the Azure Cognitive Search Blob indexer](search-howto-index-csv-blobs.md)
-* [Indexing JSON blobs with Azure Cognitive Search Blob indexer](search-howto-index-json-blobs.md)
\ No newline at end of file++ [Azure SQL Database, SQL Managed Instance, or SQL Server on an Azure virtual machine](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md)++ [Azure Cosmos DB](search-howto-index-cosmosdb.md)++ [Azure Blob Storage](search-howto-indexing-azure-blob-storage.md)++ [Azure Table Storage](search-howto-indexing-azure-tables.md)++ [Indexing CSV blobs using the Azure Cognitive Search Blob indexer](search-howto-index-csv-blobs.md)++ [Indexing JSON blobs with Azure Cognitive Search Blob indexer](search-howto-index-json-blobs.md)\ No newline at end of file
security-center https://docs.microsoft.com/en-us/azure/security-center/security-center-services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-services.md
@@ -127,7 +127,7 @@ For information about when recommendations are generated for each of these prote
|[Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md)|-|-| |[Azure Defender for DNS](defender-for-dns-introduction.md)|-|-| |[Azure Defender for Kubernetes](defender-for-kubernetes-introduction.md)|Γ£ö|Γ£ö|
-|[Azure Defender for container registries](defender-for-container-registries-introduction.md)|✓ (2)|-|
+|[Azure Defender for container registries](defender-for-container-registries-introduction.md)|✓ (2)|✓ (2)|
|||

(1) Requires **Azure Defender for servers**
security-center https://docs.microsoft.com/en-us/azure/security-center/upcoming-changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/upcoming-changes.md
@@ -27,9 +27,24 @@ If you're looking for the latest release notes, you'll find them in the [What's
## Planned changes
+- [Enhancements to SQL data classification recommendation](#enhancements-to-sql-data-classification-recommendation)
- ["Not applicable" resources to be reported as "Compliant" in Azure Policy assessments](#not-applicable-resources-to-be-reported-as-compliant-in-azure-policy-assessments)
- [35 preview recommendations added to increase coverage of Azure Security Benchmark](#35-preview-recommendations-being-added-to-increase-coverage-of-azure-security-benchmark)
+
+
+### Enhancements to SQL data classification recommendation
+
+**Estimated date for change:** Q2 2021
+
+The current version of the recommendation **Sensitive data in your SQL databases should be classified** in the **Apply data classification** security control will be deprecated and replaced with a new version that's better aligned with Microsoft's data classification strategy. As a result:
+
+- The recommendation will no longer affect your secure score
+- The security control ("Apply data classification") will no longer affect your secure score
+- The recommendation's ID will also change (currently b0df6f56-862d-4730-8597-38c0fd4ebd59)
+
+
### "Not applicable" resources to be reported as "Compliant" in Azure Policy assessments

**Estimated date for change:** January 2021
service-bus-messaging https://docs.microsoft.com/en-us/azure/service-bus-messaging/duplicate-detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/duplicate-detection.md
@@ -2,27 +2,32 @@
title: Azure Service Bus duplicate message detection | Microsoft Docs
description: This article explains how you can detect duplicates in Azure Service Bus messages. The duplicate message can be ignored and dropped.
ms.topic: article
-ms.date: 06/23/2020
+ms.date: 01/13/2021
---

# Duplicate detection

If an application fails due to a fatal error immediately after it sends a message, and the restarted application instance erroneously believes that the prior message delivery did not occur, a subsequent send causes the same message to appear in the system twice.
-It is also possible for an error at the client or network level to occur a moment earlier, and for a sent message to be committed into the queue, with the acknowledgment not successfully returned to the client. This scenario leaves the client in doubt about the outcome of the send operation.
+It's also possible for an error at the client or network level to occur a moment earlier, and for a sent message to be committed into the queue, with the acknowledgment not successfully returned to the client. This scenario leaves the client in doubt about the outcome of the send operation.
Duplicate detection takes the doubt out of these situations by enabling the sender to resend the same message, while the queue or topic discards any duplicate copies.
+## How it works
Enabling duplicate detection helps keep track of the application-controlled *MessageId* of all messages sent into a queue or topic during a specified time window. If any new message is sent with *MessageId* that was logged during the time window, the message is reported as accepted (the send operation succeeds), but the newly sent message is instantly ignored and dropped. No other parts of the message other than the *MessageId* are considered. Application control of the identifier is essential, because only that allows the application to tie the *MessageId* to a business process context from which it can be predictably reconstructed when a failure occurs. For a business process in which multiple messages are sent in the course of handling some application context, the *MessageId* may be a composite of the application-level context identifier, such as a purchase order number, and the subject of the message, for example, **12345.2017/payment**.
-The *MessageId* can always be some GUID, but anchoring the identifier to the business process yields predictable repeatability, which is desired for leveraging the duplicate detection feature effectively.
+The *MessageId* can always be some GUID, but anchoring the identifier to the business process yields predictable repeatability, which is desired for using the duplicate detection feature effectively.
+
+> [!IMPORTANT]
+>- When **partitioning** is **enabled**, `MessageId+PartitionKey` is used to determine uniqueness. When sessions are enabled, partition key and session ID must be the same.
+>- When **partitioning** is **disabled** (default), only `MessageId` is used to determine uniqueness.
+>- For information about SessionId, PartitionKey, and MessageId, see [Use of partition keys](service-bus-partitioning.md#use-of-partition-keys).
+>- The [premium tier](service-bus-premium-messaging.md) doesn't support partitioning, so we recommend that you use unique message IDs in your applications and not rely on partition keys for duplicate detection.
-> [!NOTE]
-> If the duplicate detection is enabled and session ID or partition key are not set, the message ID is used as the partition key. If the message ID is also not set, .NET and AMQP libraries automatically generate a message ID for the message. For more information, see [Use of partition keys](service-bus-partitioning.md#use-of-partition-keys).
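The windowed *MessageId* check described above can be sketched in a few lines. This is an illustrative in-memory toy, not how Service Bus implements detection; `DedupQueue` and its methods are invented for this sketch:

```javascript
// Toy model of duplicate detection: a repeated send of the same
// application-controlled MessageId inside the detection window is
// reported as accepted but silently dropped, matching the behavior
// described above. Not the Service Bus implementation.
class DedupQueue {
  constructor(windowMs) {
    this.windowMs = windowMs;
    this.seen = new Map(); // MessageId -> timestamp of first accepted send
    this.delivered = [];
  }
  send(message, now = Date.now()) {
    const firstSeen = this.seen.get(message.messageId);
    if (firstSeen !== undefined && now - firstSeen < this.windowMs) {
      return "accepted-and-dropped"; // send succeeds, duplicate is discarded
    }
    this.seen.set(message.messageId, now);
    this.delivered.push(message);
    return "accepted";
  }
}

// A business-derived MessageId (order number + subject) is predictably
// reconstructable after a failure, unlike a random GUID.
const q = new DedupQueue(10 * 60 * 1000); // 10-minute detection window
const msg = { messageId: "12345.2017/payment", body: "pay order 12345" };
q.send(msg); // "accepted"
q.send(msg); // retry after a crash: "accepted-and-dropped"
```

A resend after the window has elapsed is treated as a new message, which is why the window length should exceed the longest plausible retry gap.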
## Enable duplicate detection
@@ -53,7 +58,7 @@ To learn more about Service Bus messaging, see the following topics:
* [Get started with Service Bus queues](service-bus-dotnet-get-started-with-queues.md) * [How to use Service Bus topics and subscriptions](service-bus-dotnet-how-to-use-topics-subscriptions.md)
-In scenarios where client code is unable to resubmit a message with the same *MessageId* as before, it is important to design messages which can be safely re-processed. This [blog post about idempotence](https://particular.net/blog/what-does-idempotent-mean) describes various techniques for how to do that.
+In scenarios where client code is unable to resubmit a message with the same *MessageId* as before, it is important to design messages that can be safely reprocessed. This [blog post about idempotence](https://particular.net/blog/what-does-idempotent-mean) describes various techniques for how to do that.
[1]: ./media/duplicate-detection/create-queue.png [2]: ./media/duplicate-detection/queue-prop.png
service-bus-messaging https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-nodejs-how-to-use-queues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-nodejs-how-to-use-queues.md
@@ -69,7 +69,7 @@ The following sample code shows you how to send a message to a queue.
// create a batch object let batch = await sender.createMessageBatch(); for (let i = 0; i < messages.length; i++) {
- // for each message in the arry
+ // for each message in the array
// try to add the message to the batch if (!batch.tryAddMessage(messages[i])) {
@@ -78,7 +78,7 @@ The following sample code shows you how to send a message to a queue.
await sender.sendMessages(batch); // then, create a new batch
- batch = await sender.createBatch();
+ batch = await sender.createMessageBatch();
// now, add the message failed to be added to the previous batch to this batch if (!batch.tryAddMessage(messages[i])) {
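The batching pattern in the corrected snippet (fill a batch with `tryAddMessage`, send it when full, then call `createMessageBatch` again) can be modeled without the SDK. `packIntoBatches` and its size-limited stub batch are hypothetical stand-ins for the real client objects:

```javascript
// Sketch of the batching control flow above with the SDK stubbed out:
// fill a batch until tryAdd refuses, "send" it, start a new batch,
// and retry the message that didn't fit.
function packIntoBatches(messages, maxPerBatch) {
  const batches = [];
  const createBatch = () => ({
    items: [],
    tryAdd(m) { // stand-in for batch.tryAddMessage(...)
      if (this.items.length >= maxPerBatch) return false; // batch is full
      this.items.push(m);
      return true;
    },
  });
  let batch = createBatch();
  for (const m of messages) {
    if (!batch.tryAdd(m)) {
      batches.push(batch);   // "send" the full batch
      batch = createBatch(); // then create a new batch
      if (!batch.tryAdd(m)) {
        throw new Error("message too big to fit in any batch");
      }
    }
  }
  if (batch.items.length > 0) batches.push(batch);
  return batches;
}

packIntoBatches(["a", "b", "c", "d", "e"], 2).length; // → 3 batches: [a,b], [c,d], [e]
```

The real SDK limits batches by serialized size rather than message count, but the retry-into-a-fresh-batch logic is the same.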
service-bus-messaging https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-nodejs-how-to-use-topics-subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-nodejs-how-to-use-topics-subscriptions.md
@@ -75,7 +75,7 @@ The following sample code shows you how to send a batch of messages to a Service
await sender.sendMessages(batch); // then, create a new batch
- batch = await sender.createBatch();
+ batch = await sender.createMessageBatch();
// now, add the message failed to be added to the previous batch to this batch if (!batch.tryAddMessage(messages[i])) {
@@ -203,4 +203,4 @@ See the following documentation and samples:
- [Azure Service Bus client library for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/servicebus/service-bus/README.md)
- [Samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/servicebus/service-bus/samples). The **javascript** folder has JavaScript samples and the **typescript** folder has TypeScript samples.
-- [azure-servicebus reference documentation](/javascript/api/overview/azure/service-bus)
\ No newline at end of file
+- [azure-servicebus reference documentation](/javascript/api/overview/azure/service-bus)
storage https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-access-control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-access-control.md
@@ -55,7 +55,7 @@ Both access ACLs and default ACLs have the same structure.
## Levels of permission
-The permissions on a container object are **Read**, **Write**, and **Execute**, and they can be used on files and directories as shown in the following table:
+The permissions on directories and files in a container are **Read**, **Write**, and **Execute**, and they can be used on files and directories as shown in the following table:
| | File | Directory |
|------------|-------------|----------|
@@ -64,7 +64,7 @@ The permissions on a container object are **Read**, **Write**, and **Execute**,
| **Execute (X)** | Does not mean anything in the context of Data Lake Storage Gen2 | Required to traverse the child items of a directory |

> [!NOTE]
-> If you are granting permissions by using only ACLs (no Azure RBAC), then to grant a security principal read or write access to a file, you'll need to give the security principal **Execute** permissions to the container, and to each folder in the hierarchy of folders that lead to the file.
+> If you are granting permissions by using only ACLs (no Azure RBAC), then to grant a security principal read or write access to a file, you'll need to give the security principal **Execute** permissions to the root folder of the container, and to each folder in the hierarchy of folders that lead to the file.
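The note above implies that reaching a file such as `/sales/2021/report.csv` requires **Execute** on the container's root folder, on `/sales`, and on `/sales/2021`. A hypothetical helper (`directoriesNeedingExecute` is not part of any Azure SDK) makes the rule concrete:

```javascript
// Given a file path inside a container, list every directory (root included)
// that needs Execute (X) so a principal granted access purely via ACLs
// can traverse down to the file.
function directoriesNeedingExecute(filePath) {
  const parts = filePath.split("/").filter(Boolean);
  const dirs = ["/"]; // the container's root folder
  for (let i = 0; i < parts.length - 1; i++) {
    dirs.push("/" + parts.slice(0, i + 1).join("/"));
  }
  return dirs;
}

directoriesNeedingExecute("/sales/2021/report.csv");
// → ["/", "/sales", "/sales/2021"]
```

Read or Write on the file itself is then granted separately; Execute on the intermediate directories only permits traversal.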
### Short forms for permissions
storage https://docs.microsoft.com/en-us/azure/storage/blobs/manage-access-tier https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/manage-access-tier.md new file mode 100644
@@ -0,0 +1,68 @@
+---
+title: Manage the access tier of a blob in an Azure Storage account
+description: Learn how to change the tier of a blob in a GPv2 or Blob Storage account
+author: mhopkins-msft
+
+ms.author: mhopkins
+ms.date: 01/11/2021
+ms.service: storage
+ms.subservice: blobs
+ms.topic: how-to
+ms.reviewer: klaasl
+---
+
+# Manage the access tier of a blob in an Azure Storage account
+
+Each blob has a default access tier, either hot, cool, or archive. A blob takes on the default access tier of the Azure Storage account where it is created. Blob Storage and GPv2 accounts expose the **Access Tier** attribute at the account level. This attribute specifies the default access tier for any blob that doesn't have it explicitly set at the object level. For objects with the tier set at the object level, the account tier won't apply. The archive tier can be applied only at the object level. You can switch between access tiers at any time by following the steps below.
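The inference rule just described can be stated as a rough model (names here are illustrative, not SDK APIs), assuming blobs without an explicit tier inherit the account default:

```javascript
// Toy model of access-tier inference: an explicit object-level tier wins;
// otherwise the blob takes the account's default tier. Archive is only
// valid at the blob level, never as the account default.
function effectiveTier(blobTier, accountDefaultTier) {
  if (blobTier) return blobTier; // explicit object-level tier wins
  return accountDefaultTier;     // otherwise inferred from the account
}

function isValidAccountDefault(tier) {
  return tier === "Hot" || tier === "Cool"; // Archive can't be the account default
}

effectiveTier(null, "Hot");      // → "Hot" (inferred)
effectiveTier("Archive", "Cool"); // → "Archive" (explicit, account tier ignored)
```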
+
+## Change the tier of a blob in a GPv2 or Blob Storage account
+
+The following scenarios use the Azure portal or PowerShell:
+
+# [Portal](#tab/portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the Azure portal, search for and select **All Resources**.
+
+1. Select your storage account.
+
+1. Select your container and then select your blob.
+
+1. In the **Blob properties**, select **Change tier**.
+
+1. Select the **Hot**, **Cool**, or **Archive** access tier. If your blob is currently in archive and you want to rehydrate to an online tier, you may also select a Rehydrate Priority of **Standard** or **High**.
+
+1. Select **Save** at the bottom.
+
+![Change blob tier in Azure portal](media/storage-tiers/blob-access-tier.png)
+
+# [PowerShell](#tab/powershell)
+
+The following PowerShell script can be used to change the blob tier. Initialize the `$rgName`, `$accountName`, `$containerName`, and `$blobName` variables with your resource group, storage account, container, and blob names, respectively.
+
+```powershell
+#Initialize the following with your resource group, storage account, container, and blob names
+$rgName = ""
+$accountName = ""
+$containerName = ""
+$blobName = ""
+
+#Select the storage account and get the context
+$storageAccount = Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName
+$ctx = $storageAccount.Context
+
+#Select the blob from a container
+$blob = Get-AzStorageBlob -Container $containerName -Blob $blobName -Context $ctx
+
+#Change the blob's access tier to archive
+$blob.ICloudBlob.SetStandardBlobTier("Archive")
+```
+
+---
+
+## Next steps
+
+- [How to manage the default account access tier of an Azure Storage account](../common/manage-account-default-access-tier.md)
+- [Learn about rehydrating blob data from the archive tier](storage-blob-rehydration.md)
+- [Check hot, cool, and archive pricing in Blob Storage and GPv2 accounts by region](https://azure.microsoft.com/pricing/details/storage/)
storage https://docs.microsoft.com/en-us/azure/storage/blobs/object-replication-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/object-replication-overview.md
@@ -7,7 +7,7 @@ author: tamram
ms.service: storage ms.topic: conceptual
-ms.date: 11/13/2020
+ms.date: 01/13/2021
ms.author: tamram ms.subservice: blobs ms.custom: devx-track-azurepowershell
@@ -86,6 +86,16 @@ You can also specify one or more filters as part of a replication rule to filter
The source and destination containers must both exist before you can specify them in a rule. After you create the replication policy, the destination container becomes read-only. Any attempts to write to the destination container fail with error code 409 (Conflict). However, you can call the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation on a blob in the destination container to move it to the archive tier. For more information about the archive tier, see [Azure Blob storage: hot, cool, and archive access tiers](storage-blob-storage-tiers.md#archive-access-tier).
+## Replication status
+
+You can check the replication status for a blob in the source account. For more information, see [Check the replication status of a blob](object-replication-configure.md#check-the-replication-status-of-a-blob).
+
+If the replication status for a blob in the source account indicates failure, then investigate the following possible causes:
+
+- Make sure that the object replication policy is configured on the destination account.
+- Verify that the destination container still exists.
+- If the source blob has been encrypted with a customer-provided key as part of a write operation, then object replication will fail. For more information about customer-provided keys, see [Provide an encryption key on a request to Blob storage](encryption-customer-provided-keys.md).
+
## Billing

Object replication incurs additional costs on read and write transactions against the source and destination accounts, as well as egress charges for the replication of data from the source account to the destination account and read charges to process change feed.
storage https://docs.microsoft.com/en-us/azure/storage/blobs/security-recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/security-recommendations.md
@@ -8,7 +8,7 @@ author: tamram
ms.service: storage ms.subservice: blobs ms.topic: conceptual
-ms.date: 01/12/2021
+ms.date: 01/13/2021
ms.author: tamram ms.custom: security-recommendations ---
@@ -31,7 +31,7 @@ Azure Security Center periodically analyzes the security state of your Azure res
| Turn on soft delete for containers | Soft delete for containers enables you to recover a container after it has been deleted. For more information on soft delete for containers, see [Soft delete for containers (preview)](./soft-delete-container-overview.md). | - |
| Lock storage account to prevent accidental account deletion | You can lock an Azure Resource Manager resource, such as a subscription, resource group, or storage account, to prevent other users in your organization from accidentally deleting or modifying it. Locking a storage account does not prevent data within that account from being deleted. It only prevents the account itself from being deleted. For more information, see [Lock resources to prevent unexpected changes](../../azure-resource-manager/management/lock-resources.md). | - |
| Store business-critical data in immutable blobs | Configure legal holds and time-based retention policies to store blob data in a WORM (Write Once, Read Many) state. Blobs stored immutably can be read, but cannot be modified or deleted for the duration of the retention interval. For more information, see [Store business-critical blob data with immutable storage](storage-blob-immutable-storage.md). | - |
-| Require secure transfer (HTTPS) to the storage account | ??? | - |
+| Require secure transfer (HTTPS) to the storage account | When you require secure transfer for a storage account, all requests to the storage account must be made over HTTPS. Any requests made over HTTP are rejected. Microsoft recommends that you always require secure transfer for all of your storage accounts. For more information, see [Require secure transfer to ensure secure connections](../common/storage-require-secure-transfer.md). | - |
| Limit shared access signature (SAS) tokens to HTTPS connections only | Requiring HTTPS when a client uses a SAS token to access blob data helps to minimize the risk of eavesdropping. For more information, see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md). | - |

## Identity and access management
storage https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-rehydration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-blob-rehydration.md
@@ -5,7 +5,7 @@ services: storage
author: mhopkins-msft ms.author: mhopkins
-ms.date: 04/08/2020
+ms.date: 01/08/2021
ms.service: storage ms.subservice: blobs ms.topic: conceptual
@@ -25,9 +25,13 @@ While a blob is in the archive access tier, it's considered offline and can't be
[!INCLUDE [storage-blob-rehydration](../../../includes/storage-blob-rehydrate-include.md)]
+## Monitor rehydration progress
+
+During rehydration, use the get blob properties operation to check the **Archive Status** attribute and confirm when the tier change is complete. The status reads "rehydrate-pending-to-hot" or "rehydrate-pending-to-cool" depending on the destination tier. Upon completion, the archive status property is removed, and the **Access Tier** blob property reflects the new hot or cool tier.
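Interpreting that status value can be sketched with a small helper (`pendingRehydrationTarget` is hypothetical; the property values come from the paragraph above):

```javascript
// Map the Archive Status property to the destination tier of a pending
// rehydration. The property is absent once the tier change completes,
// so anything else maps to null.
function pendingRehydrationTarget(archiveStatus) {
  switch (archiveStatus) {
    case "rehydrate-pending-to-hot":  return "Hot";
    case "rehydrate-pending-to-cool": return "Cool";
    default: return null; // property removed when rehydration is complete
  }
}

pendingRehydrationTarget("rehydrate-pending-to-hot"); // → "Hot"
pendingRehydrationTarget(undefined);                  // → null
```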
+ ## Copy an archived blob to an online tier
-If you don't want to rehydrate your archive blob, you can choose to do a [Copy Blob](/rest/api/storageservices/copy-blob) operation. Your original blob will remain unmodified in archive while a new blob is created in the online hot or cool tier for you to work on. In the Copy Blob operation, you may also set the optional *x-ms-rehydrate-priority* property to Standard or High to specify the priority at which you want your blob copy created.
+If you don't want to rehydrate your archive blob, you can choose to do a [Copy Blob](/rest/api/storageservices/copy-blob) operation. Your original blob will remain unmodified in archive while a new blob is created in the online hot or cool tier for you to work on. In the **Copy Blob** operation, you may also set the optional *x-ms-rehydrate-priority* property to Standard or High to specify the priority at which you want your blob copy created.
Copying a blob from archive can take hours to complete depending on the rehydrate priority selected. Behind the scenes, the **Copy Blob** operation reads your archive source blob to create a new online blob in the selected destination tier. The new blob may be visible when you list blobs, but the data is not available until the read from the source archive blob is complete and the data is written to the new online destination blob. The new blob is an independent copy, and any modification or deletion of it does not affect the source archive blob.
storage https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-blob-storage-tiers.md
@@ -1,72 +1,88 @@
--- title: Access tiers for Azure Blob Storage - hot, cool, and archive
-description: Read about hot, cool, and archive access tiers for Azure Blob Storage. Review storage accounts that support tiering. Compare block blob storage options.
+description: Read about hot, cool, and archive access tiers for Azure Blob Storage. Review storage accounts that support tiering.
author: mhopkins-msft ms.author: mhopkins
-ms.date: 12/08/2020
+ms.date: 01/11/2021
ms.service: storage ms.subservice: blobs ms.topic: conceptual
-ms.reviewer: clausjor
+ms.reviewer: klaasl
--- # Access tiers for Azure Blob Storage - hot, cool, and archive
-Azure storage offers different access tiers, which allow you to store blob object data in the most cost-effective manner. The available access tiers include:
+Azure storage offers different access tiers, allowing you to store blob object data in the most cost-effective manner. Available access tiers include:
- **Hot** - Optimized for storing data that is accessed frequently.
- **Cool** - Optimized for storing data that is infrequently accessed and stored for at least 30 days.
-- **Archive** - Optimized for storing data that is rarely accessed and stored for at least 180 days with flexible latency requirements (on the order of hours).
+- **Archive** - Optimized for storing data that is rarely accessed and stored for at least 180 days with flexible latency requirements, on the order of hours.
The following considerations apply to the different access tiers:

-- Only the hot and cool access tiers can be set at the account level. The archive access tier isn't available at the account level.
-- Hot, cool, and archive tiers can be set at the blob level during upload or after upload.
-- Data in the cool access tier can tolerate slightly lower availability, but still requires high durability, retrieval latency, and throughput characteristics similar to hot data. For cool data, a slightly lower availability service-level agreement (SLA) and higher access costs compared to hot data are acceptable trade-offs for lower storage costs.
-- Archive storage stores data offline and offers the lowest storage costs but also the highest data rehydrate and access costs.
+- The access tier can be set on a blob during or after upload.
+- Only the hot and cool access tiers can be set at the account level. The archive access tier can only be set at the blob level.
+- Data in the cool access tier has slightly lower availability, but still has high durability, retrieval latency, and throughput characteristics similar to hot data. For cool data, slightly lower availability and higher access costs are acceptable trade-offs for lower overall storage costs compared to hot data. For more information, see [SLA for storage](https://azure.microsoft.com/support/legal/sla/storage/v1_5/).
+- Data in the archive access tier is stored offline. The archive tier offers the lowest storage costs but also the highest access costs and latency.
+- The hot and cool tiers support all redundancy options. The archive tier supports only LRS, GRS, and RA-GRS.
+- Data storage limits are set at the account level and not per access tier. You can choose to use all of your limit in one tier or across all three tiers.
Data stored in the cloud grows at an exponential pace. To manage costs for your expanding storage needs, it's helpful to organize your data based on attributes like frequency-of-access and planned retention period to optimize costs. Data stored in the cloud can be different based on how it's generated, processed, and accessed over its lifetime. Some data is actively accessed and modified throughout its lifetime. Some data is accessed frequently early in its lifetime, with access dropping drastically as the data ages. Some data remains idle in the cloud and is rarely, if ever, accessed after it's stored. Each of these data access scenarios benefits from a different access tier that is optimized for a particular access pattern. With hot, cool, and archive access tiers, Azure Blob Storage addresses this need for differentiated access tiers with separate pricing models.
+The following tools and client libraries all support blob-level tiering and archive storage.
+
+- Azure portal
+- PowerShell
+- Azure CLI tools
+- .NET client library
+- Java client library
+- Python client library
+- Node.js client library
+
[!INCLUDE [storage-multi-protocol-access-preview](../../../includes/storage-multi-protocol-access-preview.md)]

## Storage accounts that support tiering
-Object storage data tiering between hot, cool, and archive is only supported in Blob Storage and General Purpose v2 (GPv2) accounts. General Purpose v1 (GPv1) accounts don't support tiering. Customers can easily convert their existing GPv1 or Blob Storage accounts to GPv2 accounts through the Azure portal. GPv2 provides new pricing and features for blobs, files, and queues. Some features and prices cuts are only offered in GPv2 accounts. Evaluate using GPv2 accounts after comprehensively reviewing pricing. Some workloads can be more expensive on GPv2 than GPv1. For more information, see [Azure storage account overview](../common/storage-account-overview.md).
+Object storage data tiering between hot, cool, and archive is supported in Blob Storage and General Purpose v2 (GPv2) accounts. General Purpose v1 (GPv1) accounts don't support tiering. You can easily convert your existing GPv1 or Blob Storage accounts to GPv2 accounts through the Azure portal. GPv2 provides new pricing and features for blobs, files, and queues. Some features and price cuts are only offered in GPv2 accounts. Some workloads can be more expensive on GPv2 than GPv1. For more information, see [Azure storage account overview](../common/storage-account-overview.md).
+
+Blob Storage and GPv2 accounts expose the **Access Tier** attribute at the account level. This attribute allows you to specify the default access tier for any blob that doesn't have it explicitly set at the object level. For objects with the tier explicitly set, the account tier won't apply. The archive tier can be applied only at the object level. You can switch between access tiers at any time.
+
+Use GPv2 instead of Blob Storage accounts for tiering. GPv2 supports all the features that Blob Storage accounts support, plus a lot more. Pricing between Blob Storage and GPv2 is almost identical, but some new features and price cuts are only available on GPv2 accounts.
-Blob Storage and GPv2 accounts expose the **Access Tier** attribute at the account level. This attribute allows you to specify the default access tier for any blob that doesn't have it explicit set at the object level. For objects with the tier set at the object level, the account tier won't apply. The archive tier can be applied only at the object level. You can switch between these access tiers at any time.
+Pricing structure between GPv1 and GPv2 accounts is different and customers should carefully evaluate both before deciding to use GPv2 accounts. You can easily convert an existing Blob Storage or GPv1 account to GPv2 through a simple one-click process. For more information, see [Azure storage account overview](../common/storage-account-overview.md).
## Hot access tier

The hot access tier has higher storage costs than cool and archive tiers, but the lowest access costs. Example usage scenarios for the hot access tier include:

-- Data that's in active use or expected to be accessed (read from and written to) frequently.
-- Data that's staged for processing and eventual migration to the cool access tier.
+- Data that's in active use or is expected to be read from and written to frequently
+- Data that's staged for processing and eventual migration to the cool access tier
## Cool access tier The cool access tier has lower storage costs and higher access costs compared to hot storage. This tier is intended for data that will remain in the cool tier for at least 30 days. Example usage scenarios for the cool access tier include: -- Short-term backup and disaster recovery datasets.-- Older media content not viewed frequently anymore but is expected to be available immediately when accessed.-- Large data sets that need to be stored cost effectively while more data is being gathered for future processing. (*For example*, long-term storage of scientific data, raw telemetry data from a manufacturing facility)
+- Short-term backup and disaster recovery
+- Older data not used frequently but expected to be available immediately when accessed
+- Large data sets that need to be stored cost effectively, while more data is being gathered for future processing
## Archive access tier
-The archive access tier has the lowest storage cost. But it has higher data retrieval costs compared to the hot and cool tiers. Data must remain in the archive tier for at least 180 days or be subject to an early deletion charge. Data in the archive tier can take several hours to retrieve depending on the priority of the rehydration. For small objects, a high priority rehydrate may retrieve the object from archive in under 1 hour. See [Rehydrate blob data from the archive tier](storage-blob-rehydration.md) to learn more.
+The archive access tier has the lowest storage cost but higher data retrieval costs compared to hot and cool tiers. Data must remain in the archive tier for at least 180 days or be subject to an early deletion charge. Data in the archive tier can take several hours to retrieve depending on the specified rehydration priority. For small objects, a high priority rehydrate may retrieve the object from archive in under an hour. See [Rehydrate blob data from the archive tier](storage-blob-rehydration.md) to learn more.
-While a blob is in archive storage, the blob data is offline and can't be read, overwritten, or modified. To read or download a blob in archive, you must first rehydrate it to an online tier. You can't take snapshots of a blob in archive storage. However, the blob metadata remains online and available, allowing you to list the blob, its properties, metadata, and blob index tags. Setting or modifying the blob metadata while in archive is not allowed; however you may set and modify the blob index tags. For blobs in archive, the only valid operations are GetBlobProperties, GetBlobMetadata, SetBlobTags, GetBlobTags, FindBlobsByTags, ListBlobs, SetBlobTier, CopyBlob, and DeleteBlob.
+While a blob is in archive storage, the blob data is offline and can't be read or modified. To read or download a blob in archive, you must first rehydrate it to an online tier. You can't take snapshots of a blob in archive storage. However, the blob metadata remains online and available, allowing you to list the blob, its properties, metadata, and blob index tags. Setting or modifying the blob metadata while in archive isn't allowed. However, you can set and modify the blob index tags. For blobs in archive, the only valid operations are [Get Blob Properties](/rest/api/storageservices/get-blob-properties), [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata), [Set Blob Tags](/rest/api/storageservices/set-blob-tags), [Get Blob Tags](/rest/api/storageservices/get-blob-tags), [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags), [List Blobs](/rest/api/storageservices/list-blobs), [Set Blob Tier](/rest/api/storageservices/set-blob-tier), [Copy Blob](/rest/api/storageservices/copy-blob), and [Delete Blob](/rest/api/storageservices/delete-blob).
Example usage scenarios for the archive access tier include:

- Long-term backup, secondary backup, and archival datasets
-- Original (raw) data that must be preserved, even after it has been processed into final usable form.
-- Compliance and archival data that needs to be stored for a long time and is hardly ever accessed.
+- Original (raw) data that must be preserved, even after it has been processed into final usable form
+- Compliance and archival data that needs to be stored for a long time and is hardly ever accessed
> [!NOTE]
-> The archive tier is not currently supported for ZRS, GZRS, or RA-GZRS accounts.
+> The archive tier is not supported for ZRS, GZRS, or RA-GZRS accounts.
## Account-level tiering
@@ -74,35 +90,39 @@ Blobs in all three access tiers can coexist within the same account. Any blob th
Changing the account access tier applies to all _access tier inferred_ objects stored in the account that don't have an explicit tier set. If you toggle the account tier from hot to cool, you'll be charged for write operations (per 10,000) for all blobs without a set tier in GPv2 accounts only. There's no charge for this change in Blob Storage accounts. You'll be charged for both read operations (per 10,000) and data retrieval (per GB) if you toggle from cool to hot in Blob Storage or GPv2 accounts.
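To make the billing impact of toggling the default account tier concrete, here is a minimal sketch in Python. The per-operation and per-GB prices are hypothetical placeholders, not actual Azure rates; only the charging rules come from the text above.

```python
# Sketch: estimate one-time charges for toggling the default account access tier.
# Prices below are hypothetical placeholders, not actual Azure rates.

WRITE_OP_PRICE_PER_10K = 0.10   # cool-tier write operations, per 10,000
READ_OP_PRICE_PER_10K = 0.01    # read operations, per 10,000
RETRIEVAL_PRICE_PER_GB = 0.01   # data retrieval, per GB

def account_tier_toggle_charge(direction, inferred_blob_count, inferred_gb, is_gpv2):
    """Estimate the one-time charge for changing the default account tier.

    Only blobs without an explicit tier ("access tier inferred") are affected.
    """
    if direction == "hot->cool":
        # Billed as write operations, and only in GPv2 accounts;
        # Blob Storage accounts are not charged for this change.
        if not is_gpv2:
            return 0.0
        return (inferred_blob_count / 10_000) * WRITE_OP_PRICE_PER_10K
    elif direction == "cool->hot":
        # Billed as read operations plus per-GB data retrieval,
        # in both Blob Storage and GPv2 accounts.
        ops = (inferred_blob_count / 10_000) * READ_OP_PRICE_PER_10K
        retrieval = inferred_gb * RETRIEVAL_PRICE_PER_GB
        return ops + retrieval
    raise ValueError(f"unsupported direction: {direction}")

# 1 million access-tier-inferred blobs totaling 500 GB
print(round(account_tier_toggle_charge("hot->cool", 1_000_000, 500, True), 2))
print(account_tier_toggle_charge("hot->cool", 1_000_000, 500, False))
print(round(account_tier_toggle_charge("cool->hot", 1_000_000, 500, True), 2))
```

Note the asymmetry: hot-to-cool is free on Blob Storage accounts, while cool-to-hot is charged on both account kinds.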
+Only hot and cool access tiers can be set as the default account access tier. Archive can only be set at the object level. On blob upload, you can specify the access tier of your choice to be hot, cool, or archive regardless of the default account tier. This functionality allows you to write data directly into the archive tier to realize cost-savings from the moment you create data in blob storage.
+ ## Blob-level tiering
-Blob-level tiering allows you to upload data to the access tier of your choice using the [Put Blob](/rest/api/storageservices/put-blob) or [Put Block List](/rest/api/storageservices/put-block-list) operations and change the tier of your data at the object level using the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation or [Lifecycle management](#blob-lifecycle-management) feature. You can upload data to your required access tier then easily change the blob access tier among the hot, cool, or archive tiers as usage patterns change, without having to move data between accounts. All tier change requests happen immediately and tier changes between hot and cool are instantaneous. However, rehydrating a blob from archive can take several hours.
+Blob-level tiering allows you to upload data to the access tier of your choice using the [Put Blob](/rest/api/storageservices/put-blob) or [Put Block List](/rest/api/storageservices/put-block-list) operations and change the tier of your data at the object level using the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation or [lifecycle management](#blob-lifecycle-management) feature. You can upload data to your required access tier then easily change the blob access tier among the hot, cool, or archive tiers as usage patterns change, without having to move data between accounts. All tier change requests happen immediately and tier changes between hot and cool are instantaneous. Rehydrating a blob from the archive tier can take several hours.
-The time of the last blob tier change is exposed via the **Access Tier Change Time** blob property. When overwriting a blob in the hot or cool tier, the newly created blob inherits the tier of the blob that was overwritten unless the new blob access tier is explicitly set on creation. If a blob is in the archive tier, it can't be overwritten, so uploading the same blob isn't permitted in this scenario.
+The time of the last blob tier change is exposed via the **Access Tier Change Time** blob property. When overwriting a blob in the hot or cool tier, the newly created blob inherits the tier of the blob that was overwritten unless the new blob access tier is explicitly set on creation. If a blob is in the archive tier, it can't be overwritten, so uploading the same blob isn't permitted in this scenario.
> [!NOTE]
> Archive storage and blob-level tiering only support block blobs.

### Blob lifecycle management
-Blob Storage lifecycle management offers a rich, rule-based policy that you can use to transition your data to the best access tier and to expire data at the end of its lifecycle. See [Manage the Azure Blob Storage lifecycle](storage-lifecycle-management-concepts.md) to learn more.
+Blob storage lifecycle management offers a rich, rule-based policy that you can use to transition your data to the best access tier and to expire data at the end of its lifecycle. See [Optimize costs by automating Azure Blob Storage access tiers](storage-lifecycle-management-concepts.md) to learn more.
> [!NOTE]
> Data stored in a block blob storage account (Premium performance) cannot currently be tiered to hot, cool, or archive using [Set Blob Tier](/rest/api/storageservices/set-blob-tier) or using Azure Blob Storage lifecycle management.
> To move data, you must synchronously copy blobs from the block blob storage account to the hot access tier in a different account using the [Put Block From URL API](/rest/api/storageservices/put-block-from-url) or a version of AzCopy that supports this API.
-> The *Put Block From URL* API synchronously copies data on the server, meaning the call completes only once all the data is moved from the original server location to the destination location.
+> The **Put Block From URL** API synchronously copies data on the server, meaning the call completes only once all the data is moved from the original server location to the destination location.
### Blob-level tiering billing
-When a blob is uploaded or moved to the hot, cool, or archive tier, it is charged at the corresponding rate immediately upon tier change.
+When a blob is uploaded or moved between tiers, it is charged at the corresponding rate immediately upon upload or tier change.
When a blob is moved to a cooler tier (hot->cool, hot->archive, or cool->archive), the operation is billed as a write operation to the destination tier, where the write operation (per 10,000) and data write (per GB) charges of the destination tier apply.
-When a blob is moved to a warmer tier (archive->cool, archive->hot, or cool->hot), the operation is billed as a read from the source tier, where the read operation (per 10,000) and data retrieval (per GB) charges of the source tier apply. Early deletion charges for any blob moved out of the cool or archive tier may apply as well. [Rehydrating data from archive](storage-blob-rehydration.md) takes time and data will be charged archive prices until the data is restored online and blob tier changes to hot or cool. The following table summarizes how tier changes are billed:
+When a blob is moved to a warmer tier (archive->cool, archive->hot, or cool->hot), the operation is billed as a read from the source tier, where the read operation (per 10,000) and data retrieval (per GB) charges of the source tier apply. [Early deletion](#cool-and-archive-early-deletion) charges for any blob moved out of the cool or archive tier may apply as well. [Rehydrating data from the archive tier](storage-blob-rehydration.md) takes time and data will be charged archive prices until the data is restored online and the blob tier changes to hot or cool.
-| | **Write Charges (Operation + Access)** | **Read Charges (Operation + Access)**
+The following table summarizes how tier changes are billed.
+
+| | **Write charges (operation + access)** | **Read charges (operation + access)** |
| ---- | ----- | ----- |
-| **SetBlobTier Direction** | hot->cool,<br> hot->archive,<br> cool->archive | archive->cool,<br> archive->hot,<br> cool->hot
+| **Set Blob Tier** | hot -> cool<br> hot -> archive<br> cool -> archive | archive -> cool<br> archive -> hot<br> cool -> hot
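The rule the table encodes can be sketched as a small classifier. This is an illustrative Python sketch; the tier names and the cooler/warmer ordering follow the text above.

```python
# Sketch: classify how a Set Blob Tier change is billed, per the table above.
# Moving to a cooler tier bills as a write to the destination tier;
# moving to a warmer tier bills as a read from the source tier.

TIER_ORDER = {"hot": 0, "cool": 1, "archive": 2}  # warm -> cold

def tier_change_billing(source, destination):
    src, dst = TIER_ORDER[source], TIER_ORDER[destination]
    if dst > src:
        return f"write charges against the {destination} tier"
    if dst < src:
        return f"read charges against the {source} tier"
    return "no tier change"

print(tier_change_billing("hot", "archive"))   # write charges against the archive tier
print(tier_change_billing("archive", "cool"))  # read charges against the archive tier
```

The asymmetry (write to destination vs. read from source) is why rehydrating from archive is billed at archive retrieval rates.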
### Cool and archive early deletion
@@ -113,7 +133,7 @@ There are some details when moving between the cool and archive tiers:
1. If a blob is inferred as cool based on the storage account's default access tier and the blob is moved to archive, there is no early deletion charge.
1. If a blob is explicitly moved to the cool tier and then moved to archive, the early deletion charge applies.
-You may calculate the early deletion by using the blob property, **Last-Modified**, if there has been no access tier changes. Otherwise you can use when the access tier was last modified to cool or archive by viewing the blob property: **access-tier-change-time**. For more information on blob properties, see [Get Blob Properties](/rest/api/storageservices/get-blob-properties).
+Calculate the early deletion time by using the **Last-Modified** blob property if there have been no access tier changes. Otherwise, determine when the access tier was last changed to cool or archive by viewing the **access-tier-change-time** blob property. For more information on blob properties, see [Get Blob Properties](/rest/api/storageservices/get-blob-properties).
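That calculation can be sketched in Python. The 30-day and 180-day minimums come from the text; the assumption that the charge is prorated linearly over the unserved days is an illustration, not the exact billing formula.

```python
from datetime import datetime, timedelta

# Minimum storage durations: cool (GPv2 accounts) and archive.
MIN_DAYS = {"cool": 30, "archive": 180}

def early_deletion_days_charged(tier, tier_set_time, deletion_time):
    """Return how many days of the minimum storage duration go unserved.

    tier_set_time should be the access-tier-change-time property if the tier
    was ever changed; otherwise use the Last-Modified property.
    """
    days_held = (deletion_time - tier_set_time).days
    return max(0, MIN_DAYS[tier] - days_held)

set_time = datetime(2021, 1, 1)
# Deleting an archive blob after 100 days leaves 80 unserved days to charge.
print(early_deletion_days_charged("archive", set_time, set_time + timedelta(days=100)))  # 80
# A cool blob held 45 days incurs no early deletion charge.
print(early_deletion_days_charged("cool", set_time, set_time + timedelta(days=45)))  # 0
```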
## Comparing block blob storage options
@@ -124,174 +144,40 @@ The following table shows a comparison of premium performance block blob storage
| **Availability** | 99.9% | 99.9% | 99% | Offline |
| **Availability** <br> **(RA-GRS reads)** | N/A | 99.99% | 99.9% | Offline |
| **Usage charges** | Higher storage costs, lower access, and transaction costs | Higher storage costs, lower access, and transaction costs | Lower storage costs, higher access, and transaction costs | Lowest storage costs, highest access, and transaction costs |
-| **Minimum object size** | N/A | N/A | N/A | N/A |
| **Minimum storage duration** | N/A | N/A | 30 days<sup>1</sup> | 180 days |
| **Latency** <br> **(Time to first byte)** | Single-digit milliseconds | milliseconds | milliseconds | hours<sup>2</sup> |

<sup>1</sup> Objects in the cool tier on GPv2 accounts have a minimum retention duration of 30 days. Blob Storage accounts don't have a minimum retention duration for the cool tier.
-<sup>2</sup> Archive Storage currently supports 2 rehydrate priorities, High and Standard, that offers different retrieval latencies. For more information, see [Rehydrate blob data from the archive tier](storage-blob-rehydration.md).
+<sup>2</sup> Archive Storage currently supports two rehydration priorities, high and standard, offering different retrieval latencies and costs. For more information, see [Rehydrate blob data from the archive tier](storage-blob-rehydration.md).
> [!NOTE]
> Blob Storage accounts support the same performance and scalability targets as general-purpose v2 storage accounts. For more information, see [Scalability and performance targets for Blob Storage](scalability-targets.md).
-## Quickstart scenarios
-
-In this section, the following scenarios are demonstrated using the Azure portal and PowerShell:
--- How to change the default account access tier of a GPv2 or Blob Storage account.-- How to change the tier of a blob in a GPv2 or Blob Storage account.-
-### Change the default account access tier of a GPv2 or Blob Storage account
-
-# [Portal](#tab/azure-portal)
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. In the Azure portal, search for and select **All Resources**.
-
-1. Select your storage account.
-
-1. In **Settings**, select **Configuration** to view and change the account configuration.
-
-1. Select the right access tier for your needs: Set the **Access tier** to either **Cool** or **Hot**.
-
-1. Click **Save** at the top.
-
-![Change default account tier in Azure portal](media/storage-tiers/account-tier.png)
-
-# [PowerShell](#tab/azure-powershell)
-The following PowerShell script can be used to change the account tier. The `$rgName` variable must be initialized with your resource group name. The `$accountName` variable must be initialized with your storage account name.
-```powershell
-#Initialize the following with your resource group and storage account names
-$rgName = ""
-$accountName = ""
-
-#Change the storage account tier to hot
-Set-AzStorageAccount -ResourceGroupName $rgName -Name $accountName -AccessTier Hot
-```
-
-### Change the tier of a blob in a GPv2 or Blob Storage account
-# [Portal](#tab/azure-portal)
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. In the Azure portal, search for and select **All Resources**.
-
-1. Select your storage account.
-
-1. Select your container and then select your blob.
-
-1. In the **Blob properties**, select **Change tier**.
-
-1. Select the **Hot**, **Cool**, or **Archive** access tier. If your blob is currently in archive and you want to rehydrate to an online tier, you may also select a Rehydrate Priority of **Standard** or **High**.
-
-1. Select **Save** at the bottom.
-
-![Change blob tier in Azure portal](media/storage-tiers/blob-access-tier.png)
-
-# [PowerShell](#tab/azure-powershell)
-The following PowerShell script can be used to change the blob tier. The `$rgName` variable must be initialized with your resource group name. The `$accountName` variable must be initialized with your storage account name. The `$containerName` variable must be initialized with your container name. The `$blobName` variable must be initialized with your blob name.
-```powershell
-#Initialize the following with your resource group, storage account, container, and blob names
-$rgName = ""
-$accountName = ""
-$containerName = ""
-$blobName = ""
-
-#Select the storage account and get the context
-$storageAccount = Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName
-$ctx = $storageAccount.Context
-
-#Select the blob from a container
-$blob = Get-AzStorageBlob -Container $containerName -Blob $blobName -Context $ctx
-
-#Change the blob's access tier to archive
-$blob.ICloudBlob.SetStandardBlobTier("Archive")
-```
- ## Pricing and billing
-All storage accounts use a pricing model for Block blob storage based on the tier of each blob. Keep in mind the following billing considerations:
+All storage accounts use a pricing model for block blob storage based on the tier of each blob. Keep in mind the following billing considerations:
- **Storage costs**: In addition to the amount of data stored, the cost of storing data varies depending on the access tier. The per-gigabyte cost decreases as the tier gets cooler.
- **Data access costs**: Data access charges increase as the tier gets cooler. For data in the cool and archive access tier, you're charged a per-gigabyte data access charge for reads.
- **Transaction costs**: There's a per-transaction charge for all tiers that increases as the tier gets cooler.
-- **Geo-Replication data transfer costs**: This charge only applies to accounts with geo-replication configured, including GRS and RA-GRS. Geo-replication data transfer incurs a per-gigabyte charge.
+- **Geo-replication data transfer costs**: This charge only applies to accounts with geo-replication configured, including GRS and RA-GRS. Geo-replication data transfer incurs a per-gigabyte charge.
- **Outbound data transfer costs**: Outbound data transfers (data that is transferred out of an Azure region) incur billing for bandwidth usage on a per-gigabyte basis, consistent with general-purpose storage accounts.
-- **Changing the access tier**: Changing the account access tier will result in tier change charges for _access tier inferred_ blobs stored in the account that don't have an explicit tier set. For information on changing the access tier for a single blob, refer to [Blob-level tiering billing](#blob-level-tiering-billing).
+- **Changing the access tier**: Changing the account access tier will result in tier change charges for all blobs that don't have an explicit tier set. For information on changing the access tier for a single blob, see [Blob-level tiering billing](#blob-level-tiering-billing).
- Changing the access tier for a blob when versioning is enabled, or if the blob has snapshots, may result in additional charges. For more information about how you are billed when blob versioning is enabled and you explicitly change a blob's tier, see [Pricing and billing](versioning-overview.md#pricing-and-billing) in the documentation for blob versioning. For more information about how you are billed when a blob has snapshots and you explicitly change the blob's tier, see [Pricing and billing](snapshots-overview.md#pricing-and-billing) in the documentation for blob snapshots.
+ Changing the access tier for a blob when versioning is enabled, or if the blob has snapshots, may result in additional charges. For information about blobs with versioning enabled, see [Pricing and billing](versioning-overview.md#pricing-and-billing) in the blob versioning documentation. For information about blobs with snapshots, see [Pricing and billing](snapshots-overview.md#pricing-and-billing) in the blob snapshots documentation.
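As an illustration of how the billing components above combine, here is a toy monthly cost model in Python. All prices are hypothetical placeholders, not actual Azure rates; the point is only the structure of the charges (storage decreasing and access/transaction increasing as the tier gets cooler).

```python
# Toy monthly cost model combining the billing components listed above.
# All prices are hypothetical placeholders, not actual Azure rates.

PRICES = {
    #          storage $/GB-month, retrieval $/GB read, $ per 10k transactions
    "hot":     {"storage": 0.018, "retrieval": 0.000, "tx": 0.004},
    "cool":    {"storage": 0.010, "retrieval": 0.010, "tx": 0.010},
    "archive": {"storage": 0.001, "retrieval": 0.020, "tx": 0.050},
}

def monthly_cost(tier, stored_gb, read_gb, transactions):
    p = PRICES[tier]
    return (stored_gb * p["storage"]
            + read_gb * p["retrieval"]
            + (transactions / 10_000) * p["tx"])

# Rarely accessed data: the cooler tier wins despite higher access charges.
print(round(monthly_cost("hot", 1000, 10, 10_000), 3))
print(round(monthly_cost("cool", 1000, 10, 10_000), 3))
```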
> [!NOTE]
-> For more information about pricing for Block blobs, see [Azure Storage Pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) page. For more information on outbound data transfer charges, see [Data Transfers Pricing Details](https://azure.microsoft.com/pricing/details/data-transfers/) page.
-
-## FAQ
-
-**Should I use Blob Storage or GPv2 accounts if I want to tier my data?**
-
-We recommend you use GPv2 instead of Blob Storage accounts for tiering. GPv2 support all the features that Blob Storage accounts support plus a lot more. Pricing between Blob Storage and GPv2 is almost identical, but some new features and price cuts will only be available on GPv2 accounts. GPv1 accounts don't support tiering.
-
-Pricing structure between GPv1 and GPv2 accounts is different and customers should carefully evaluate both before deciding to use GPv2 accounts. You can easily convert an existing Blob Storage or GPv1 account to GPv2 through a simple one-click process. For more information, see [Azure storage account overview](../common/storage-account-overview.md).
-
-**Can I store objects in all three (hot, cool, and archive) access tiers in the same account?**
-
-Yes. The **Access Tier** attribute set at the account level is the default account tier that applies to all objects in that account without an explicit set tier. Blob-level tiering allows you to set the access tier on at the object level regardless of what the access tier setting on the account is. Blobs in any of the three access tiers (hot, cool, or archive) may exist within the same account.
-
-**Can I change the default access tier of my Blob or GPv2 storage account?**
-
-Yes, you can change the default account tier by setting the **Access tier** attribute on the storage account. Changing the account tier applies to all objects stored in the account that don't have an explicit tier set (for example, **Hot (inferred)** or **Cool (inferred)**). Toggling the account tier from hot to cool incurs write operations (per 10,000) for all blobs without a set tier in GPv2 accounts only and toggling from cool to hot incurs both read operations (per 10,000) and data retrieval (per GB) charges for all blobs in Blob Storage and GPv2 accounts.
-
-**Can I set my default account access tier to archive?**
-
-No. Only hot and cool access tiers may be set as the default account access tier. Archive can only be set at the object level. On blob upload, You specify the access tier of your choice to be hot, cool, or archive regardless of the default account tier. This functionality allows you to write data directly into the archive tier to realize cost-savings from the moment you create data in blob storage.
-
-**In which regions are the hot, cool, and archive access tiers available in?**
-
-The hot and cool access tiers along with blob-level tiering are available in all regions. Archive storage will initially only be available in select regions. For a complete list, see [Azure products available by region](https://azure.microsoft.com/regions/services/).
-
-**Which redundancy options are supported for the hot, cool, and archive access tiers?**
-
-The hot and cool tiers support all redundancy options. The archive tier supports only LRS, GRS, and RA-GRS. ZRS, GZRS, and RA-GZRS are not supported for the archive tier.
-
-**Do the blobs in the cool access tier behave differently than the ones in the hot access tier?**
-
-Blobs in the hot access tier have the same latency as blobs in GPv1, GPv2, and Blob Storage accounts. Blobs in the cool access tier have a similar latency (in milliseconds) as blobs in GPv1, GPv2, and Blob Storage accounts. Blobs in the archive access tier have several hours of latency in GPv1, GPv2, and Blob Storage accounts.
-
-Blobs in the cool access tier have a slightly lower availability service level (SLA) than the blobs stored in the hot access tier. For more information, see [SLA for storage](https://azure.microsoft.com/support/legal/sla/storage/v1_5/).
-
-**Are the operations among the hot, cool, and archive tiers the same?**
-
-All operations between hot and cool are 100% consistent. All valid archive operations including GetBlobProperties, GetBlobMetadata, SetBlobTags, GetBlobTags, FindBlobsByTags, ListBlobs, SetBlobTier, and DeleteBlob are 100% consistent with hot and cool. Blob data can't be read or modified while in the archive tier until rehydrated; only blob metadata read operations are supported while in archive. However blob index tags can be read, set, or modified while in archive.
-
-**When rehydrating a blob from archive tier to the hot or cool tier, how will I know when rehydration is complete?**
-
-During rehydration, you may use the get blob properties operation to poll the **Archive Status** attribute and confirm when the tier change is complete. The status reads "rehydrate-pending-to-hot" or "rehydrate-pending-to-cool" depending on the destination tier. Upon completion, the archive status property is removed, and the **Access Tier** blob property reflects the new hot or cool tier. See [Rehydrate blob data from the archive tier](storage-blob-rehydration.md) to learn more.
-
-**After setting the tier of a blob, when will I start getting billed at the appropriate rate?**
-
-Each blob is always billed according to the tier indicated by the blob's **Access Tier** property. When you set a new online tier for a blob, the **Access Tier** property immediately reflects the new tier for all transitions. However, rehydrating a blob from the offline archive tier to a hot or cool tier can take several hours. In this case, you're billed at archive rates until rehydration is complete, at which point the **Access Tier** property reflects the new tier. Once rehydrated to the online tier, you're billed for that blob at the hot or cool rate.
-
-**How do I determine if I'll incur an early deletion charge when deleting or moving a blob out of the cool or archive tier?**
-
-Any blob that is deleted or moved out of the cool (GPv2 accounts only) or archive tier before 30 days and 180 days respectively will incur a prorated early deletion charge. You can determine how long a blob has been in the cool or archive tier by checking the **Access Tier Change Time** blob property, which provides a stamp of the last tier change. If the blob's tier was never changed, you can check the **Last Modified** blob property. For more information, see [Cool and archive early deletion](#cool-and-archive-early-deletion).
-
-**Which Azure tools and SDKs support blob-level tiering and archive storage?**
-
-Azure portal, PowerShell, and CLI tools and .NET, Java, Python, and Node.js client libraries all support blob-level tiering and archive storage.
+> For more information about pricing for block blobs, see [Block blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs/). For more information on outbound data transfer charges, see [Bandwidth pricing details](https://azure.microsoft.com/pricing/details/bandwidth/).
-**How much data can I store in the hot, cool, and archive tiers?**
+## Availability
-Data storage along with other limits are set at the account level and not per access tier. You can choose to use all of your limit in one tier or across all three tiers. For more information, see [Scalability and performance targets for standard storage accounts](../common/scalability-targets-standard-account.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
+Different access tiers, along with blob-level tiering, are available in select regions. For a complete list, see [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=storage).
## Next steps
-Evaluate hot, cool, and archive in GPv2 and Blob Storage accounts
+Learn how to manage blobs and accounts across access tiers.
-- [Check availability of hot, cool, and archive by region](https://azure.microsoft.com/regions/#services)
-- [Manage the Azure Blob Storage lifecycle](storage-lifecycle-management-concepts.md)
-- [Learn about rehydrating blob data from the archive tier](storage-blob-rehydration.md)
-- [Determine if premium performance would benefit your app](storage-blob-performance-tiers.md)
-- [Evaluate usage of your current storage accounts by enabling Azure Storage metrics](./monitor-blob-storage.md)
-- [Check hot, cool, and archive pricing in Blob Storage and GPv2 accounts by region](https://azure.microsoft.com/pricing/details/storage/)
-- [Check data transfers pricing](https://azure.microsoft.com/pricing/details/data-transfers/)
\ No newline at end of file
+- [How to manage the tier of a blob in an Azure Storage account](manage-access-tier.md)
+- [How to manage the default account access tier of an Azure Storage account](../common/manage-account-default-access-tier.md)
+- [Optimize costs by automating Azure Blob Storage access tiers](storage-lifecycle-management-concepts.md)
storage https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-static-site-github-actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-blobs-static-site-github-actions.md
@@ -6,7 +6,7 @@ ms.service: storage
ms.topic: how-to
ms.author: jukullam
ms.reviewer: dineshm
-ms.date: 09/11/2020
+ms.date: 01/11/2021
ms.subservice: blobs
ms.custom: devx-track-javascript, github-actions-azure, devx-track-azurecli
@@ -86,10 +86,10 @@ In the example above, replace the placeholders with your subscription ID and res
name: CI
on:
- push:
- branches: [ master ]
- pull_request:
- branches: [ master ]
+ push:
+ branches: [ master ]
+ pull_request:
+ branches: [ master ]
```

1. Rename your workflow `Blob storage website CI` and add the checkout and login actions. These actions will checkout your site code and authenticate with Azure using the `AZURE_CREDENTIALS` GitHub secret you created earlier.
@@ -98,10 +98,10 @@ In the example above, replace the placeholders with your subscription ID and res
name: Blob storage website CI
on:
- push:
- branches: [ master ]
- pull_request:
- branches: [ master ]
+ push:
+ branches: [ master ]
+ pull_request:
+ branches: [ master ]
jobs:
  build:
@@ -110,21 +110,21 @@ In the example above, replace the placeholders with your subscription ID and res
- uses: actions/checkout@v2
- uses: azure/login@v1
  with:
- creds: ${{ secrets.AZURE_CREDENTIALS }}
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
```

1. Use the Azure CLI action to upload your code to blob storage and to purge your CDN endpoint. For `az storage blob upload-batch`, replace the placeholder with your storage account name. The script will upload to the `$web` container. For `az cdn endpoint purge`, replace the placeholders with your CDN profile name, CDN endpoint name, and resource group.

```yaml
- name: Upload to blob storage
- uses: azure/CLI@v1
- with:
+ uses: azure/CLI@v1
+ with:
    azcliversion: 2.0.72
    inlineScript: |
      az storage blob upload-batch --account-name <STORAGE_ACCOUNT_NAME> -d '$web' -s .
- name: Purge CDN endpoint
- uses: azure/CLI@v1
- with:
+ uses: azure/CLI@v1
+ with:
    azcliversion: 2.0.72
    inlineScript: |
      az cdn endpoint purge --content-paths "/*" --profile-name "CDN_PROFILE_NAME" --name "CDN_ENDPOINT" --resource-group "RESOURCE_GROUP"
@@ -133,36 +133,37 @@ In the example above, replace the placeholders with your subscription ID and res
1. Complete your workflow by adding an action to logout of Azure. Here is the completed workflow. The file will appear in the `.github/workflows` folder of your repository.

   ```yaml
- name: Blob storage website CI
+ name: Blob storage website CI
on:
- push:
- branches: [ master ]
- pull_request:
- branches: [ master ]
+ push:
+ branches: [ master ]
+ pull_request:
+ branches: [ master ]
jobs:
- build:
+ build:
runs-on: ubuntu-latest
- steps:
+ steps:
- uses: actions/checkout@v2
- - name: Azure Login
- uses: azure/login@v1
- with:
- creds: ${{ secrets.AZURE_CREDENTIALS }}
- - name: Azure CLI script
- uses: azure/CLI@v1
- with:
+ - uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+
+ - name: Upload to blob storage
+ uses: azure/CLI@v1
+ with:
        azcliversion: 2.0.72
        inlineScript: |
          az storage blob upload-batch --account-name <STORAGE_ACCOUNT_NAME> -d '$web' -s .
- - name: Azure CLI script
- uses: azure/CLI@v1
- with:
+ - name: Purge CDN endpoint
+ uses: azure/CLI@v1
+ with:
        azcliversion: 2.0.72
        inlineScript: |
          az cdn endpoint purge --content-paths "/*" --profile-name "CDN_PROFILE_NAME" --name "CDN_ENDPOINT" --resource-group "RESOURCE_GROUP"
- # Azure logout
+
+ # Azure logout
- name: logout
  run: |
    az logout
storage https://docs.microsoft.com/en-us/azure/storage/common/customer-managed-keys-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/customer-managed-keys-overview.md
@@ -74,7 +74,7 @@ When you configure encryption with customer-managed keys, you have two options f
When the key version is explicitly specified, then you must manually update the storage account to use the new key version URI when a new version is created. To learn how to update the storage account to use a new version of the key, see [Configure encryption with customer-managed keys stored in Azure Key Vault](customer-managed-keys-configure-key-vault.md) or [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM (preview)](customer-managed-keys-configure-key-vault-hsm.md).
-Updating the key version for a customer-managed key does not trigger re-encryption of data in the storage account. There is no further action required from the user.
+When you update the key version, the protection of the root encryption key changes, but the data in your Azure Storage account is not re-encrypted. There is no further action required from the user.
> [!NOTE]
> To rotate a key, create a new version of the key in the key vault or managed HSM, according to your compliance policies. You can rotate your key manually or create a function to rotate it on a schedule.
storage https://docs.microsoft.com/en-us/azure/storage/common/manage-account-default-access-tier https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/manage-account-default-access-tier.md new file mode 100644
@@ -0,0 +1,63 @@
+---
+title: Manage the default access tier of an Azure Storage account
+description: Learn how to change the default access tier of a GPv2 or Blob Storage account
+author: mhopkins-msft
+
+ms.author: mhopkins
+ms.date: 01/11/2021
+ms.service: storage
+ms.subservice: common
+ms.topic: how-to
+ms.reviewer: klaasl
+---
+
+# Manage the default access tier of an Azure Storage account
+
+Each Azure Storage account has a default access tier, either hot or cool. You assign the access tier when you create a storage account. The default access tier is hot. Archive can't be set as the default account tier; it can only be set at the object level.
+
+You can change the default account tier by setting the **Access tier** attribute on the storage account. Changing the account tier applies to all objects stored in the account that don't have an explicit tier set. Toggling the account tier from hot to cool incurs write operations (per 10,000) for all blobs without a set tier, in GPv2 accounts only. Toggling from cool to hot incurs both read operations (per 10,000) and data retrieval (per GB) charges for all blobs in Blob Storage and GPv2 accounts.
+
+For blobs with the tier set at the object level, the account tier doesn't apply. The archive tier can only be applied at the object level. You can switch between access tiers at any time.
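The one-time charges described above can be estimated with a rough calculation. The per-operation and per-GB rates below are placeholders, not real prices; check the Azure Storage pricing page for your region:

```python
# Sketch: estimate the one-time charge for toggling the default account tier.
# All prices below are hypothetical placeholders for illustration only.
WRITE_OP_PRICE_PER_10K = 0.10   # assumed $ per 10,000 write operations
READ_OP_PRICE_PER_10K = 0.01    # assumed $ per 10,000 read operations
RETRIEVAL_PRICE_PER_GB = 0.01   # assumed $ per GB of data retrieved

def hot_to_cool_cost(untiered_blob_count):
    # Hot -> cool bills write operations for blobs without an explicit tier.
    return untiered_blob_count / 10_000 * WRITE_OP_PRICE_PER_10K

def cool_to_hot_cost(blob_count, total_gb):
    # Cool -> hot bills read operations plus data retrieval per GB.
    return (blob_count / 10_000 * READ_OP_PRICE_PER_10K
            + total_gb * RETRIEVAL_PRICE_PER_GB)
```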
+
+After a storage account has been created, you can change its default access tier by following the steps below.
+
+## Change the default account access tier of a GPv2 or Blob Storage account
+
+You can use the Azure portal or PowerShell to change the default access tier:
+
+# [Portal](#tab/portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the Azure portal, search for and select **All Resources**.
+
+1. Select your storage account.
+
+1. In **Settings**, select **Configuration** to view and change the account configuration.
+
+1. Set the **Access tier** to either **Cool** or **Hot**, depending on your needs.
+
+1. Click **Save** at the top.
+
+![Change default account tier in Azure portal](media/manage-account-default-access-tier/account-tier.png)
+
+# [PowerShell](#tab/powershell)
+
+The following PowerShell script can be used to change the account tier. The `$rgName` variable must be initialized with your resource group name. The `$accountName` variable must be initialized with your storage account name.
+
+```powershell
+#Initialize the following with your resource group and storage account names
+$rgName = ""
+$accountName = ""
+
+#Change the storage account tier to hot
+Set-AzStorageAccount -ResourceGroupName $rgName -Name $accountName -AccessTier Hot
+```
+
+---
+
+## Next steps
+
+- [How to manage the tier of a blob in an Azure Storage account](../blobs/manage-access-tier.md)
+- [Determine if premium performance would benefit your app](../blobs/storage-blob-performance-tiers.md)
+- [Evaluate usage of your current storage accounts by enabling Azure Storage metrics](../blobs/monitor-blob-storage.md)
storage https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-redundancy.md
@@ -7,7 +7,7 @@ author: tamram
ms.service: storage ms.topic: conceptual
-ms.date: 01/08/2021
+ms.date: 01/13/2021
ms.author: tamram ms.subservice: common ---
@@ -151,7 +151,7 @@ The following table describes key parameters for each redundancy option:
| Percent durability of objects over a given year | at least 99.999999999% (11 9's) | at least 99.9999999999% (12 9's) | at least 99.99999999999999% (16 9's) | at least 99.99999999999999% (16 9's) | | Availability for read requests | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) for GRS<br /><br />At least 99.99% (99.9% for cool access tier) for RA-GRS | At least 99.9% (99% for cool access tier) for GZRS<br /><br />At least 99.99% (99.9% for cool access tier) for RA-GZRS | | Availability for write requests | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) |
-| Number of copies of data maintained on separate nodes. | 3 | 3 | 6 | 6 |
+| Number of copies of data maintained on separate nodes | Three copies within a single region | Three copies across separate availability zones within a single region | Six copies total, including three in the primary region and three in the secondary region | Six copies total, including three across separate availability zones in the primary region and three locally redundant copies in the secondary region |
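As a back-of-the-envelope illustration of the durability figures in the table (not an SLA calculation), the number of nines can be converted into an expected annual loss rate:

```python
# Sketch: translate "N nines" of annual durability into an expected loss rate.
def annual_loss_probability(nines):
    # e.g. 11 nines -> 1 - 0.99999999999 = 1e-11 per object per year.
    return 10.0 ** -nines

def expected_objects_lost(object_count, nines):
    # Expected number of objects lost in a year, under independence assumptions.
    return object_count * annual_loss_probability(nines)
```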
### Durability and availability by outage scenario
storage https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-files-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-sync-files-troubleshoot.md
@@ -4,7 +4,7 @@ description: Troubleshoot common issues in a deployment on Azure File Sync, whic
author: jeffpatt24 ms.service: storage ms.topic: troubleshooting
-ms.date: 6/12/2020
+ms.date: 1/13/2021
ms.author: jeffpatt ms.subservice: files ---
@@ -195,10 +195,27 @@ On the server that is showing as "Appears offline" in the portal, look at Event
- If **GetNextJob completed with status: 0** is logged, the server can communicate with the Azure File Sync service. - Open Task Manager on the server and verify the Storage Sync Monitor (AzureStorageSyncMonitor.exe) process is running. If the process is not running, first try restarting the server. If restarting the server does not resolve the issue, upgrade to the latest Azure File Sync [agent version](./storage-files-release-notes.md). -- If **GetNextJob completed with status: -2134347756** is logged, the server is unable to communicate with the Azure File Sync service due to a firewall or proxy.
+- If **GetNextJob completed with status: -2134347756** is logged, the server is unable to communicate with the Azure File Sync service due to a firewall, proxy, or TLS cipher suite order configuration.
- If the server is behind a firewall, verify port 443 outbound is allowed. If the firewall restricts traffic to specific domains, confirm the domains listed in the Firewall [documentation](./storage-sync-files-firewall-and-proxy.md#firewall) are accessible. - If the server is behind a proxy, configure the machine-wide or app-specific proxy settings by following the steps in the Proxy [documentation](./storage-sync-files-firewall-and-proxy.md#proxy). - Use the Test-StorageSyncNetworkConnectivity cmdlet to check network connectivity to the service endpoints. To learn more, see [Test network connectivity to service endpoints](./storage-sync-files-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints).
+ - To add cipher suites on the server, use group policy or TLS cmdlets:
+ - To use group policy, see [Configuring TLS Cipher Suite Order by using Group Policy](https://docs.microsoft.com/windows-server/security/tls/manage-tls#configuring-tls-cipher-suite-order-by-using-group-policy).
+ - To use TLS cmdlets, see [Configuring TLS Cipher Suite Order by using TLS PowerShell Cmdlets](https://docs.microsoft.com/windows-server/security/tls/manage-tls#configuring-tls-cipher-suite-order-by-using-tls-powershell-cmdlets).
+
+ Azure File Sync currently supports the following cipher suites for TLS 1.2 protocol:
+ - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384_P384
+ - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256_P256
+ - TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384_P384
+ - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256_P256
+ - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256
+ - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256
+ - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P256
+ - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P256
+ - TLS_RSA_WITH_AES_256_GCM_SHA384
+ - TLS_RSA_WITH_AES_128_GCM_SHA256
+ - TLS_RSA_WITH_AES_256_CBC_SHA256
+ - TLS_RSA_WITH_AES_128_CBC_SHA256
- If **GetNextJob completed with status: -2134347764** is logged, the server is unable to communicate with the Azure File Sync service due to an expired or deleted certificate. - Run the following PowerShell command on the server to reset the certificate used for authentication:
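As a quick sanity check against the supported list above, a script can verify that a server's configured cipher suite order includes at least one suite the service accepts. The helper itself is illustrative; on Windows Server the actual configured order is read with the `Get-TlsCipherSuite` cmdlet:

```python
# Sketch: check a configured TLS 1.2 cipher suite order against the suites
# Azure File Sync supports (list taken from the guidance above).
SUPPORTED_SUITES = {
    "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384_P384",
    "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256_P256",
    "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384_P384",
    "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256_P256",
    "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256",
    "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256",
    "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P256",
    "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P256",
    "TLS_RSA_WITH_AES_256_GCM_SHA384",
    "TLS_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_RSA_WITH_AES_256_CBC_SHA256",
    "TLS_RSA_WITH_AES_128_CBC_SHA256",
}

def sync_can_negotiate(configured_order):
    # True if at least one configured suite is acceptable to the service.
    return any(suite in SUPPORTED_SUITES for suite in configured_order)
```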
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/security/synapse-workspace-managed-private-endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-workspace-managed-private-endpoints.md
@@ -5,7 +5,7 @@ author: RonyMSFT
ms.service: synapse-analytics ms.topic: overview ms.subservice: security
-ms.date: 10/16/2020
+ms.date: 01/12/2021
ms.author: ronytho ms.reviewer: jrasnick ---
@@ -16,13 +16,9 @@ This article will explain Managed private endpoints in Azure Synapse Analytics.
## Managed private endpoints
-Managed private endpoints are private endpoints created in the Managed workspace Microsoft Azure Virtual Network establishing a private link to Azure resources. Azure Synapse manages these private endpoints on your behalf.
+Managed private endpoints are private endpoints created in a Managed Virtual Network associated with your Azure Synapse workspace. Managed private endpoints establish a private link to Azure resources. Azure Synapse manages these private endpoints on your behalf. You can create Managed private endpoints from your Azure Synapse workspace to access Azure services (such as Azure Storage or Azure Cosmos DB) and Azure hosted customer/partner services.
-Azure Synapse supports private links. Private link enables you to access Azure services (such as Azure Storage and Azure Cosmos DB) and Azure hosted customer/partner services from your Azure Virtual Network securely.
-
-When you use a private link, traffic between your Virtual Network and workspace traverses entirely over the Microsoft backbone network. Private Link protects against data exfiltration risks. You establish a private link to a resource by creating a private endpoint.
-
-Private endpoint uses a private IP address from your Virtual Network to effectively bring the service into your Virtual Network. Private endpoints are mapped to a specific resource in Azure and not the entire service. Customers can limit connectivity to a specific resource approved by their organization.
+When you use Managed private endpoints, traffic between your Azure Synapse workspace and other Azure resources traverses entirely over the Microsoft backbone network. Managed private endpoints protect against data exfiltration. A Managed private endpoint uses a private IP address from your Managed Virtual Network to effectively bring the Azure service that your Azure Synapse workspace is communicating with into your Virtual Network. Managed private endpoints are mapped to a specific resource in Azure and not the entire service. Customers can limit connectivity to a specific resource approved by their organization.
Learn more about [private links and private endpoints](../../private-link/index.yml).
@@ -30,13 +26,10 @@ Learn more about [private links and private endpoints](../../private-link/index.
>Managed private endpoints are only supported in Azure Synapse workspaces with a Managed workspace Virtual Network. >[!NOTE]
->All outbound traffic from the Managed workspace Virtual Network except through Managed private endpoints will be blocked in the future. It's recommended that you create Managed private endpoints to connect to all your Azure data sources external to the workspace.
-
-A private endpoint connection is created in a "Pending" state when you create a Managed private endpoint in Azure Synapse. An approval workflow is started. The private link resource owner is responsible to approve or reject the connection.
+>When creating an Azure Synapse workspace, you can choose to associate a Managed Virtual Network with it. If you choose to have a Managed Virtual Network associated with your workspace, you can also choose to limit outbound traffic from your workspace to only approved targets. You must create Managed private endpoints to reach these targets.
-If the owner approves the connection, the private link is established. But, if the owner doesn't approve the connection, then the private link won't be established. In either case, the Managed private endpoint will be updated with the status of the connection.
-Only a Managed private endpoint in an approved state can send traffic to a given private link resource.
+A private endpoint connection is created in a "Pending" state when you create a Managed private endpoint in Azure Synapse. An approval workflow is started. The private link resource owner is responsible to approve or reject the connection. If the owner approves the connection, the private link is established. But, if the owner doesn't approve the connection, then the private link won't be established. In either case, the Managed private endpoint will be updated with the status of the connection. Only a Managed private endpoint in an approved state can be used to send traffic to the private link resource that is linked to the Managed private endpoint.
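The approval workflow above can be modeled as a tiny state machine. This is purely illustrative, not an SDK API; the class and method names are invented for the sketch:

```python
# Sketch: the managed private endpoint approval workflow described above,
# modeled as a minimal state machine (illustrative only, not a real API).
from enum import Enum

class ConnectionState(Enum):
    PENDING = "Pending"
    APPROVED = "Approved"
    REJECTED = "Rejected"

class ManagedPrivateEndpoint:
    def __init__(self, target_resource):
        self.target_resource = target_resource
        self.state = ConnectionState.PENDING  # created in a "Pending" state

    def review(self, approved):
        # The private link resource owner approves or rejects the connection.
        self.state = ConnectionState.APPROVED if approved else ConnectionState.REJECTED

    def can_send_traffic(self):
        # Only an endpoint in the approved state can send traffic
        # to the linked resource.
        return self.state is ConnectionState.APPROVED
```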
## Managed private endpoints for dedicated SQL pool and serverless SQL pool
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/best-practices-sql-on-demand https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/best-practices-sql-on-demand.md
@@ -20,9 +20,9 @@ In this article, you'll find a collection of best practices for using serverless
Serverless SQL pool allows you to query files in your Azure storage accounts. It doesn't have local storage or ingestion capabilities. So all files that the query targets are external to serverless SQL pool. Everything related to reading files from storage might have an impact on query performance.
-## Colocate your Azure storage account and serverless SQL pool
+## Colocate your storage and serverless SQL pool
-To minimize latency, colocate your Azure storage account and your serverless SQL pool endpoint. Storage accounts and endpoints provisioned during workspace creation are located in the same region.
+To minimize latency, colocate your Azure storage account or Azure Cosmos DB analytical storage and your serverless SQL pool endpoint. Storage accounts and endpoints provisioned during workspace creation are located in the same region.
For optimal performance, if you access other storage accounts with serverless SQL pool, make sure they're in the same region. If they aren't in the same region, there will be increased latency for the data's network transfer between the remote region and the endpoint's region.
@@ -39,9 +39,9 @@ When throttling is detected, serverless SQL pool has built-in handling to resolv
If possible, you can prepare files for better performance: -- Convert CSV and JSON to Parquet. Parquet is a columnar format. Because it's compressed, its file sizes are smaller than CSV or JSON files that contain the same data. Serverless SQL pool will need less time and fewer storage requests to read it.
+- Convert large CSV and JSON files to Parquet. Parquet is a columnar format. Because it's compressed, its file sizes are smaller than CSV or JSON files that contain the same data. When reading Parquet files, serverless SQL pool can skip the columns and rows that aren't needed in the query, so it needs less time and fewer storage requests to read the data.
- If a query targets a single large file, you'll benefit from splitting it into multiple smaller files.-- Try to keep your CSV file size below 10 GB.
+- Try to keep your CSV file size between 100 MB and 10 GB.
- It's better to have equally sized files for a single OPENROWSET path or an external table LOCATION. - Partition your data by storing partitions to different folders or file names. See [Use filename and filepath functions to target specific partitions](#use-filename-and-filepath-functions-to-target-specific-partitions).
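The splitting advice above (one large file into multiple equally sized smaller files) can be sketched with a standard-library helper; file paths and chunk sizes here are illustrative:

```python
# Sketch: split a large CSV into equally sized smaller files, repeating the
# header in each part, as recommended for serverless SQL pool above.
import csv

def split_csv(src_path, rows_per_file, dest_prefix):
    """Split src_path into parts of rows_per_file data rows each."""
    out_paths = []
    with open(src_path, newline="") as src:
        reader = csv.reader(src)
        header = next(reader)
        chunk = []

        def flush():
            part_path = f"{dest_prefix}_{len(out_paths):03d}.csv"
            with open(part_path, "w", newline="") as out:
                writer = csv.writer(out)
                writer.writerow(header)   # every part keeps the header row
                writer.writerows(chunk)
            out_paths.append(part_path)

        for row in reader:
            chunk.append(row)
            if len(chunk) == rows_per_file:
                flush()
                chunk = []
        if chunk:                         # final partial chunk
            flush()
    return out_paths
```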
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/create-external-table-as-select https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/create-external-table-as-select.md
@@ -69,6 +69,9 @@ FROM
```
+> [!NOTE]
+> You must modify this script and change the target location to execute it again. An external table cannot be created in a location that already contains data.
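One way to honor this constraint when re-running the script is to generate a fresh target folder for every run. The base URL below is a placeholder, and the helper is only an illustrative convention:

```python
# Sketch: produce a unique CETAS target location per run, since an external
# table cannot be created over a folder that already holds data.
from datetime import datetime, timezone

def cetas_location(base_url):
    # base_url is a placeholder, e.g. a Data Lake container folder URL.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    return f"{base_url.rstrip('/')}/run={stamp}/"
```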
+ ## Use the external table You can use the external table created through CETAS like a regular external table.
@@ -89,6 +92,14 @@ ORDER BY
[population] DESC; ```
+## Remarks
+
+Once you store your results, the data in the external table cannot be modified. You cannot rerun this script because CETAS will not overwrite the underlying data created by the previous execution. If your scenario requires any of the following capabilities, vote for these feedback items, or propose new ones on the Azure feedback site:
+- [Enable inserting new data into external table](https://feedback.azure.com/forums/307516-azure-synapse-analytics/suggestions/32981347-polybase-allow-insert-new-data-to-existing-exteran)
+- [Enable deleting data from external table](https://feedback.azure.com/forums/307516-azure-synapse-analytics/suggestions/15158034-polybase-delete-from-external-tables)
+- [Specify partitions in CETAS](https://feedback.azure.com/forums/307516-azure-synapse-analytics/suggestions/19520860-polybase-partitioned-by-functionality-when-creati)
+- [Specify file sizes and counts](https://feedback.azure.com/forums/307516-azure-synapse-analytics/suggestions/42263617-cetas-specify-number-of-parquet-files-file-size)
+ ## Next steps For more information on how to query different file types, see the [Query single CSV file](query-single-csv-file.md), [Query Parquet files](query-parquet-files.md), and [Query JSON files](query-json-files.md) articles.
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/quickstart-create-traffic-manager-profile-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/quickstart-create-traffic-manager-profile-cli.md
@@ -28,7 +28,7 @@ In this quickstart, you'll create two instances of a web application. Each of th
- This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Create a resource group
-Create a resource group with [az group create](https://docs.microsoft.com/cli/azure/group). An Azure resource group is a logical container into which Azure resources are deployed and managed.
+Create a resource group with [az group create](/cli/azure/group). An Azure resource group is a logical container into which Azure resources are deployed and managed.
The following example creates a resource group named *myResourceGroup* in the *eastus* location:
@@ -42,7 +42,7 @@ The following example creates a resource group named *myResourceGroup* in the *e
## Create a Traffic Manager profile
-Create a Traffic Manager profile using [az network traffic-manager profile create](https://docs.microsoft.com/cli/azure/network/traffic-manager/profile?view=azure-cli-latest#az-network-traffic-manager-profile-create) that directs user traffic based on endpoint priority.
+Create a Traffic Manager profile using [az network traffic-manager profile create](/cli/azure/network/traffic-manager/profile?view=azure-cli-latest#az-network-traffic-manager-profile-create) that directs user traffic based on endpoint priority.
In the following example, replace **<profile_name>** with a unique Traffic Manager profile name.
@@ -65,7 +65,7 @@ az network traffic-manager profile create \
For this quickstart, you'll need two instances of a web application deployed in two different Azure regions (*East US* and *West Europe*). Each will serve as primary and failover endpoints for Traffic Manager. ### Create web app service plans
-Create web app service plans using [az appservice plan create](https://docs.microsoft.com/cli/azure/appservice/plan?view=azure-cli-latest#az-appservice-plan-create) for the two instances of the web application that you will deploy in two different Azure regions.
+Create web app service plans using [az appservice plan create](/cli/azure/appservice/plan?view=azure-cli-latest#az-appservice-plan-create) for the two instances of the web application that you will deploy in two different Azure regions.
In the following example, replace **<appspname_eastus>** and **<appspname_westeurope>** with a unique App Service Plan Name
@@ -86,7 +86,7 @@ az appservice plan create \
``` ### Create a web app in the app service plan
-Create two instances the web application using [az webapp create](https://docs.microsoft.com/cli/azure/webapp?view=azure-cli-latest#az-webapp-create) in the App Service plans in the *East US* and *West Europe* Azure regions.
+Create two instances of the web application using [az webapp create](/cli/azure/webapp?view=azure-cli-latest#az-webapp-create) in the App Service plans in the *East US* and *West Europe* Azure regions.
In the following example, replace **<app1name_eastus>** and **<app2name_westeurope>** with a unique App Name, and replace **<appspname_eastus>** and **<appspname_westeurope>** with the name used to create the App Service plans in the previous section.
@@ -105,7 +105,7 @@ az webapp create \
``` ## Add Traffic Manager endpoints
-Add the two Web Apps as Traffic Manager endpoints using [az network traffic-manager endpoint create](https://docs.microsoft.com/cli/azure/network/traffic-manager/endpoint?view=azure-cli-latest#az-network-traffic-manager-endpoint-create) to the Traffic Manager profile as follows:
+Add the two Web Apps as Traffic Manager endpoints using [az network traffic-manager endpoint create](/cli/azure/network/traffic-manager/endpoint?view=azure-cli-latest#az-network-traffic-manager-endpoint-create) to the Traffic Manager profile as follows:
- Determine the Web App ID and add the Web App located in the *East US* Azure region as the primary endpoint to route all the user traffic. - Determine the Web App ID and add the Web App located in the *West Europe* Azure region as the failover endpoint.
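The priority routing behavior described above can be sketched as a simple selection rule: all traffic goes to the enabled, healthy endpoint with the lowest priority number, and fails over to the next one. This models only the routing decision, not the Traffic Manager service itself:

```python
# Sketch: priority-based endpoint selection, as in the quickstart above.
# The endpoint dictionaries are an illustrative data shape, not a real API.
def select_endpoint(endpoints):
    candidates = [e for e in endpoints if e["enabled"] and e["healthy"]]
    if not candidates:
        return None  # no endpoint can serve traffic
    # Lowest priority number wins; others act as failover targets.
    return min(candidates, key=lambda e: e["priority"])["name"]
```

Disabling the primary endpoint (as the quickstart does with `az network traffic-manager endpoint update`) makes the selection fall through to the failover endpoint.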
@@ -173,7 +173,7 @@ In the following example, replace **<app1name_eastus>** and **<app2name_westeuro
### Determine the DNS name
-Determine the DNS name of the Traffic Manager profile using [az network traffic-manager profile show](https://docs.microsoft.com/cli/azure/network/traffic-manager/profile?view=azure-cli-latest#az-network-traffic-manager-profile-show).
+Determine the DNS name of the Traffic Manager profile using [az network traffic-manager profile show](/cli/azure/network/traffic-manager/profile?view=azure-cli-latest#az-network-traffic-manager-profile-show).
```azurecli-interactive
@@ -191,7 +191,7 @@ Copy the **RelativeDnsName** value. The DNS name of your Traffic Manager profile
> [!NOTE] > In this quickstart scenario, all requests route to the primary endpoint. It is set to **Priority 1**.
-2. To view Traffic Manager failover in action, disable your primary site using [az network traffic-manager endpoint update](https://docs.microsoft.com/cli/azure/network/traffic-manager/endpoint?view=azure-cli-latest#az-network-traffic-manager-endpoint-update).
+2. To view Traffic Manager failover in action, disable your primary site using [az network traffic-manager endpoint update](/cli/azure/network/traffic-manager/endpoint?view=azure-cli-latest#az-network-traffic-manager-endpoint-update).
```azurecli-interactive
@@ -209,7 +209,7 @@ Copy the **RelativeDnsName** value. The DNS name of your Traffic Manager profile
## Clean up resources
-When you're done, delete the resource groups, web applications, and all related resources using [az group delete](https://docs.microsoft.com/cli/azure/group?view=azure-cli-latest#az-group-delete).
+When you're done, delete the resource groups, web applications, and all related resources using [az group delete](/cli/azure/group?view=azure-cli-latest#az-group-delete).
```azurecli-interactive
@@ -223,4 +223,4 @@ az group delete \
In this quickstart, you created a Traffic Manager profile that provides high availability for your web application. To learn more about routing traffic, continue to the Traffic Manager tutorials. > [!div class="nextstepaction"]
-> [Traffic Manager tutorials](tutorial-traffic-manager-improve-website-response.md)
+> [Traffic Manager tutorials](tutorial-traffic-manager-improve-website-response.md)
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/scripts/traffic-manager-cli-websites-high-availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/scripts/traffic-manager-cli-websites-high-availability.md
@@ -46,14 +46,14 @@ This script uses the following commands to create a resource group, web app, tra
| Command | Notes | |---|---|
-| [az group create](https://docs.microsoft.com/cli/azure/group) | Creates a resource group in which all resources are stored. |
-| [az appservice plan create](https://docs.microsoft.com/cli/azure/appservice/plan) | Creates an App Service plan. This is like a server farm for your Azure web app. |
-| [az webapp web create](https://docs.microsoft.com/cli/azure/webapp#az-webapp-create) | Creates an Azure web app within the App Service plan. |
-| [az network traffic-manager profile create](https://docs.microsoft.com/cli/azure/network/traffic-manager/profile) | Creates an Azure Traffic Manager profile. |
-| [az network traffic-manager endpoint create](https://docs.microsoft.com/cli/azure/network/traffic-manager/endpoint) | Adds an endpoint to an Azure Traffic Manager Profile. |
+| [az group create](/cli/azure/group) | Creates a resource group in which all resources are stored. |
+| [az appservice plan create](/cli/azure/appservice/plan) | Creates an App Service plan. This is like a server farm for your Azure web app. |
+| [az webapp web create](/cli/azure/webapp#az-webapp-create) | Creates an Azure web app within the App Service plan. |
+| [az network traffic-manager profile create](/cli/azure/network/traffic-manager/profile) | Creates an Azure Traffic Manager profile. |
+| [az network traffic-manager endpoint create](/cli/azure/network/traffic-manager/endpoint) | Adds an endpoint to an Azure Traffic Manager Profile. |
## Next steps
-For more information on the Azure CLI, see [Azure CLI documentation](https://docs.microsoft.com/cli/azure).
+For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-Additional App Service CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
+Additional App Service CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/scripts/traffic-manager-powershell-websites-high-availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/scripts/traffic-manager-powershell-websites-high-availability.md
@@ -56,6 +56,6 @@ This script uses the following commands to create a resource group, web app, tra
## Next steps
-For more information on the Azure PowerShell, see [Azure PowerShell documentation](https://docs.microsoft.com/powershell/azure/).
+For more information on the Azure PowerShell, see [Azure PowerShell documentation](/powershell/azure/).
-Additional networking PowerShell script samples can be found in the [Azure Networking Overview documentation](../powershell-samples.md?toc=%2fazure%2fnetworking%2ftoc.json).
+Additional networking PowerShell script samples can be found in the [Azure Networking Overview documentation](../powershell-samples.md?toc=%2fazure%2fnetworking%2ftoc.json).
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/security-baseline.md
@@ -74,7 +74,7 @@ Alternatively, you can enable and on-board data to Azure Sentinel or a third-par
- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md) -- [Getting started with Log Analytics queries](../azure-monitor/log-query/get-started-portal.md)
+- [Getting started with Log Analytics queries](../azure-monitor/log-query/log-analytics-tutorial.md)
- [How to perform custom queries in Azure Monitor](../azure-monitor/log-query/get-started-queries.md)
@@ -114,9 +114,9 @@ In Resource Manager, endpoints from any subscription can be added to Traffic Man
- [Add or remove Azure role assignments using the Azure portal](../role-based-access-control/role-assignments-portal.md) -- [How to get a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrole?view=azureadps-2.0)
+- [How to get a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole?view=azureadps-2.0)
-- [How to get members of a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrolemember?view=azureadps-2.0)
+- [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember?view=azureadps-2.0)
**Azure Security Center monitoring**: Yes
@@ -134,7 +134,7 @@ Additionally, to help you keep track of dedicated administrative accounts, you c
You can also enable a Just-In-Time access by using Azure AD Privileged Identity Management and Azure Resource Manager. -- [Learn more about Privileged Identity Management](/azure/active-directory/privileged-identity-management/)
+- [Learn more about Privileged Identity Management](../active-directory/privileged-identity-management/index.yml)
- [How to use Azure Policy](../governance/policy/tutorials/create-and-manage.md)
@@ -168,7 +168,7 @@ You can also enable a Just-In-Time access by using Azure AD Privileged Identity
**Guidance**: Use a secure, Azure-managed workstation (also known as a Privileged Access Workstation, or PAW) for administrative tasks that require elevated privileges. -- [Understand secure, Azure-managed workstations](../active-directory/devices/concept-azure-managed-workstation.md)
+- [Understand secure, Azure-managed workstations](/security/compass/concept-azure-managed-workstation)
- [How to enable MFA in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
@@ -182,9 +182,9 @@ You can also enable a Just-In-Time access by using Azure AD Privileged Identity
In addition, use Azure AD risk detection to view alerts and reports on risky user behavior. -- [How to deploy Privileged Identity Management](/azure/active-directory/privileged-identity-management/pim-deployment-plan)
+- [How to deploy Privileged Identity Management](../active-directory/privileged-identity-management/pim-deployment-plan.md)
-- [Understand Azure AD risk detection](/azure/active-directory/reports-monitoring/concept-risk-events)
+- [Understand Azure AD risk detection](../active-directory/identity-protection/overview-identity-protection.md)
**Azure Security Center monitoring**: Yes
@@ -214,7 +214,7 @@ In addition, use Azure AD risk detection to view alerts and reports on risky use
**Guidance**: Azure AD provides logs to help discover stale accounts. In addition, use Azure AD identity and access reviews to efficiently manage group memberships, access to enterprise applications, and role assignments. User access can be reviewed on a regular basis to make sure only the right users have continued access. -- [Understand Azure AD reporting](/azure/active-directory/reports-monitoring/)
+- [Understand Azure AD reporting](../active-directory/reports-monitoring/index.yml)
- [How to use Azure AD identity and access reviews](../active-directory/governance/access-reviews-overview.md)
@@ -228,7 +228,7 @@ In addition, use Azure AD risk detection to view alerts and reports on risky use
You can streamline this process by creating diagnostic settings for Azure AD user accounts and sending the audit logs and sign-in logs to a Log Analytics workspace. You can configure desired alerts within Log Analytics workspace. -- [How to integrate Azure activity logs with Azure Monitor](/azure/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics)
+- [How to integrate Azure activity logs with Azure Monitor](../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
**Azure Security Center monitoring**: Not applicable
@@ -238,7 +238,7 @@ You can streamline this process by creating diagnostic settings for Azure AD use
**Guidance**: Use Azure AD Identity Protection features to configure automated responses to detected suspicious actions related to user identities. You can also ingest data into Azure Sentinel for further investigation. -- [How to view Azure AD risky sign-ins](/azure/active-directory/reports-monitoring/concept-risky-sign-ins)
+- [How to view Azure AD risky sign-ins](../active-directory/identity-protection/overview-identity-protection.md)
- [How to configure and enable Identity Protection risk policies](../active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md)
@@ -256,7 +256,7 @@ You can streamline this process by creating diagnostic settings for Azure AD use
**Guidance**: Use tags to assist in tracking Azure resources that store or process sensitive information.

-- [How to create and use tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
**Azure Security Center monitoring**: Not applicable
@@ -274,9 +274,9 @@ Azure Traffic Manager has a predefined Azure role called "Traffic Manager Contri
- [Traffic Manager Contributor role](../role-based-access-control/built-in-roles.md#traffic-manager-contributor)

-- [How to get a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrole?view=azureadps-2.0)
+- [How to get a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole?view=azureadps-2.0)
-- [How to get members of a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrolemember?view=azureadps-2.0)
+- [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember?view=azureadps-2.0)
**Azure Security Center monitoring**: Not applicable
@@ -304,7 +304,7 @@ Although classic Azure resources may be discovered via Azure Resource Graph Expl
- [How to create queries with Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md)

-- [How to view your Azure subscriptions](https://docs.microsoft.com/powershell/module/az.accounts/get-azsubscription?view=azps-3.0.0)
+- [How to view your Azure subscriptions](/powershell/module/az.accounts/get-azsubscription?view=azps-3.0.0)
- [Understand Azure RBAC](../role-based-access-control/overview.md)
@@ -316,7 +316,7 @@ Although classic Azure resources may be discovered via Azure Resource Graph Expl
**Guidance**: Use Policy Name, Description, and Category to logically organize assets according to a taxonomy.

-- [For more information about tagging assets, see Resource naming and tagging decision guide](https://docs.microsoft.com/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json)
+- [For more information about tagging assets, see Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=%2fazure%2fazure-resource-manager%2fmanagement%2ftoc.json)
**Azure Security Center monitoring**: Not applicable
@@ -334,11 +334,11 @@ In addition, use Azure Policy to put restrictions on the type of resources that
You can also create custom Azure Policy definitions to restrict more granular resource settings.

-- [How to create additional Azure subscriptions](/azure/billing/billing-create-subscription)
+- [How to create additional Azure subscriptions](../cost-management-billing/manage/create-subscription.md)
-- [How to create management groups](/azure/governance/management-groups/create)
+- [How to create management groups](../governance/management-groups/create-management-group-portal.md)
-- [How to create and use tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
**Azure Security Center monitoring**: Not applicable
@@ -377,7 +377,7 @@ You can also create custom Azure Policy definitions to restrict more granular re
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)

-- [How to deny a specific resource type with Azure Policy](/azure/governance/policy/samples/not-allowed-resource-types)
+- [How to deny a specific resource type with Azure Policy](../governance/policy/samples/index.md)
**Azure Security Center monitoring**: Yes
@@ -401,7 +401,7 @@ You can also create custom Azure Policy definitions to restrict more granular re
**Guidance**: Define and implement standard security configurations for Azure Traffic Manager with Azure Policy. Use Azure Policy aliases in the "Microsoft.Network" namespace to create custom policies to audit or enforce the configuration of your Recovery Services vaults.

-- [How to view available Azure Policy aliases](https://docs.microsoft.com/powershell/module/az.resources/get-azpolicyalias?view=azps-3.3.0)
+- [How to view available Azure Policy aliases](/powershell/module/az.resources/get-azpolicyalias?view=azps-3.3.0)
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
@@ -425,9 +425,9 @@ You can also create custom Azure Policy definitions to restrict more granular re
**Guidance**: If you are using custom Azure Policy definitions, use Azure DevOps or Azure Repos to securely store and manage your code.

-- [How to store code in Azure DevOps](https://docs.microsoft.com/azure/devops/repos/git/gitworkflow?view=azure-devops)
+- [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow?view=azure-devops)
-- [Azure Repos Documentation](https://docs.microsoft.com/azure/devops/repos/index?view=azure-devops)
+- [Azure Repos Documentation](/azure/devops/repos/index?view=azure-devops)
**Azure Security Center monitoring**: Not applicable
@@ -493,7 +493,7 @@ Additionally, mark subscriptions using tags and create a naming system to identi
- [Security alerts in Azure Security Center](../security-center/security-center-alerts-overview.md)

-- [Use tags to organize your Azure resources](/azure/azure-resource-manager/resource-group-using-tags)
+- [Use tags to organize your Azure resources](../azure-resource-manager/management/tag-resources.md)
**Azure Security Center monitoring**: Yes
@@ -559,5 +559,5 @@ Additionally, mark subscriptions using tags and create a naming system to identi
## Next steps

-- See the [Azure security benchmark](/azure/security/benchmarks/overview)
-- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
+- See the [Azure security benchmark](../security/benchmarks/overview.md)
+- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-FAQs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-FAQs.md
@@ -92,7 +92,7 @@ The key difference between these two popular routing methods is that in Performa
### What are the regions that are supported by Traffic Manager for geographic routing?
-The country/region hierarchy that is used by Traffic Manager can be found [here](traffic-manager-geographic-regions.md). While this page is kept up-to-date with any changes, you can also programmatically retrieve the same information by using the [Azure Traffic Manager REST API](https://docs.microsoft.com/rest/api/trafficmanager/).
+The country/region hierarchy that is used by Traffic Manager can be found [here](traffic-manager-geographic-regions.md). While this page is kept up-to-date with any changes, you can also programmatically retrieve the same information by using the [Azure Traffic Manager REST API](/rest/api/trafficmanager/).
### How does traffic manager determine where a user is querying from?
@@ -113,11 +113,11 @@ No, the location of the endpoint imposes no restrictions on which regions can be
### Can I assign geographic regions to endpoints in a profile that is not configured to do geographic routing?
-Yes, if the routing method of a profile is not geographic, you can use the [Azure Traffic Manager REST API](https://docs.microsoft.com/rest/api/trafficmanager/) to assign geographic regions to endpoints in that profile. In the case of non-geographic routing type profiles, this configuration is ignored. If you change such a profile to geographic routing type at a later time, Traffic Manager can use those mappings.
+Yes, if the routing method of a profile is not geographic, you can use the [Azure Traffic Manager REST API](/rest/api/trafficmanager/) to assign geographic regions to endpoints in that profile. In the case of non-geographic routing type profiles, this configuration is ignored. If you change such a profile to geographic routing type at a later time, Traffic Manager can use those mappings.
### Why am I getting an error when I try to change the routing method of an existing profile to Geographic?
-All the endpoints under a profile with geographic routing need to have at least one region mapped to it. To convert an existing profile to geographic routing type, you first need to associate geographic regions to all its endpoints using the [Azure Traffic Manager REST API](https://docs.microsoft.com/rest/api/trafficmanager/) before changing the routing type to geographic. If using portal, first delete the endpoints, change the routing method of the profile to geographic and then add the endpoints along with their geographic region mapping.
+All the endpoints under a profile with geographic routing need to have at least one region mapped to it. To convert an existing profile to geographic routing type, you first need to associate geographic regions to all its endpoints using the [Azure Traffic Manager REST API](/rest/api/trafficmanager/) before changing the routing type to geographic. If using portal, first delete the endpoints, change the routing method of the profile to geographic and then add the endpoints along with their geographic region mapping.
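The conversion rule quoted above — every endpoint must have at least one geographic region mapped before the profile can switch to the Geographic routing method — can be sketched as a simple precondition check. This is a hypothetical Python model, not the Traffic Manager API; the endpoint names and `GEO-*` region codes are illustrative assumptions.

```python
# Hypothetical sketch (not the Azure Traffic Manager API): models the rule that
# a profile can be converted to Geographic routing only when every endpoint
# already has at least one geographic region mapped to it.
def can_convert_to_geographic(endpoints):
    """endpoints: dict mapping endpoint name -> list of region codes."""
    return all(len(regions) >= 1 for regions in endpoints.values())

profile = {"eu-endpoint": ["GEO-EU"], "apac-endpoint": []}
print(can_convert_to_geographic(profile))  # False: one endpoint has no region mapped
profile["apac-endpoint"].append("GEO-AP")
print(can_convert_to_geographic(profile))  # True: every endpoint is mapped
```

This mirrors why the portal workflow requires deleting endpoints first: the check fails the moment any endpoint lacks a region mapping.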
### Why is it strongly recommended that customers create nested profiles instead of endpoints under a profile with geographic routing enabled?
@@ -303,7 +303,7 @@ Traffic View pricing is based on the number of data points used to create the ou
Using endpoints from multiple subscriptions is not possible with Azure Web Apps. Azure Web Apps requires that any custom domain name used with Web Apps is only used within a single subscription. It is not possible to use Web Apps from multiple subscriptions with the same domain name.
-For other endpoint types, it is possible to use Traffic Manager with endpoints from more than one subscription. In Resource Manager, endpoints from any subscription can be added to Traffic Manager, as long as the person configuring the Traffic Manager profile has read access to the endpoint. These permissions can be granted using [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). Endpoints from other subscriptions can be added using [Azure PowerShell](https://docs.microsoft.com/powershell/module/az.trafficmanager/new-aztrafficmanagerendpoint) or the [Azure CLI](https://docs.microsoft.com/cli/azure/network/traffic-manager/endpoint?view=azure-cli-latest#az-network-traffic-manager-endpoint-create).
+For other endpoint types, it is possible to use Traffic Manager with endpoints from more than one subscription. In Resource Manager, endpoints from any subscription can be added to Traffic Manager, as long as the person configuring the Traffic Manager profile has read access to the endpoint. These permissions can be granted using [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). Endpoints from other subscriptions can be added using [Azure PowerShell](/powershell/module/az.trafficmanager/new-aztrafficmanagerendpoint) or the [Azure CLI](/cli/azure/network/traffic-manager/endpoint?view=azure-cli-latest#az-network-traffic-manager-endpoint-create).
### Can I use Traffic Manager with Cloud Service 'Staging' slots?
@@ -342,9 +342,9 @@ Azure Resource Manager requires all resource groups to specify a location, which
### How do I determine the current health of each endpoint?
-The current monitoring status of each endpoint, in addition to the overall profile, is displayed in the Azure portal. This information also is available via the Traffic Monitor [REST API](https://msdn.microsoft.com/library/azure/mt163667.aspx), [PowerShell cmdlets](https://docs.microsoft.com/powershell/module/az.trafficmanager), and [cross-platform Azure CLI](../cli-install-nodejs.md).
+The current monitoring status of each endpoint, in addition to the overall profile, is displayed in the Azure portal. This information also is available via the Traffic Monitor [REST API](/rest/api/trafficmanager/), [PowerShell cmdlets](/powershell/module/az.trafficmanager), and [cross-platform Azure CLI](/cli/azure/install-classic-cli).
-You can also use Azure Monitor to track the health of your endpoints and see a visual representation of them. For more about using Azure Monitor, see the [Azure Monitoring documentation](https://docs.microsoft.com/azure/monitoring-and-diagnostics/monitoring-overview-metrics).
+You can also use Azure Monitor to track the health of your endpoints and see a visual representation of them. For more about using Azure Monitor, see the [Azure Monitoring documentation](../azure-monitor/platform/data-platform.md).
### Can I monitor HTTPS endpoints?
@@ -455,7 +455,7 @@ The number of Traffic Manager health checks reaching your endpoint depends on th
### How can I get notified if one of my endpoints goes down?
-One of the metrics provided by Traffic Manager is the health status of endpoints in a profile. You can see this as an aggregate of all endpoints inside a profile (for example, 75% of your endpoints are healthy), or, at a per endpoint level. Traffic Manager metrics are exposed through Azure Monitor and you can use its [alerting capabilities](../monitoring-and-diagnostics/monitor-alerts-unified-usage.md) to get notifications when there is a change in the health status of your endpoint. For more details, see [Traffic Manager metrics and alerts](traffic-manager-metrics-alerts.md).
+One of the metrics provided by Traffic Manager is the health status of endpoints in a profile. You can see this as an aggregate of all endpoints inside a profile (for example, 75% of your endpoints are healthy), or, at a per endpoint level. Traffic Manager metrics are exposed through Azure Monitor and you can use its [alerting capabilities](../azure-monitor/platform/alerts-metric.md) to get notifications when there is a change in the health status of your endpoint. For more details, see [Traffic Manager metrics and alerts](traffic-manager-metrics-alerts.md).
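The aggregate health metric described above ("75% of your endpoints are healthy") can be sketched as follows. This is a hypothetical Python illustration of the arithmetic, not an Azure Monitor API; the endpoint names are invented for the example.

```python
# Hypothetical sketch (not an Azure Monitor API): computes the aggregate
# endpoint-health percentage for a profile from per-endpoint probe results.
def healthy_percentage(endpoint_status):
    """endpoint_status: dict mapping endpoint name -> True if its probe passes."""
    if not endpoint_status:
        return 0.0
    healthy = sum(1 for ok in endpoint_status.values() if ok)
    return 100.0 * healthy / len(endpoint_status)

status = {"west-eu": True, "east-us": True, "south-asia": True, "brazil-south": False}
print(healthy_percentage(status))  # 75.0
```

An alert rule on this metric would then fire whenever the value drops below a chosen threshold, which is what the Azure Monitor alerting capabilities referenced above provide.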
## Traffic Manager nested profiles
@@ -505,4 +505,4 @@ The following table describes the behavior of Traffic Manager health checks for
## Next steps:
- Learn more about Traffic Manager [endpoint monitoring and automatic failover](../traffic-manager/traffic-manager-monitoring.md).
-- Learn more about Traffic Manager [traffic routing methods](../traffic-manager/traffic-manager-routing-methods.md).
+- Learn more about Traffic Manager [traffic routing methods](../traffic-manager/traffic-manager-routing-methods.md).
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-create-rum-visual-studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-create-rum-visual-studio.md
@@ -49,7 +49,7 @@ To obtain the RUM Key using Azure portal using the following procedure:
## Step 2: Instrument your app with the RUM package of Mobile Center SDK

If you're new to Visual Studio Mobile Center, visit its [website](https://mobile.azure.com). For detailed instructions on SDK integration, see
-[Getting Started with the Android SDK](https://docs.microsoft.com/mobile-center/sdk/getting-started/Android).
+[Getting Started with the Android SDK](/mobile-center/sdk/getting-started/Android).
To use Real User Measurements, complete the following procedure:
@@ -95,8 +95,7 @@ To use Real User Measurements, complete the following procedure:
## Next steps
- Learn more about [Real User Measurements](traffic-manager-rum-overview.md)
- Learn [how Traffic Manager works](traffic-manager-overview.md)
-- Learn more about [Mobile Center](https://docs.microsoft.com/mobile-center/)
+- Learn more about [Mobile Center](/mobile-center/)
- [Sign up](https://mobile.azure.com) for Mobile Center
- Learn more about the [traffic-routing methods](traffic-manager-routing-methods.md) supported by Traffic Manager
-- Learn how to [create a Traffic Manager profile](traffic-manager-create-profile.md)
-
+- Learn how to [create a Traffic Manager profile](./quickstart-create-traffic-manager-profile.md)
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-create-rum-web-pages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-create-rum-web-pages.md
@@ -63,5 +63,4 @@ After you have obtained the RUM key, the next step is to embed this copied JavaS
- Learn more about [Real User Measurements](traffic-manager-rum-overview.md)
- Learn [how Traffic Manager works](traffic-manager-overview.md)
- Learn more about the [traffic-routing methods](traffic-manager-routing-methods.md) supported by Traffic Manager
-- Learn how to [create a Traffic Manager profile](traffic-manager-create-profile.md)
-
+- Learn how to [create a Traffic Manager profile](./quickstart-create-traffic-manager-profile.md)
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-diagnostic-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-diagnostic-logs.md
@@ -36,14 +36,14 @@ If you run PowerShell from your computer, you need the Azure PowerShell module,
2. **Enable resource logging for the Traffic Manager profile:**
- Enable resource logging for the Traffic Manager profile using the ID obtained in the previous step with [Set-AzDiagnosticSetting](https://docs.microsoft.com/powershell/module/az.monitor/set-azdiagnosticsetting?view=latest). The following command stores verbose logs for the Traffic Manager profile to a specified Azure Storage account.
+ Enable resource logging for the Traffic Manager profile using the ID obtained in the previous step with [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting?view=latest). The following command stores verbose logs for the Traffic Manager profile to a specified Azure Storage account.
```azurepowershell-interactive
Set-AzDiagnosticSetting -ResourceId <TrafficManagerprofileResourceId> -StorageAccountId <storageAccountId> -Enabled $true
```

3. **Verify diagnostic settings:**
- Verify diagnostic settings for the Traffic Manager profile using [Get-AzDiagnosticSetting](https://docs.microsoft.com/powershell/module/az.monitor/get-azdiagnosticsetting?view=latest). The following command displays the categories that are logged for a resource.
+ Verify diagnostic settings for the Traffic Manager profile using [Get-AzDiagnosticSetting](/powershell/module/az.monitor/get-azdiagnosticsetting?view=latest). The following command displays the categories that are logged for a resource.
```azurepowershell-interactive
Get-AzDiagnosticSetting -ResourceId <TrafficManagerprofileResourceId>
```
@@ -62,7 +62,7 @@ If you run PowerShell from your computer, you need the Azure PowerShell module,
## Traffic Manager log schema

All resource logs available through Azure Monitor share a common top-level schema, with flexibility for each service to emit unique properties for their own events.
-For top-level resource logs schema, see [Supported services, schemas, and categories for Azure Resource Logs](../azure-monitor/platform/tutorial-dashboards.md).
+For top-level resource logs schema, see [Supported services, schemas, and categories for Azure Resource Logs](../azure-monitor/platform/resource-logs-schema.md).
The following table includes logs schema specific to the Azure Traffic Manager profile resource.
@@ -74,5 +74,4 @@ The following table includes logs schema specific to the Azure Traffic Manager p
## Next steps
-* Learn more about [Traffic Manager Monitoring](traffic-manager-monitoring.md)
-
+* Learn more about [Traffic Manager Monitoring](traffic-manager-monitoring.md)
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-endpoint-types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-endpoint-types.md
@@ -88,18 +88,18 @@ If all endpoints in a profile are disabled, or if the profile itself is disabled
## FAQs
-* [Can I use Traffic Manager with endpoints from multiple subscriptions?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#can-i-use-traffic-manager-with-endpoints-from-multiple-subscriptions)
+* [Can I use Traffic Manager with endpoints from multiple subscriptions?](./traffic-manager-faqs.md#can-i-use-traffic-manager-with-endpoints-from-multiple-subscriptions)
-* [Can I use Traffic Manager with Cloud Service 'Staging' slots?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#can-i-use-traffic-manager-with-cloud-service-staging-slots)
+* [Can I use Traffic Manager with Cloud Service 'Staging' slots?](./traffic-manager-faqs.md#can-i-use-traffic-manager-with-cloud-service-staging-slots)
-* [Does Traffic Manager support IPv6 endpoints?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#does-traffic-manager-support-ipv6-endpoints)
+* [Does Traffic Manager support IPv6 endpoints?](./traffic-manager-faqs.md#does-traffic-manager-support-ipv6-endpoints)
-* [Can I use Traffic Manager with more than one Web App in the same region?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#can-i-use-traffic-manager-with-more-than-one-web-app-in-the-same-region)
+* [Can I use Traffic Manager with more than one Web App in the same region?](./traffic-manager-faqs.md#can-i-use-traffic-manager-with-more-than-one-web-app-in-the-same-region)
-* [How do I move my Traffic Manager profile's Azure endpoints to a different resource group?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-do-i-move-my-traffic-manager-profiles-azure-endpoints-to-a-different-resource-group-or-subscription)
+* [How do I move my Traffic Manager profile's Azure endpoints to a different resource group?](./traffic-manager-faqs.md#how-do-i-move-my-traffic-manager-profiles-azure-endpoints-to-a-different-resource-group-or-subscription)
## Next steps
* Learn [how Traffic Manager works](traffic-manager-how-it-works.md).
* Learn about Traffic Manager [endpoint monitoring and automatic failover](traffic-manager-monitoring.md).
-* Learn about Traffic Manager [traffic routing methods](traffic-manager-routing-methods.md).
+* Learn about Traffic Manager [traffic routing methods](traffic-manager-routing-methods.md).
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-geographic-regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-geographic-regions.md
@@ -16,7 +16,7 @@ ms.author: duau
# Country/Region hierarchy used by Azure Traffic Manager for geographic traffic routing method
-This article lists the countries and regions used by the **Geographic** traffic routing method in Azure Traffic Manager. You can also obtain this information programmatically by calling the [Azure Traffic Manager’s REST API](https://docs.microsoft.com/rest/api/trafficmanager/). 
+This article lists the countries and regions used by the **Geographic** traffic routing method in Azure Traffic Manager. You can also obtain this information programmatically by calling the [Azure Traffic Manager’s REST API](/rest/api/trafficmanager/). 
- WORLD(World)
@@ -685,4 +685,4 @@ This article lists the countries and regions used by the **Geographic** traffic
## Next steps

-- Learn more about [Geographic traffic routing method in Azure Traffic Manager](traffic-manager-routing-methods.md#geographic).
+- Learn more about [Geographic traffic routing method in Azure Traffic Manager](traffic-manager-routing-methods.md#geographic).
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-how-it-works https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-how-it-works.md
@@ -66,27 +66,27 @@ The recursive DNS service caches the DNS responses it receives. The DNS resolver
## FAQs
-* [What IP address does Traffic Manager use?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#what-ip-address-does-traffic-manager-use)
+* [What IP address does Traffic Manager use?](./traffic-manager-faqs.md#what-ip-address-does-traffic-manager-use)
-* [What types of traffic can be routed using Traffic Manager?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#what-types-of-traffic-can-be-routed-using-traffic-manager)
+* [What types of traffic can be routed using Traffic Manager?](./traffic-manager-faqs.md#what-types-of-traffic-can-be-routed-using-traffic-manager)
-* [Does Traffic Manager support "sticky" sessions?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#does-traffic-manager-support-sticky-sessions)
+* [Does Traffic Manager support "sticky" sessions?](./traffic-manager-faqs.md#does-traffic-manager-support-sticky-sessions)
-* [Why am I seeing an HTTP error when using Traffic Manager?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#why-am-i-seeing-an-http-error-when-using-traffic-manager)
+* [Why am I seeing an HTTP error when using Traffic Manager?](./traffic-manager-faqs.md#why-am-i-seeing-an-http-error-when-using-traffic-manager)
-* [What is the performance impact of using Traffic Manager?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#what-is-the-performance-impact-of-using-traffic-manager)
+* [What is the performance impact of using Traffic Manager?](./traffic-manager-faqs.md#what-is-the-performance-impact-of-using-traffic-manager)
-* [What application protocols can I use with Traffic Manager?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#what-application-protocols-can-i-use-with-traffic-manager)
+* [What application protocols can I use with Traffic Manager?](./traffic-manager-faqs.md#what-application-protocols-can-i-use-with-traffic-manager)
-* [Can I use Traffic Manager with a "naked" domain name?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#can-i-use-traffic-manager-with-a-naked-domain-name)
+* [Can I use Traffic Manager with a "naked" domain name?](./traffic-manager-faqs.md#can-i-use-traffic-manager-with-a-naked-domain-name)
-* [Does Traffic Manager consider the client subnet address when handling DNS queries?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#does-traffic-manager-consider-the-client-subnet-address-when-handling-dns-queries)
+* [Does Traffic Manager consider the client subnet address when handling DNS queries?](./traffic-manager-faqs.md#does-traffic-manager-consider-the-client-subnet-address-when-handling-dns-queries)
-* [What is DNS TTL and how does it impact my users?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#what-is-dns-ttl-and-how-does-it-impact-my-users)
+* [What is DNS TTL and how does it impact my users?](./traffic-manager-faqs.md#what-is-dns-ttl-and-how-does-it-impact-my-users)
-* [How high or low can I set the TTL for Traffic Manager responses?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-high-or-low-can-i-set-the-ttl-for-traffic-manager-responses)
+* [How high or low can I set the TTL for Traffic Manager responses?](./traffic-manager-faqs.md#how-high-or-low-can-i-set-the-ttl-for-traffic-manager-responses)
-* [How can I understand the volume of queries coming to my profile?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-can-i-understand-the-volume-of-queries-coming-to-my-profile)
+* [How can I understand the volume of queries coming to my profile?](./traffic-manager-faqs.md#how-can-i-understand-the-volume-of-queries-coming-to-my-profile)
## Next steps
@@ -96,5 +96,4 @@ Learn more about Traffic Manager [traffic routing methods](traffic-manager-routi
<!--Image references-->
[1]: ./media/traffic-manager-how-traffic-manager-works/dns-configuration.png
-[2]: ./media/traffic-manager-how-traffic-manager-works/flow.png
-
+[2]: ./media/traffic-manager-how-traffic-manager-works/flow.png
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-load-balancing-azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-load-balancing-azure.md
@@ -87,11 +87,11 @@ The following diagram shows the architecture of this scenario:
4. Define the virtual network, subnet, front-end IP, and listener configurations for the application gateway. In this scenario, the front-end IP address is **Public**, which allows it to be added as an endpoint to the Traffic Manager profile later on.
5. Configure the listener with one of the following options:
   * If you use HTTP, there is nothing to configure. Click **OK**.
- * If you use HTTPS, further configuration is required. Refer to [Create an application gateway](../application-gateway/application-gateway-create-gateway-portal.md), starting at step 9. When you have completed the configuration, click **OK**.
+ * If you use HTTPS, further configuration is required. Refer to [Create an application gateway](../application-gateway/quick-create-portal.md), starting at step 9. When you have completed the configuration, click **OK**.
#### Configure URL routing for application gateways
-When you choose a back-end pool, an application gateway that's configured with a path-based rule takes a path pattern of the request URL in addition to round-robin distribution. In this scenario, we are adding a path-based rule to direct any URL with "/images/\*" to the image server pool. For more information about configuring URL path-based routing for an application gateway, refer to [Create a path-based rule for an application gateway](../application-gateway/application-gateway-create-url-route-portal.md).
+When you choose a back-end pool, an application gateway that's configured with a path-based rule takes a path pattern of the request URL in addition to round-robin distribution. In this scenario, we are adding a path-based rule to direct any URL with "/images/\*" to the image server pool. For more information about configuring URL path-based routing for an application gateway, refer to [Create a path-based rule for an application gateway](../application-gateway/create-url-route-portal.md).
![Application Gateway web-tier diagram](./media/traffic-manager-load-balancing-azure/web-tier-diagram.png)
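The path-based rule described above — any URL matching "/images/\*" goes to the image server pool, everything else to the default pool — can be sketched as a first-match pattern lookup. This is a hypothetical Python model of the matching behavior, not the Application Gateway engine; the pool names are assumptions for illustration.

```python
# Hypothetical sketch (not the Application Gateway engine): first matching
# path pattern selects the back-end pool; unmatched requests fall through
# to the default pool.
from fnmatch import fnmatch

RULES = [("/images/*", "image-server-pool")]
DEFAULT_POOL = "web-server-pool"

def pick_backend_pool(url_path):
    for pattern, pool in RULES:
        if fnmatch(url_path, pattern):
            return pool
    return DEFAULT_POOL

print(pick_backend_pool("/images/logo.png"))  # image-server-pool
print(pick_backend_pool("/index.html"))       # web-server-pool
```

Within the selected pool, the gateway still applies round-robin distribution, as the paragraph above notes.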
@@ -151,7 +151,7 @@ In this scenario, Load Balancer distributes connections from the web tier to the
If your high-availability database cluster is using SQL Server AlwaysOn, refer to [Configure one or more Always On Availability Group Listeners](../azure-sql/virtual-machines/windows/availability-group-listener-powershell-configure.md) for step-by-step instructions.
-For more information about configuring an internal load balancer, see [Create an Internal load balancer in the Azure portal](../load-balancer/load-balancer-get-started-ilb-arm-portal.md).
+For more information about configuring an internal load balancer, see [Create an Internal load balancer in the Azure portal](../load-balancer/quickstart-load-balancer-standard-internal-portal.md).
1. In the Azure portal, in the left pane, click **Create a resource** > **Networking** > **Load balancer**.
2. Choose a name for your load balancer.
@@ -205,5 +205,5 @@ Now we configure the IP address and load-balancer front-end port in the applicat
## Next steps

* [Overview of Traffic Manager](traffic-manager-overview.md)
-* [Application Gateway overview](../application-gateway/application-gateway-introduction.md)
-* [Azure Load Balancer overview](../load-balancer/load-balancer-overview.md)
+* [Application Gateway overview](../application-gateway/overview.md)
+* [Azure Load Balancer overview](../load-balancer/load-balancer-overview.md)
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-manage-endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-manage-endpoints.md
@@ -40,7 +40,7 @@ You can also disable individual endpoints that are part of a Traffic Manager pro
7. When the addition of both endpoints is complete, they are displayed in the **Traffic Manager profile** blade along with their monitoring status as **Online**.

> [!NOTE]
-> After you add or remove an endpoint from a profile using the *Failover* traffic routing method, the failover priority list may not be ordered the way you want. You can adjust the order of the Failover Priority List on the Configuration page. For more information, see [Configure Failover traffic routing](traffic-manager-configure-failover-routing-method.md).
+> After you add or remove an endpoint from a profile using the *Failover* traffic routing method, the failover priority list may not be ordered the way you want. You can adjust the order of the Failover Priority List on the Configuration page. For more information, see [Configure Failover traffic routing](./traffic-manager-configure-priority-routing-method.md).
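The failover priority list mentioned in the note can be sketched as follows: Traffic Manager answers with the highest-priority endpoint whose monitor status is Online. This is a simplified illustration under assumed field names, not the service's actual implementation.

```python
# Sketch of Failover/Priority endpoint selection: lower priority number wins,
# but only endpoints that are currently Online are considered.
def pick_failover_endpoint(endpoints):
    """endpoints: list of dicts with 'name', 'priority' (lower = preferred), 'online'."""
    healthy = [e for e in endpoints if e["online"]]
    if not healthy:
        return None  # no healthy endpoint to return
    return min(healthy, key=lambda e: e["priority"])["name"]

endpoints = [
    {"name": "primary", "priority": 1, "online": False},
    {"name": "secondary", "priority": 2, "online": True},
]
print(pick_failover_endpoint(endpoints))  # secondary
```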
## To disable an endpoint
@@ -72,8 +72,7 @@ You can also disable individual endpoints that are part of a Traffic Manager pro
## Next steps

* [Manage Traffic Manager profiles](traffic-manager-manage-profiles.md)
-* [Configure routing methods](traffic-manager-configure-routing-method.md)
+* [Configure routing methods](./traffic-manager-configure-priority-routing-method.md)
* [Troubleshooting Traffic Manager degraded state](traffic-manager-troubleshooting-degraded.md)
* [Traffic Manager performance considerations](traffic-manager-performance-considerations.md)
-* [Operations on Traffic Manager (REST API Reference)](https://go.microsoft.com/fwlink/p/?LinkID=313584)
-
+* [Operations on Traffic Manager (REST API Reference)](/previous-versions/azure/reference/hh758255(v=azure.100))
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-manage-profiles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-manage-profiles.md
@@ -65,8 +65,8 @@ You can disable an existing profile so that Traffic Manager does not refer user
## Next steps
-* [Add an endpoint](traffic-manager-endpoints.md)
+* [Add an endpoint](./traffic-manager-manage-endpoints.md)
* [Configure Priority routing method](traffic-manager-configure-priority-routing-method.md)
* [Configure Geographic routing method](traffic-manager-configure-geographic-routing-method.md)
* [Configure Weighted routing method](traffic-manager-configure-weighted-routing-method.md)
-* [Configure Performance routing method](traffic-manager-configure-performance-routing-method.md)
+* [Configure Performance routing method](traffic-manager-configure-performance-routing-method.md)
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-metrics-alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-metrics-alerts.md
@@ -53,11 +53,11 @@ This metric can be shown either as an aggregate value representing the status of
*Figure 4: Split view of endpoint status metrics*
-You can consume these metrics through [Azure Monitor service](../azure-monitor/platform/metrics-supported.md)'s portal, [REST API](https://docs.microsoft.com/rest/api/monitor/), [Azure CLI](https://docs.microsoft.com/cli/azure/monitor), and [Azure PowerShell](https://docs.microsoft.com/powershell/module/az.applicationinsights), or through the metrics section of Traffic Manager's portal experience.
+You can consume these metrics through [Azure Monitor service](../azure-monitor/platform/metrics-supported.md)'s portal, [REST API](/rest/api/monitor/), [Azure CLI](/cli/azure/monitor), and [Azure PowerShell](/powershell/module/az.applicationinsights), or through the metrics section of Traffic Manager's portal experience.
## Alerts on Traffic Manager metrics
-In addition to processing and displaying metrics from Traffic Manager, Azure Monitor enables customers to configure and receive alerts associated with these metrics. You can choose what conditions need to be met in these metrics for an alert to occur, how often those conditions need to be monitored, and how the alerts should be sent to you. For more information, see [Azure Monitor alerts documentation](../monitoring-and-diagnostics/monitor-alerts-unified-usage.md).
+In addition to processing and displaying metrics from Traffic Manager, Azure Monitor enables customers to configure and receive alerts associated with these metrics. You can choose what conditions need to be met in these metrics for an alert to occur, how often those conditions need to be monitored, and how the alerts should be sent to you. For more information, see [Azure Monitor alerts documentation](../azure-monitor/platform/alerts-metric.md).
## Next steps

- Learn more about [Azure Monitor service](../azure-monitor/platform/metrics-supported.md)
-- Learn how to [create a chart using Azure Monitor](../azure-monitor/platform/metrics-getting-started.md#create-your-first-metric-chart)
+- Learn how to [create a chart using Azure Monitor](../azure-monitor/platform/metrics-getting-started.md#create-your-first-metric-chart)
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-monitoring.md
@@ -74,7 +74,7 @@ Endpoint monitor status is a Traffic Manager-generated value that shows the stat
For details about how endpoint monitor status is calculated for nested endpoints, see [nested Traffic Manager profiles](traffic-manager-nested-profiles.md). >[!NOTE]
-> A Stopped Endpoint monitor status can happen on App Service if your web application is not running in the Standard tier or above. For more information, see [Traffic Manager integration with App Service](/azure/app-service/web-sites-traffic-manager).
+> A Stopped Endpoint monitor status can happen on App Service if your web application is not running in the Standard tier or above. For more information, see [Traffic Manager integration with App Service](../app-service/web-sites-traffic-manager.md).
### Profile monitor status
@@ -150,43 +150,43 @@ For more information about troubleshooting failed health checks, see [Troublesho
## FAQs
-* [Is Traffic Manager resilient to Azure region failures?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#is-traffic-manager-resilient-to-azure-region-failures)
+* [Is Traffic Manager resilient to Azure region failures?](./traffic-manager-faqs.md#is-traffic-manager-resilient-to-azure-region-failures)
-* [How does the choice of resource group location affect Traffic Manager?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-does-the-choice-of-resource-group-location-affect-traffic-manager)
+* [How does the choice of resource group location affect Traffic Manager?](./traffic-manager-faqs.md#how-does-the-choice-of-resource-group-location-affect-traffic-manager)
-* [How do I determine the current health of each endpoint?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-do-i-determine-the-current-health-of-each-endpoint)
+* [How do I determine the current health of each endpoint?](./traffic-manager-faqs.md#how-do-i-determine-the-current-health-of-each-endpoint)
-* [Can I monitor HTTPS endpoints?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#can-i-monitor-https-endpoints)
+* [Can I monitor HTTPS endpoints?](./traffic-manager-faqs.md#can-i-monitor-https-endpoints)
-* [Do I use an IP address or a DNS name when adding an endpoint?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#do-i-use-an-ip-address-or-a-dns-name-when-adding-an-endpoint)
+* [Do I use an IP address or a DNS name when adding an endpoint?](./traffic-manager-faqs.md#do-i-use-an-ip-address-or-a-dns-name-when-adding-an-endpoint)
-* [What types of IP addresses can I use when adding an endpoint?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#what-types-of-ip-addresses-can-i-use-when-adding-an-endpoint)
+* [What types of IP addresses can I use when adding an endpoint?](./traffic-manager-faqs.md#what-types-of-ip-addresses-can-i-use-when-adding-an-endpoint)
-* [Can I use different endpoint addressing types within a single profile?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#can-i-use-different-endpoint-addressing-types-within-a-single-profile)
+* [Can I use different endpoint addressing types within a single profile?](./traffic-manager-faqs.md#can-i-use-different-endpoint-addressing-types-within-a-single-profile)
-* [What happens when an incoming query's record type is different from the record type associated with the addressing type of the endpoints?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#what-happens-when-an-incoming-querys-record-type-is-different-from-the-record-type-associated-with-the-addressing-type-of-the-endpoints)
+* [What happens when an incoming query's record type is different from the record type associated with the addressing type of the endpoints?](./traffic-manager-faqs.md#what-happens-when-an-incoming-querys-record-type-is-different-from-the-record-type-associated-with-the-addressing-type-of-the-endpoints)
-* [Can I use a profile with IPv4 / IPv6 addressed endpoints in a nested profile?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#can-i-use-a-profile-with-ipv4--ipv6-addressed-endpoints-in-a-nested-profile)
+* [Can I use a profile with IPv4 / IPv6 addressed endpoints in a nested profile?](./traffic-manager-faqs.md#can-i-use-a-profile-with-ipv4--ipv6-addressed-endpoints-in-a-nested-profile)
-* [I stopped an web application endpoint in my Traffic Manager profile but I am not receiving any traffic even after I restarted it. How can I fix this?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#i-stopped-an-web-application-endpoint-in-my-traffic-manager-profile-but-i-am-not-receiving-any-traffic-even-after-i-restarted-it-how-can-i-fix-this)
+* [I stopped an web application endpoint in my Traffic Manager profile but I am not receiving any traffic even after I restarted it. How can I fix this?](./traffic-manager-faqs.md#i-stopped-an-web-application-endpoint-in-my-traffic-manager-profile-but-i-am-not-receiving-any-traffic-even-after-i-restarted-it-how-can-i-fix-this)
-* [Can I use Traffic Manager even if my application does not have support for HTTP or HTTPS?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#can-i-use-traffic-manager-even-if-my-application-does-not-have-support-for-http-or-https)
+* [Can I use Traffic Manager even if my application does not have support for HTTP or HTTPS?](./traffic-manager-faqs.md#can-i-use-traffic-manager-even-if-my-application-does-not-have-support-for-http-or-https)
-* [What specific responses are required from the endpoint when using TCP monitoring?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#what-specific-responses-are-required-from-the-endpoint-when-using-tcp-monitoring)
+* [What specific responses are required from the endpoint when using TCP monitoring?](./traffic-manager-faqs.md#what-specific-responses-are-required-from-the-endpoint-when-using-tcp-monitoring)
-* [How fast does Traffic Manager move my users away from an unhealthy endpoint?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-fast-does-traffic-manager-move-my-users-away-from-an-unhealthy-endpoint)
+* [How fast does Traffic Manager move my users away from an unhealthy endpoint?](./traffic-manager-faqs.md#how-fast-does-traffic-manager-move-my-users-away-from-an-unhealthy-endpoint)
-* [How can I specify different monitoring settings for different endpoints in a profile?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-can-i-specify-different-monitoring-settings-for-different-endpoints-in-a-profile)
+* [How can I specify different monitoring settings for different endpoints in a profile?](./traffic-manager-faqs.md#how-can-i-specify-different-monitoring-settings-for-different-endpoints-in-a-profile)
-* [How can I assign HTTP headers to the Traffic Manager health checks to my endpoints?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-can-i-assign-http-headers-to-the-traffic-manager-health-checks-to-my-endpoints)
+* [How can I assign HTTP headers to the Traffic Manager health checks to my endpoints?](./traffic-manager-faqs.md#how-can-i-assign-http-headers-to-the-traffic-manager-health-checks-to-my-endpoints)
-* [What host header do endpoint health checks use?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#what-host-header-do-endpoint-health-checks-use)
+* [What host header do endpoint health checks use?](./traffic-manager-faqs.md#what-host-header-do-endpoint-health-checks-use)
-* [What are the IP addresses from which the health checks originate?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#what-are-the-ip-addresses-from-which-the-health-checks-originate)
+* [What are the IP addresses from which the health checks originate?](./traffic-manager-faqs.md#what-are-the-ip-addresses-from-which-the-health-checks-originate)
-* [How many health checks to my endpoint can I expect from Traffic Manager?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-many-health-checks-to-my-endpoint-can-i-expect-from-traffic-manager)
+* [How many health checks to my endpoint can I expect from Traffic Manager?](./traffic-manager-faqs.md#how-many-health-checks-to-my-endpoint-can-i-expect-from-traffic-manager)
-* [How can I get notified if one of my endpoints goes down?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-can-i-get-notified-if-one-of-my-endpoints-goes-down)
+* [How can I get notified if one of my endpoints goes down?](./traffic-manager-faqs.md#how-can-i-get-notified-if-one-of-my-endpoints-goes-down)
## Next steps
@@ -196,4 +196,4 @@ Learn more about the [traffic-routing methods](traffic-manager-routing-methods.m
Learn how to [create a Traffic Manager profile](traffic-manager-manage-profiles.md)
-[Troubleshoot Degraded status](traffic-manager-troubleshooting-degraded.md) on a Traffic Manager endpoint
+[Troubleshoot Degraded status](traffic-manager-troubleshooting-degraded.md) on a Traffic Manager endpoint
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-nested-profiles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-nested-profiles.md
@@ -92,23 +92,23 @@ The monitoring settings in a Traffic Manager profile apply to all endpoints with
## FAQs
-* [How do I configure nested profiles?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#traffic-manager-nested-profiles)
+* [How do I configure nested profiles?](./traffic-manager-faqs.md#traffic-manager-nested-profiles)
-* [How many layers of nesting does Traffic Manger support?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-many-layers-of-nesting-does-traffic-manger-support)
+* [How many layers of nesting does Traffic Manger support?](./traffic-manager-faqs.md#how-many-layers-of-nesting-does-traffic-manger-support)
-* [Can I mix other endpoint types with nested child profiles, in the same Traffic Manager profile?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#can-i-mix-other-endpoint-types-with-nested-child-profiles-in-the-same-traffic-manager-profile)
+* [Can I mix other endpoint types with nested child profiles, in the same Traffic Manager profile?](./traffic-manager-faqs.md#can-i-mix-other-endpoint-types-with-nested-child-profiles-in-the-same-traffic-manager-profile)
-* [How does the billing model apply for Nested profiles?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-does-the-billing-model-apply-for-nested-profiles)
+* [How does the billing model apply for Nested profiles?](./traffic-manager-faqs.md#how-does-the-billing-model-apply-for-nested-profiles)
-* [Is there a performance impact for nested profiles?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#is-there-a-performance-impact-for-nested-profiles)
+* [Is there a performance impact for nested profiles?](./traffic-manager-faqs.md#is-there-a-performance-impact-for-nested-profiles)
-* [How does Traffic Manager compute the health of a nested endpoint in a parent profile?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-does-traffic-manager-compute-the-health-of-a-nested-endpoint-in-a-parent-profile)
+* [How does Traffic Manager compute the health of a nested endpoint in a parent profile?](./traffic-manager-faqs.md#how-does-traffic-manager-compute-the-health-of-a-nested-endpoint-in-a-parent-profile)
## Next steps

Learn more about [Traffic Manager profiles](traffic-manager-overview.md)
-Learn how to [create a Traffic Manager profile](traffic-manager-create-profile.md)
+Learn how to [create a Traffic Manager profile](./quickstart-create-traffic-manager-profile.md)
<!--Image references-->
[1]: ./media/traffic-manager-nested-profiles/figure-1.png
@@ -120,4 +120,4 @@ Learn how to [create a Traffic Manager profile](traffic-manager-create-profile.m
[7]: ./media/traffic-manager-nested-profiles/figure-7.png
[8]: ./media/traffic-manager-nested-profiles/figure-8.png
[9]: ./media/traffic-manager-nested-profiles/figure-9.png
-[10]: ./media/traffic-manager-nested-profiles/figure-10.png
+[10]: ./media/traffic-manager-nested-profiles/figure-10.png
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-overview.md
@@ -20,9 +20,9 @@ Azure Traffic Manager is a DNS-based traffic load balancer that enables you to d
Traffic Manager uses DNS to direct client requests to the most appropriate service endpoint based on a traffic-routing method and the health of the endpoints. An endpoint is any Internet-facing service hosted inside or outside of Azure. Traffic Manager provides a range of [traffic-routing methods](traffic-manager-routing-methods.md) and [endpoint monitoring options](traffic-manager-monitoring.md) to suit different application needs and automatic failover models. Traffic Manager is resilient to failure, including the failure of an entire Azure region. >[!NOTE]
-> Azure provides a suite of fully managed load-balancing solutions for your scenarios. If you are looking for Transport Layer Security (TLS) protocol termination ("SSL offload") or per-HTTP/HTTPS request, application-layer processing, review [Application Gateway](../application-gateway/application-gateway-introduction.md). If you are looking for regional load balancing, review [Load Balancer](../load-balancer/load-balancer-overview.md). Your end-to-end scenarios might benefit from combining these solutions as needed.
+> Azure provides a suite of fully managed load-balancing solutions for your scenarios. If you are looking for Transport Layer Security (TLS) protocol termination ("SSL offload") or per-HTTP/HTTPS request, application-layer processing, review [Application Gateway](../application-gateway/overview.md). If you are looking for regional load balancing, review [Load Balancer](../load-balancer/load-balancer-overview.md). Your end-to-end scenarios might benefit from combining these solutions as needed.
>
-> For an Azure load-balancing options comparison, see [Overview of load-balancing options in Azure](https://docs.microsoft.com/azure/architecture/guide/technology-choices/load-balancing-overview).
+> For an Azure load-balancing options comparison, see [Overview of load-balancing options in Azure](/azure/architecture/guide/technology-choices/load-balancing-overview).
Traffic Manager offers the following features:
@@ -53,10 +53,6 @@ For pricing information, see [Traffic Manager Pricing](https://azure.microsoft.c
## Next steps

-- Learn how to [create a Traffic Manager profile](traffic-manager-create-profile.md).
+- Learn how to [create a Traffic Manager profile](./quickstart-create-traffic-manager-profile.md).
- Learn [how Traffic Manager Works](traffic-manager-how-it-works.md).
-- View [frequently asked questions](traffic-manager-FAQs.md) about Traffic Manager.
-
-
-
-
+- View [frequently asked questions](traffic-manager-FAQs.md) about Traffic Manager.
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-performance-considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-performance-considerations.md
@@ -75,7 +75,6 @@ The tools on these sites measure DNS latencies and display the resolved IP addre
[Test your Traffic Manager settings](traffic-manager-testing-settings.md)
-[Operations on Traffic Manager (REST API Reference)](https://go.microsoft.com/fwlink/?LinkId=313584)
-
-[Azure Traffic Manager Cmdlets](https://docs.microsoft.com/powershell/module/az.trafficmanager)
+[Operations on Traffic Manager (REST API Reference)](/previous-versions/azure/reference/hh758255(v=azure.100))
+[Azure Traffic Manager Cmdlets](/powershell/module/az.trafficmanager)
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-point-internet-domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-point-internet-domain.md
@@ -28,5 +28,5 @@ All traffic requests to *www\.contoso.com* get directed to *contoso.trafficmanag
## Next steps * [Traffic Manager routing methods](traffic-manager-routing-methods.md)
-* [Traffic Manager - Disable, enable or delete a profile](disable-enable-or-delete-a-profile.md)
-* [Traffic Manager - Disable or enable an endpoint](disable-or-enable-an-endpoint.md)
+* [Traffic Manager - Disable, enable or delete a profile](./traffic-manager-manage-profiles.md)
+* [Traffic Manager - Disable or enable an endpoint](./traffic-manager-manage-endpoints.md)
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-routing-methods https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-routing-methods.md
@@ -120,36 +120,36 @@ As explained in [How Traffic Manager Works](traffic-manager-how-it-works.md), Tr
### FAQs
-* [What are some use cases where geographic routing is useful?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#what-are-some-use-cases-where-geographic-routing-is-useful)
+* [What are some use cases where geographic routing is useful?](./traffic-manager-faqs.md#what-are-some-use-cases-where-geographic-routing-is-useful)
-* [How do I decide if I should use Performance routing method or Geographic routing method?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-do-i-decide-if-i-should-use-performance-routing-method-or-geographic-routing-method)
+* [How do I decide if I should use Performance routing method or Geographic routing method?](./traffic-manager-faqs.md#how-do-i-decide-if-i-should-use-performance-routing-method-or-geographic-routing-method)
-* [What are the regions that are supported by Traffic Manager for geographic routing?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#what-are-the-regions-that-are-supported-by-traffic-manager-for-geographic-routing)
+* [What are the regions that are supported by Traffic Manager for geographic routing?](./traffic-manager-faqs.md#what-are-the-regions-that-are-supported-by-traffic-manager-for-geographic-routing)
-* [How does traffic manager determine where a user is querying from?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-does-traffic-manager-determine-where-a-user-is-querying-from)
+* [How does traffic manager determine where a user is querying from?](./traffic-manager-faqs.md#how-does-traffic-manager-determine-where-a-user-is-querying-from)
-* [Is it guaranteed that Traffic Manager can correctly determine the exact geographic location of the user in every case?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#is-it-guaranteed-that-traffic-manager-can-correctly-determine-the-exact-geographic-location-of-the-user-in-every-case)
+* [Is it guaranteed that Traffic Manager can correctly determine the exact geographic location of the user in every case?](./traffic-manager-faqs.md#is-it-guaranteed-that-traffic-manager-can-correctly-determine-the-exact-geographic-location-of-the-user-in-every-case)
-* [Does an endpoint need to be physically located in the same region as the one it is configured with for geographic routing?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#does-an-endpoint-need-to-be-physically-located-in-the-same-region-as-the-one-it-is-configured-with-for-geographic-routing)
+* [Does an endpoint need to be physically located in the same region as the one it is configured with for geographic routing?](./traffic-manager-faqs.md#does-an-endpoint-need-to-be-physically-located-in-the-same-region-as-the-one-it-is-configured-with-for-geographic-routing)
-* [Can I assign geographic regions to endpoints in a profile that is not configured to do geographic routing?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#can-i-assign-geographic-regions-to-endpoints-in-a-profile-that-is-not-configured-to-do-geographic-routing)
+* [Can I assign geographic regions to endpoints in a profile that is not configured to do geographic routing?](./traffic-manager-faqs.md#can-i-assign-geographic-regions-to-endpoints-in-a-profile-that-is-not-configured-to-do-geographic-routing)
-* [Why am I getting an error when I try to change the routing method of an existing profile to Geographic?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#why-am-i-getting-an-error-when-i-try-to-change-the-routing-method-of-an-existing-profile-to-geographic)
+* [Why am I getting an error when I try to change the routing method of an existing profile to Geographic?](./traffic-manager-faqs.md#why-am-i-getting-an-error-when-i-try-to-change-the-routing-method-of-an-existing-profile-to-geographic)
-* [Why is it strongly recommended that customers create nested profiles instead of endpoints under a profile with geographic routing enabled?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#why-is-it-strongly-recommended-that-customers-create-nested-profiles-instead-of-endpoints-under-a-profile-with-geographic-routing-enabled)
+* [Why is it strongly recommended that customers create nested profiles instead of endpoints under a profile with geographic routing enabled?](./traffic-manager-faqs.md#why-is-it-strongly-recommended-that-customers-create-nested-profiles-instead-of-endpoints-under-a-profile-with-geographic-routing-enabled)
-* [Are there any restrictions on the API version that supports this routing type?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#are-there-any-restrictions-on-the-api-version-that-supports-this-routing-type)
+* [Are there any restrictions on the API version that supports this routing type?](./traffic-manager-faqs.md#are-there-any-restrictions-on-the-api-version-that-supports-this-routing-type)
## <a name = "multivalue"></a>Multivalue traffic-routing method

The **Multivalue** traffic-routing method allows you to get multiple healthy endpoints in a single DNS query response. This enables the caller to do client-side retries with other endpoints in the event of a returned endpoint being unresponsive. This pattern can increase the availability of a service and reduce the latency associated with a new DNS query to obtain a healthy endpoint. MultiValue routing works only if all the endpoints are of type 'External' and are specified as IPv4 or IPv6 addresses. When a query is received for this profile, all healthy endpoints are returned, subject to a configurable maximum return count.

### FAQs
-* [What are some use cases where MultiValue routing is useful?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#what-are-some-use-cases-where-multivalue-routing-is-useful)
+* [What are some use cases where MultiValue routing is useful?](./traffic-manager-faqs.md#what-are-some-use-cases-where-multivalue-routing-is-useful)
-* [How many endpoints are returned when MultiValue routing is used?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-many-endpoints-are-returned-when-multivalue-routing-is-used)
+* [How many endpoints are returned when MultiValue routing is used?](./traffic-manager-faqs.md#how-many-endpoints-are-returned-when-multivalue-routing-is-used)
-* [Will I get the same set of endpoints when MultiValue routing is used?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#will-i-get-the-same-set-of-endpoints-when-multivalue-routing-is-used)
+* [Will I get the same set of endpoints when MultiValue routing is used?](./traffic-manager-faqs.md#will-i-get-the-same-set-of-endpoints-when-multivalue-routing-is-used)
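The MultiValue behavior described above can be sketched as: return every healthy endpoint, capped at the configured maximum return count. A minimal illustration with made-up addresses; the real behavior lives in Traffic Manager's DNS responses.

```python
# Sketch of MultiValue routing: all healthy IPv4/IPv6 endpoints are returned
# in one DNS answer, up to a configurable maximum return count.
def multivalue_answer(endpoints, max_return):
    """endpoints: list of (ip, healthy) pairs; returns up to max_return healthy IPs."""
    healthy = [ip for ip, ok in endpoints if ok]
    return healthy[:max_return]

eps = [
    ("203.0.113.10", True),
    ("203.0.113.11", False),  # unhealthy endpoints are never returned
    ("203.0.113.12", True),
    ("203.0.113.13", True),
]
print(multivalue_answer(eps, 2))  # ['203.0.113.10', '203.0.113.12']
```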
## <a name = "subnet"></a>Subnet traffic-routing method

The **Subnet** traffic-routing method allows you to map a set of end-user IP address ranges to specific endpoints in a profile. When Traffic Manager receives a DNS query for that profile, it inspects the source IP address of the request (in most cases, the outgoing IP address of the DNS resolver used by the caller), determines which endpoint that address is mapped to, and returns that endpoint in the query response.
@@ -161,21 +161,17 @@ Subnet routing can be used to deliver a different experience for users connectin
### FAQs
-* [What are some use cases where subnet routing is useful?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#what-are-some-use-cases-where-subnet-routing-is-useful)
+* [What are some use cases where subnet routing is useful?](./traffic-manager-faqs.md#what-are-some-use-cases-where-subnet-routing-is-useful)
-* [How does Traffic Manager know the IP address of the end user?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-does-traffic-manager-know-the-ip-address-of-the-end-user)
+* [How does Traffic Manager know the IP address of the end user?](./traffic-manager-faqs.md#how-does-traffic-manager-know-the-ip-address-of-the-end-user)
-* [How can I specify IP addresses when using Subnet routing?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-can-i-specify-ip-addresses-when-using-subnet-routing)
+* [How can I specify IP addresses when using Subnet routing?](./traffic-manager-faqs.md#how-can-i-specify-ip-addresses-when-using-subnet-routing)
-* [How can I specify a fallback endpoint when using Subnet routing?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-can-i-specify-a-fallback-endpoint-when-using-subnet-routing)
+* [How can I specify a fallback endpoint when using Subnet routing?](./traffic-manager-faqs.md#how-can-i-specify-a-fallback-endpoint-when-using-subnet-routing)
-* [What happens if an endpoint is disabled in a Subnet routing type profile?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#what-happens-if-an-endpoint-is-disabled-in-a-subnet-routing-type-profile)
+* [What happens if an endpoint is disabled in a Subnet routing type profile?](./traffic-manager-faqs.md#what-happens-if-an-endpoint-is-disabled-in-a-subnet-routing-type-profile)
## Next steps
-Learn how to develop high-availability applications using [Traffic Manager endpoint monitoring](traffic-manager-monitoring.md)
----
+Learn how to develop high-availability applications using [Traffic Manager endpoint monitoring](traffic-manager-monitoring.md)
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-rum-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-rum-overview.md
@@ -32,48 +32,47 @@ When you use Real User Measurements, you are billed based on the number of measu
## FAQs
-* [What are the benefits of using Real User Measurements?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#what-are-the-benefits-of-using-real-user-measurements)
+* [What are the benefits of using Real User Measurements?](./traffic-manager-faqs.md#what-are-the-benefits-of-using-real-user-measurements)
-* [Can I use Real User Measurements with non-Azure regions?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#can-i-use-real-user-measurements-with-non-azure-regions)
+* [Can I use Real User Measurements with non-Azure regions?](./traffic-manager-faqs.md#can-i-use-real-user-measurements-with-non-azure-regions)
-* [Which routing method benefits from Real User Measurements?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#which-routing-method-benefits-from-real-user-measurements)
+* [Which routing method benefits from Real User Measurements?](./traffic-manager-faqs.md#which-routing-method-benefits-from-real-user-measurements)
-* [Do I need to enable Real User Measurements each profile separately?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#do-i-need-to-enable-real-user-measurements-each-profile-separately)
+* [Do I need to enable Real User Measurements each profile separately?](./traffic-manager-faqs.md#do-i-need-to-enable-real-user-measurements-each-profile-separately)
-* [How do I turn off Real User Measurements for my subscription?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-do-i-turn-off-real-user-measurements-for-my-subscription)
+* [How do I turn off Real User Measurements for my subscription?](./traffic-manager-faqs.md#how-do-i-turn-off-real-user-measurements-for-my-subscription)
-* [Can I use Real User Measurements with client applications other than web pages?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#can-i-use-real-user-measurements-with-client-applications-other-than-web-pages)
+* [Can I use Real User Measurements with client applications other than web pages?](./traffic-manager-faqs.md#can-i-use-real-user-measurements-with-client-applications-other-than-web-pages)
-* [How many measurements are made each time my Real User Measurements enabled web page is rendered?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-many-measurements-are-made-each-time-my-real-user-measurements-enabled-web-page-is-rendered)
+* [How many measurements are made each time my Real User Measurements enabled web page is rendered?](./traffic-manager-faqs.md#how-many-measurements-are-made-each-time-my-real-user-measurements-enabled-web-page-is-rendered)
-* [Is there a delay before Real User Measurements script runs in my webpage?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#is-there-a-delay-before-real-user-measurements-script-runs-in-my-webpage)
+* [Is there a delay before Real User Measurements script runs in my webpage?](./traffic-manager-faqs.md#is-there-a-delay-before-real-user-measurements-script-runs-in-my-webpage)
-* [Can I use Real User Measurements with only the Azure regions I want to measure?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#can-i-use-real-user-measurements-with-only-the-azure-regions-i-want-to-measure)
+* [Can I use Real User Measurements with only the Azure regions I want to measure?](./traffic-manager-faqs.md#can-i-use-real-user-measurements-with-only-the-azure-regions-i-want-to-measure)
-* [Can I limit the number of measurements made to a specific number?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#can-i-limit-the-number-of-measurements-made-to-a-specific-number)
+* [Can I limit the number of measurements made to a specific number?](./traffic-manager-faqs.md#can-i-limit-the-number-of-measurements-made-to-a-specific-number)
-* [Can I see the measurements taken by my client application as part of Real User Measurements?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#can-i-see-the-measurements-taken-by-my-client-application-as-part-of-real-user-measurements)
+* [Can I see the measurements taken by my client application as part of Real User Measurements?](./traffic-manager-faqs.md#can-i-see-the-measurements-taken-by-my-client-application-as-part-of-real-user-measurements)
-* [Can I modify the measurement script provided by Traffic Manager?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#can-i-modify-the-measurement-script-provided-by-traffic-manager)
+* [Can I modify the measurement script provided by Traffic Manager?](./traffic-manager-faqs.md#can-i-modify-the-measurement-script-provided-by-traffic-manager)
-* [Will it be possible for others to see the key I use with Real User Measurements?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#will-it-be-possible-for-others-to-see-the-key-i-use-with-real-user-measurements)
+* [Will it be possible for others to see the key I use with Real User Measurements?](./traffic-manager-faqs.md#will-it-be-possible-for-others-to-see-the-key-i-use-with-real-user-measurements)
-* [Can others abuse my RUM key?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#can-others-abuse-my-rum-key)
+* [Can others abuse my RUM key?](./traffic-manager-faqs.md#can-others-abuse-my-rum-key)
-* [Do I need to put the measurement JavaScript in all my web pages?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#do-i-need-to-put-the-measurement-javascript-in-all-my-web-pages)
+* [Do I need to put the measurement JavaScript in all my web pages?](./traffic-manager-faqs.md#do-i-need-to-put-the-measurement-javascript-in-all-my-web-pages)
-* [Can information about my end users be identified by Traffic Manager if I use Real User Measurements?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#can-information-about-my-end-users-be-identified-by-traffic-manager-if-i-use-real-user-measurements)
+* [Can information about my end users be identified by Traffic Manager if I use Real User Measurements?](./traffic-manager-faqs.md#can-information-about-my-end-users-be-identified-by-traffic-manager-if-i-use-real-user-measurements)
-* [Does the webpage measuring Real User Measurements need to be using Traffic Manager for routing?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#does-the-webpage-measuring-real-user-measurements-need-to-be-using-traffic-manager-for-routing)
+* [Does the webpage measuring Real User Measurements need to be using Traffic Manager for routing?](./traffic-manager-faqs.md#does-the-webpage-measuring-real-user-measurements-need-to-be-using-traffic-manager-for-routing)
-* [Do I need to host any service on Azure regions to use with Real User Measurements?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#do-i-need-to-host-any-service-on-azure-regions-to-use-with-real-user-measurements)
+* [Do I need to host any service on Azure regions to use with Real User Measurements?](./traffic-manager-faqs.md#do-i-need-to-host-any-service-on-azure-regions-to-use-with-real-user-measurements)
-* [Will my Azure bandwidth usage increase when I use Real User Measurements?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#will-my-azure-bandwidth-usage-increase-when-i-use-real-user-measurements)
+* [Will my Azure bandwidth usage increase when I use Real User Measurements?](./traffic-manager-faqs.md#will-my-azure-bandwidth-usage-increase-when-i-use-real-user-measurements)
## Next steps - Learn how to use [Real User Measurements with web pages](traffic-manager-create-rum-web-pages.md) - Learn [how Traffic Manager works](traffic-manager-overview.md)-- Learn more about [Mobile Center](https://docs.microsoft.com/mobile-center/)
+- Learn more about [Mobile Center](/mobile-center/)
- Learn more about the [traffic-routing methods](traffic-manager-routing-methods.md) supported by Traffic Manager-- Learn how to [create a Traffic Manager profile](traffic-manager-create-profile.md)-
+- Learn how to [create a Traffic Manager profile](./quickstart-create-traffic-manager-profile.md)
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-subnet-override-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-subnet-override-cli.md
@@ -35,7 +35,7 @@ To create a Traffic Manager subnet override, you can use Azure CLI to add the su
- This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Update the Traffic Manager endpoint with subnet override
-Use Azure CLI to update your endpoint with [az network traffic-manager endpoint update](https://docs.microsoft.com/cli/azure/network/traffic-manager/endpoint?view=azure-cli-latest#az-network-traffic-manager-endpoint-update).
+Use Azure CLI to update your endpoint with [az network traffic-manager endpoint update](/cli/azure/network/traffic-manager/endpoint?view=azure-cli-latest#az-network-traffic-manager-endpoint-update).
```azurecli-interactive ### Add a range of IPs ###
@@ -55,7 +55,7 @@ az network traffic-manager endpoint update \
--type AzureEndpoints ```
-You can remove the IP address ranges by running the [az network traffic-manager endpoint update](https://docs.microsoft.com/cli/azure/network/traffic-manager/endpoint?view=azure-cli-latest#az-network-traffic-manager-endpoint-update) with the **--remove** option.
+You can remove the IP address ranges by running the [az network traffic-manager endpoint update](/cli/azure/network/traffic-manager/endpoint?view=azure-cli-latest#az-network-traffic-manager-endpoint-update) with the **--remove** option.
```azurecli-interactive az network traffic-manager endpoint update \
@@ -70,4 +70,4 @@ az network traffic-manager endpoint update \
Learn more about Traffic Manager [traffic routing methods](traffic-manager-routing-methods.md).
-Learn about the [Subnet traffic-routing method](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-routing-methods#subnet-traffic-routing-method)
+Learn about the [Subnet traffic-routing method](./traffic-manager-routing-methods.md#subnet-traffic-routing-method)
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-subnet-override-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-subnet-override-powershell.md
@@ -40,7 +40,7 @@ If you run PowerShell from your computer, you need the Azure PowerShell module,
1. **Retrieve the Traffic Manager endpoint:**
- To enable the subnet override, retrieve the endpoint you wish to add the override to and store it in a variable using [Get-AzTrafficManagerEndpoint](https://docs.microsoft.com/powershell/module/az.trafficmanager/get-aztrafficmanagerendpoint?view=azps-2.5.0).
+ To enable the subnet override, retrieve the endpoint you wish to add the override to and store it in a variable using [Get-AzTrafficManagerEndpoint](/powershell/module/az.trafficmanager/get-aztrafficmanagerendpoint?view=azps-2.5.0).
Replace the Name, ProfileName, and ResourceGroupName with the values of the endpoint that you're changing.
@@ -51,7 +51,7 @@ If you run PowerShell from your computer, you need the Azure PowerShell module,
``` 2. **Add the IP address range to the endpoint:**
- To add the IP address range to the endpoint, you'll use [Add-AzTrafficManagerIpAddressRange](https://docs.microsoft.com/powershell/module/az.trafficmanager/add-aztrafficmanageripaddressrange?view=azps-2.5.0&viewFallbackFrom=azps-2.4.0) to add the range.
+ To add the IP address range to the endpoint, you'll use [Add-AzTrafficManagerIpAddressRange](/powershell/module/az.trafficmanager/add-aztrafficmanageripaddressrange?view=azps-2.5.0&viewFallbackFrom=azps-2.4.0) to add the range.
```powershell
@@ -65,18 +65,18 @@ If you run PowerShell from your computer, you need the Azure PowerShell module,
Add-AzTrafficManagerIPAddressRange -TrafficManagerEndpoint $TrafficManagerEndpoint -First "12.13.14.0" -Last "12.13.14.31" -Scope 27 ```
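The `-First`/`-Last`/`-Scope` arguments above describe the block 12.13.14.0 through 12.13.14.31 as a /27. As a quick, illustrative sanity check of that arithmetic using Python's standard `ipaddress` module:

```python
# Check that -First 12.13.14.0 -Last 12.13.14.31 -Scope 27 corresponds
# to a single /27 block (32 addresses).
import ipaddress

net = ipaddress.ip_network("12.13.14.0/27")
print(net.network_address)    # → 12.13.14.0
print(net.broadcast_address)  # → 12.13.14.31
print(net.num_addresses)      # → 32
```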
- Once the ranges are added, use [Set-AzTrafficManagerEndpoint](https://docs.microsoft.com/powershell/module/az.trafficmanager/set-aztrafficmanagerendpoint?view=azps-2.5.0) to update the endpoint.
+ Once the ranges are added, use [Set-AzTrafficManagerEndpoint](/powershell/module/az.trafficmanager/set-aztrafficmanagerendpoint?view=azps-2.5.0) to update the endpoint.
```powershell Set-AzTrafficManagerEndpoint -TrafficManagerEndpoint $TrafficManagerEndpoint ```
-Removal of the IP address range can be completed by using [Remove-AzTrafficManagerIpAddressRange](https://docs.microsoft.com/powershell/module/az.trafficmanager/remove-aztrafficmanageripaddressrange?view=azps-2.5.0).
+Removal of the IP address range can be completed by using [Remove-AzTrafficManagerIpAddressRange](/powershell/module/az.trafficmanager/remove-aztrafficmanageripaddressrange?view=azps-2.5.0).
1. **Retrieve the Traffic Manager endpoint:**
- To enable the subnet override, retrieve the endpoint you wish to add the override to and store it in a variable using [Get-AzTrafficManagerEndpoint](https://docs.microsoft.com/powershell/module/az.trafficmanager/get-aztrafficmanagerendpoint?view=azps-2.5.0).
+ To enable the subnet override, retrieve the endpoint you wish to add the override to and store it in a variable using [Get-AzTrafficManagerEndpoint](/powershell/module/az.trafficmanager/get-aztrafficmanagerendpoint?view=azps-2.5.0).
Replace the Name, ProfileName, and ResourceGroupName with the values of the endpoint that you're changing.
@@ -99,7 +99,7 @@ Removal of the IP address range can be completed by using [Remove-AzTrafficManag
Remove-AzTrafficManagerIpAddressRange -TrafficManagerEndpoint $TrafficManagerEndpoint -First "12.13.14.0" -Last "12.13.14.31" -Scope 27 ```
- Once the ranges are removed, use [Set-AzTrafficManagerEndpoint](https://docs.microsoft.com/powershell/module/az.trafficmanager/set-aztrafficmanagerendpoint?view=azps-2.5.0) to update the endpoint.
+ Once the ranges are removed, use [Set-AzTrafficManagerEndpoint](/powershell/module/az.trafficmanager/set-aztrafficmanagerendpoint?view=azps-2.5.0) to update the endpoint.
```powershell
@@ -110,4 +110,4 @@ Removal of the IP address range can be completed by using [Remove-AzTrafficManag
## Next steps Learn more about Traffic Manager [traffic routing methods](traffic-manager-routing-methods.md).
-Learn about the [Subnet traffic-routing method](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-routing-methods#subnet-traffic-routing-method)
+Learn about the [Subnet traffic-routing method](./traffic-manager-routing-methods.md#subnet-traffic-routing-method)
\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-traffic-view-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-traffic-view-overview.md
@@ -67,29 +67,29 @@ When you use Traffic View, you are billed based on the number of data points use
## FAQs
-* [What does Traffic View do?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#what-does-traffic-view-do)
+* [What does Traffic View do?](./traffic-manager-faqs.md#what-does-traffic-view-do)
-* [How can I benefit from using Traffic View?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-can-i-benefit-from-using-traffic-view)
+* [How can I benefit from using Traffic View?](./traffic-manager-faqs.md#how-can-i-benefit-from-using-traffic-view)
-* [How is Traffic View different from the Traffic Manager metrics available through Azure monitor?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-is-traffic-view-different-from-the-traffic-manager-metrics-available-through-azure-monitor)
+* [How is Traffic View different from the Traffic Manager metrics available through Azure monitor?](./traffic-manager-faqs.md#how-is-traffic-view-different-from-the-traffic-manager-metrics-available-through-azure-monitor)
-* [Does Traffic View use EDNS Client Subnet information?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#does-traffic-view-use-edns-client-subnet-information)
+* [Does Traffic View use EDNS Client Subnet information?](./traffic-manager-faqs.md#does-traffic-view-use-edns-client-subnet-information)
-* [How many days of data does Traffic View use?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-many-days-of-data-does-traffic-view-use)
+* [How many days of data does Traffic View use?](./traffic-manager-faqs.md#how-many-days-of-data-does-traffic-view-use)
-* [How does Traffic View handle external endpoints?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-does-traffic-view-handle-external-endpoints)
+* [How does Traffic View handle external endpoints?](./traffic-manager-faqs.md#how-does-traffic-view-handle-external-endpoints)
-* [Do I need to enable Traffic View for each profile in my subscription?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#do-i-need-to-enable-traffic-view-for-each-profile-in-my-subscription)
+* [Do I need to enable Traffic View for each profile in my subscription?](./traffic-manager-faqs.md#do-i-need-to-enable-traffic-view-for-each-profile-in-my-subscription)
-* [How can I turn off Traffic View?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-can-i-turn-off-traffic-view)
+* [How can I turn off Traffic View?](./traffic-manager-faqs.md#how-can-i-turn-off-traffic-view)
-* [How does Traffic View billing work?](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-faqs#how-does-traffic-view-billing-work)
+* [How does Traffic View billing work?](./traffic-manager-faqs.md#how-does-traffic-view-billing-work)
## Next steps - Learn [how Traffic Manager works](traffic-manager-overview.md) - Learn more about the [traffic-routing methods](traffic-manager-routing-methods.md) supported by Traffic Manager-- Learn how to [create a Traffic Manager profile](traffic-manager-create-profile.md)
+- Learn how to [create a Traffic Manager profile](./quickstart-create-traffic-manager-profile.md)
<!--Image references--> [1]: ./media/traffic-manager-traffic-view-overview/trafficview.png\ No newline at end of file
traffic-manager https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-troubleshooting-degraded https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-troubleshooting-degraded.md
@@ -16,7 +16,7 @@ ms.author: duau
# Troubleshooting degraded state on Azure Traffic Manager
-This article describes how to troubleshoot an Azure Traffic Manager profile that is showing a degraded status. As a first step in troubleshooting a Azure Traffic Manager degraded state is to enable logging. Refer to [Enable resource logs](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-diagnostic-logs) for more information. For this scenario, consider that you have configured a Traffic Manager profile pointing to some of your cloudapp.net hosted services. If the health of your Traffic Manager displays a **Degraded** status, then the status of one or more endpoints may be **Degraded**:
+This article describes how to troubleshoot an Azure Traffic Manager profile that is showing a degraded status. The first step in troubleshooting an Azure Traffic Manager degraded state is to enable logging. Refer to [Enable resource logs](./traffic-manager-diagnostic-logs.md) for more information. For this scenario, consider that you have configured a Traffic Manager profile pointing to some of your cloudapp.net hosted services. If the health of your Traffic Manager displays a **Degraded** status, then the status of one or more endpoints may be **Degraded**:
![degraded endpoint status](./media/traffic-manager-troubleshooting-degraded/traffic-manager-degradedifonedegraded.png)
@@ -26,8 +26,8 @@ If the health of your Traffic Manager displays an **Inactive** status, then both
## Understanding Traffic Manager probes
-* Traffic Manager considers an endpoint to be ONLINE only when the probe receives an HTTP 200 response back from the probe path. If you application returns any other HTTP response code you should add that response code to [Expected status code ranges](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-monitoring#configure-endpoint-monitoring) of your Traffic Manager profile.
-* A 30x redirect response is treated as failure unless you have specified this as a valid response code in [Expected status code ranges](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-monitoring#configure-endpoint-monitoring) of your Traffic Manager profile. Traffic Manager does not probe the redirection target.
+* Traffic Manager considers an endpoint to be ONLINE only when the probe receives an HTTP 200 response back from the probe path. If your application returns any other HTTP response code, you should add that response code to [Expected status code ranges](./traffic-manager-monitoring.md#configure-endpoint-monitoring) of your Traffic Manager profile.
+* A 30x redirect response is treated as failure unless you have specified this as a valid response code in [Expected status code ranges](./traffic-manager-monitoring.md#configure-endpoint-monitoring) of your Traffic Manager profile. Traffic Manager does not probe the redirection target.
* For HTTPS probes, certificate errors are ignored. * The actual content of the probe path doesn't matter, as long as a 200 is returned. Probing a URL to some static content like "/favicon.ico" is a common technique. Dynamic content, like the ASP pages, may not always return 200, even when the application is healthy. * A best practice is to set the probe path to something that has enough logic to determine whether the site is up or down. In the previous example, by setting the path to "/favicon.ico", you are only testing that w3wp.exe is responding. This probe may not indicate that your web application is healthy. A better option would be to set the path to something such as "/Probe.aspx" that has logic to determine the health of the site. For example, you could use performance counters to measure CPU utilization or the number of failed requests. Or you could attempt to access database resources or session state to make sure that the web application is working.
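The health decision described in the bullets above can be sketched as a simple status-code check against the profile's expected ranges. This is an illustrative model only (the function name and shape are hypothetical, not a Traffic Manager API):

```python
# Sketch: decide whether a probe response counts as healthy, given the
# profile's expected status code ranges. By default only HTTP 200 is
# healthy; 30x redirects fail unless explicitly listed, and the
# redirect target is never probed.
def probe_is_healthy(status_code: int, expected_ranges=((200, 200),)) -> bool:
    """Return True if status_code falls inside any (low, high) range."""
    return any(lo <= status_code <= hi for lo, hi in expected_ranges)

print(probe_is_healthy(200))                            # → True (default)
print(probe_is_healthy(302))                            # → False (redirect)
print(probe_is_healthy(302, ((200, 299), (301, 302))))  # → True (configured)
```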
@@ -82,12 +82,12 @@ public class TrustAllCertsPolicy : ICertificatePolicy {
[What is Traffic Manager](traffic-manager-overview.md)
-[Cloud Services](https://go.microsoft.com/fwlink/?LinkId=314074)
+[Cloud Services](/previous-versions/azure/jj155995(v=azure.100))
[Azure App Service](https://azure.microsoft.com/documentation/services/app-service/web/)
-[Operations on Traffic Manager (REST API Reference)](https://go.microsoft.com/fwlink/?LinkId=313584)
+[Operations on Traffic Manager (REST API Reference)](/previous-versions/azure/reference/hh758255(v=azure.100))
[Azure Traffic Manager Cmdlets][1]
-[1]: https://docs.microsoft.com/powershell/module/az.trafficmanager
+[1]: /powershell/module/az.trafficmanager
\ No newline at end of file
virtual-desktop https://docs.microsoft.com/en-us/azure/virtual-desktop/app-attach-azure-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/app-attach-azure-portal.md
@@ -48,19 +48,10 @@ reg add HKCU\Software\Microsoft\Windows\CurrentVersion\ContentDeliveryManager /v
reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\ContentDeliveryManager\Debug /v ContentDeliveryAllowedOverride /t REG_DWORD /d 0x2 /f
-rem Disable Windows Update:
-
-sc config wuauserv start=disabled
-```
-
-After you've disabled automatic updates, you must enable Hyper-V because you'll be using the `Mount-VHD` command to stage and and Dismount-VHD to destage.
-
-```powershell
-Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
``` >[!NOTE]
->This change will require that you restart the virtual machine.
+>We recommend that you restart the virtual machine after enabling Hyper-V.
## Configure the MSIX app attach management interface
virtual-desktop https://docs.microsoft.com/en-us/azure/virtual-desktop/app-attach https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/app-attach.md
@@ -34,6 +34,17 @@ If your app uses a certificate that isn't public-trusted or was self-signed, her
7. When the select certificate store window appears, select **Trusted people**, then select **OK**. 8. Select **Next** and **Finish**.
+## Enable Microsoft Hyper-V
+
+Microsoft Hyper-V must be enabled because the `Mount-VHD` command is needed to stage and `Dismount-VHD` is needed to destage.
+
+```powershell
+Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
+```
+
+>[!NOTE]
+>This change will require that you restart the virtual machine.
+ ## Prepare PowerShell scripts for MSIX app attach MSIX app attach has four distinct phases that must be performed in the following order:
virtual-desktop https://docs.microsoft.com/en-us/azure/virtual-desktop/linux-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/linux-overview.md
@@ -17,6 +17,7 @@ The following partners have approved Windows Virtual Desktop clients for Linux d
|Partner|Partner documentation|Partner support| |:------|:--------------------|:--------------|
+|![Dell logo](./media/partners/dell.png)|[Dell client documentation](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/thin-clients/dell-thinos-9-for-microsoft-wvd.pdf)|[Dell support](https://www.dell.com/support)|
|![IGEL logo](./media/partners/igel.png)|[IGEL client documentation](https://www.igel.com/igel-solution-family/windows-virtual-desktop/)|[IGEL support](https://www.igel.com/support/)| |![NComputing logo](./media/partners/ncomputing.png)|[NComputing client documentation](https://www.ncomputing.com/microsoft)|[NComputing support](https://www.ncomputing.com/support/support-options)| |![Stratodesk logo](./media/partners/stratodesk.png)|[Stratodesk client documentation](https://www.stratodesk.com/kb/Microsoft_Windows_Virtual_Desktop_(WVD))|[Stratodesk support](https://www.stratodesk.com/support/)|
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/azure-disk-enc-windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/azure-disk-enc-windows.md
@@ -144,7 +144,7 @@ Using `AADClientCertificate`:
| (1.1 schema) AADClientID | xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx | guid | | (1.1 schema) AADClientSecret | password | string | | (1.1 schema) AADClientCertificate | thumbprint | string |
-| EncryptionOperation | EnableEncryption, EnableEncryptionFormatAll | string |
+| EncryptionOperation | EnableEncryption | string |
| (optional - default RSA-OAEP ) KeyEncryptionAlgorithm | 'RSA-OAEP', 'RSA-OAEP-256', 'RSA1_5' | string | | KeyVaultURL | url | string | | KeyVaultResourceId | url | string |
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/dsc-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/dsc-template.md
@@ -39,7 +39,7 @@ For more information, see
{ "type": "Microsoft.Compute/virtualMachines/extensions", "name": "Microsoft.Powershell.DSC",
- "apiVersion": "2018-06-30",
+ "apiVersion": "2018-06-01",
"location": "[parameters('location')]", "dependsOn": [ "[concat('Microsoft.Compute/virtualMachines/', parameters('VMName'))]"
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/windows/windows-desktop-multitenant-hosting-deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/windows-desktop-multitenant-hosting-deployment.md
@@ -6,7 +6,7 @@ ms.service: virtual-machines-windows
ms.topic: how-to ms.workload: infrastructure-services ms.date: 1/24/2018
-ms.author: xujing
+ms.author: mimckitt
--- # How to deploy Windows 10 on Azure with Multitenant Hosting Rights
virtual-network https://docs.microsoft.com/en-us/azure/virtual-network/routing-preference-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/routing-preference-overview.md
@@ -70,7 +70,7 @@ The price difference between both options is reflected in the internet egress da
## Limitations
-* Routing preference is only compatible with standard SKU of public IP address. Basic SKU of public IP address is not supported.
+* Routing preference is only compatible with zone-redundant standard SKU of public IP address. Basic SKU of public IP address is not supported.
* Routing preference currently supports only IPv4 public IP addresses. IPv6 public IP addresses are not supported. * Virtual machines with multiple NICs can have only one type of routing preference.
virtual-network https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/virtual-networks-faq.md
@@ -387,6 +387,9 @@ When virtual network service endpoints are enabled, the source IP addresses of t
### Does the service endpoint route always take precedence? Service endpoints add a system route which takes precedence over BGP routes and provides optimum routing for the service endpoint traffic. Service endpoints always take service traffic directly from your virtual network to the service on the Microsoft Azure backbone network. For more information about how Azure selects a route, see [Azure Virtual network traffic routing](virtual-networks-udr-overview.md).+
+### Do service endpoints work with ICMP?
+No. ICMP traffic sourced from a subnet with service endpoints enabled will not take the service tunnel path to the desired endpoint, because service endpoints handle only TCP traffic. This means that if you want to test latency or connectivity to an endpoint via service endpoints, tools like ping and tracert will not show the true path that the resources within the subnet will take.
### How does NSG on a subnet work with service endpoints? To reach the Azure service, NSGs need to allow outbound connectivity. If your NSGs are opened to all Internet outbound traffic, then the service endpoint traffic should work. You can also limit the outbound traffic to service IPs only using the Service tags.