Updates from: 04/19/2021 03:04:57
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Active Directory Claims Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-claims-mapping.md
> [!NOTE]
> This feature replaces and supersedes the [claims customization](active-directory-saml-claims-customization.md) offered through the portal today. On the same application, if you customize claims using the portal in addition to the Graph/PowerShell method detailed in this document, tokens issued for that application will ignore the configuration in the portal. Configurations made through the methods detailed in this document will not be reflected in the portal.
+> [!NOTE]
+> This capability currently is in public preview. Be prepared to revert or remove any changes. The feature is available in any Azure Active Directory (Azure AD) subscription during public preview. However, when the feature becomes generally available, some aspects of the feature might require an Azure AD premium subscription. This feature supports configuring claim mapping policies for WS-Fed, SAML, OAuth, and OpenID Connect protocols.
+ This feature is used by tenant admins to customize the claims emitted in tokens for a specific application in their tenant. You can use claims-mapping policies to:
+
+- Select which claims are included in tokens.
+- Create claim types that do not already exist.
+- Choose or change the source of data emitted in specific claims.
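Before customizing anything, it can help to see which claims-mapping policies already exist in the tenant. A minimal sketch, assuming the Azure AD PowerShell (preview) module and an authenticated session:

```powershell
# List any claims-mapping policies already defined in the tenant.
Get-AzureADPolicy | Where-Object { $_.Type -eq "ClaimsMappingPolicy" }
```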
-> [!NOTE]
-> This capability currently is in public preview. Be prepared to revert or remove any changes. The feature is available in any Azure Active Directory (Azure AD) subscription during public preview. However, when the feature becomes generally available, some aspects of the feature might require an Azure AD premium subscription. This feature supports configuring claim mapping policies for WS-Fed, SAML, OAuth, and OpenID Connect protocols.
-
-## Claims mapping policy type
-
-In Azure AD, a **Policy** object represents a set of rules enforced on individual applications or on all applications in an organization. Each type of policy has a unique structure, with a set of properties that are then applied to objects to which they are assigned.
-
-A claims mapping policy is a type of **Policy** object that modifies the claims emitted in tokens issued for specific applications.
-
-## Claim sets
-
-There are certain sets of claims that define how and when they're used in tokens.
-
-| Claim set | Description |
-|||
-| Core claim set | Are present in every token regardless of the policy. These claims are also considered restricted, and can't be modified. |
-| Basic claim set | Includes the claims that are emitted by default for tokens (in addition to the core claim set). You can omit or modify basic claims by using the claims mapping policies. |
-| Restricted claim set | Can't be modified using policy. The data source cannot be changed, and no transformation is applied when generating these claims. |
-
-### Table 1: JSON Web Token (JWT) restricted claim set
-
-| Claim type (name) |
-| -- |
-| _claim_names |
-| _claim_sources |
-| access_token |
-| account_type |
-| acr |
-| actor |
-| actortoken |
-| aio |
-| altsecid |
-| amr |
-| app_chain |
-| app_displayname |
-| app_res |
-| appctx |
-| appctxsender |
-| appid |
-| appidacr |
-| assertion |
-| at_hash |
-| aud |
-| auth_data |
-| auth_time |
-| authorization_code |
-| azp |
-| azpacr |
-| c_hash |
-| ca_enf |
-| cc |
-| cert_token_use |
-| client_id |
-| cloud_graph_host_name |
-| cloud_instance_name |
-| cnf |
-| code |
-| controls |
-| credential_keys |
-| csr |
-| csr_type |
-| deviceid |
-| dns_names |
-| domain_dns_name |
-| domain_netbios_name |
-| e_exp |
-| email |
-| endpoint |
-| enfpolids |
-| exp |
-| expires_on |
-| grant_type |
-| graph |
-| group_sids |
-| groups |
-| hasgroups |
-| hash_alg |
-| home_oid |
-| `http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant` |
-| `http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod` |
-| `http://schemas.microsoft.com/ws/2008/06/identity/claims/expiration` |
-| `http://schemas.microsoft.com/ws/2008/06/identity/claims/expired` |
-| `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress` |
-| `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name` |
-| `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier` |
-| iat |
-| identityprovider |
-| idp |
-| in_corp |
-| instance |
-| ipaddr |
-| isbrowserhostedapp |
-| iss |
-| jwk |
-| key_id |
-| key_type |
-| mam_compliance_url |
-| mam_enrollment_url |
-| mam_terms_of_use_url |
-| mdm_compliance_url |
-| mdm_enrollment_url |
-| mdm_terms_of_use_url |
-| nameid |
-| nbf |
-| netbios_name |
-| nonce |
-| oid |
-| on_prem_id |
-| onprem_sam_account_name |
-| onprem_sid |
-| openid2_id |
-| password |
-| polids |
-| pop_jwk |
-| preferred_username |
-| previous_refresh_token |
-| primary_sid |
-| puid |
-| pwd_exp |
-| pwd_url |
-| redirect_uri |
-| refresh_token |
-| refreshtoken |
-| request_nonce |
-| resource |
-| role |
-| roles |
-| scope |
-| scp |
-| sid |
-| signature |
-| signin_state |
-| src1 |
-| src2 |
-| sub |
-| tbid |
-| tenant_display_name |
-| tenant_region_scope |
-| thumbnail_photo |
-| tid |
-| tokenAutologonEnabled |
-| trustedfordelegation |
-| unique_name |
-| upn |
-| user_setting_sync_url |
-| username |
-| uti |
-| ver |
-| verified_primary_email |
-| verified_secondary_email |
-| wids |
-| win_ver |
-
-### Table 2: SAML restricted claim set
-
-| Claim type (URI) |
-| -- |
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/expiration`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/expired`|
-|`http://schemas.microsoft.com/identity/claims/accesstoken`|
-|`http://schemas.microsoft.com/identity/claims/openid2_id`|
-|`http://schemas.microsoft.com/identity/claims/identityprovider`|
-|`http://schemas.microsoft.com/identity/claims/objectidentifier`|
-|`http://schemas.microsoft.com/identity/claims/puid`|
-|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier`|
-|`http://schemas.microsoft.com/identity/claims/tenantid`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod`|
-|`http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/groups`|
-|`http://schemas.microsoft.com/claims/groups.link`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/role`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/wids`|
-|`http://schemas.microsoft.com/2014/09/devicecontext/claims/iscompliant`|
-|`http://schemas.microsoft.com/2014/02/devicecontext/claims/isknown`|
-|`http://schemas.microsoft.com/2012/01/devicecontext/claims/ismanaged`|
-|`http://schemas.microsoft.com/2014/03/psso`|
-|`http://schemas.microsoft.com/claims/authnmethodsreferences`|
-|`http://schemas.xmlsoap.org/ws/2009/09/identity/claims/actor`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/samlissuername`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/confirmationkey`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/primarygroupsid`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid`|
-|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/authorizationdecision`|
-|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/authentication`|
-|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/sid`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/denyonlyprimarygroupsid`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/denyonlyprimarysid`|
-|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/denyonlysid`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/denyonlywindowsdevicegroup`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsdeviceclaim`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsdevicegroup`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsfqbnversion`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/windowssubauthority`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsuserclaim`|
-|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/x500distinguishedname`|
-|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid`|
-|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/spn`|
-|`http://schemas.microsoft.com/ws/2008/06/identity/claims/ispersistent`|
-|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/privatepersonalidentifier`|
-|`http://schemas.microsoft.com/identity/claims/scope`|
-
-## Claims mapping policy properties
-
-To control what claims are emitted and where the data comes from, use the properties of a claims mapping policy. If a policy is not set, the system issues tokens that include the core claim set, the basic claim set, and any [optional claims](active-directory-optional-claims.md) that the application has chosen to receive.
-
-> [!NOTE]
-> Claims in the core claim set are present in every token, regardless of what this property is set to.
-
-### Include basic claim set
-
-**String:** IncludeBasicClaimSet
-
-**Data type:** Boolean (True or False)
-
-**Summary:** This property determines whether the basic claim set is included in tokens affected by this policy.
-
-- If set to True, all claims in the basic claim set are emitted in tokens affected by the policy.
-- If set to False, claims in the basic claim set are not in the tokens, unless they are individually added in the claims schema property of the same policy.
-
-
-
-### Claims schema
-
-**String:** ClaimsSchema
-
-**Data type:** JSON blob with one or more claim schema entries
-
-**Summary:** This property defines which claims are present in the tokens affected by the policy, in addition to the basic claim set and the core claim set.
-For each claim schema entry defined in this property, certain information is required. Specify where the data is coming from (**Value**, **Source/ID pair**, or **Source/ExtensionID pair**), and which claim the data is emitted as (**Claim Type**).
-
-### Claim schema entry elements
-
-**Value:** The Value element defines a static value as the data to be emitted in the claim.
-
-**Source/ID pair:** The Source and ID elements define where the data in the claim is sourced from.
-
-**Source/ExtensionID pair:** The Source and ExtensionID elements define the directory schema extension attribute where the data in the claim is sourced from. For more information, see [Using directory schema extension attributes in claims](active-directory-schema-extensions.md).
-
-Set the Source element to one of the following values:
--- "user": The data in the claim is a property on the User object.-- "application": The data in the claim is a property on the application (client) service principal.-- "resource": The data in the claim is a property on the resource service principal.-- "audience": The data in the claim is a property on the service principal that is the audience of the token (either the client or resource service principal).-- "company": The data in the claim is a property on the resource tenant's Company object.-- "transformation": The data in the claim is from claims transformation (see the "Claims transformation" section later in this article).-
-If the source is transformation, the **TransformationID** element must be included in this claim definition as well.
-
-The ID element identifies which property on the source provides the value for the claim. The following table lists the values of ID valid for each value of Source.
-
-#### Table 3: Valid ID values per source
-
-| Source | ID | Description |
-|--|--|--|
-| User | surname | Family Name |
-| User | givenname | Given Name |
-| User | displayname | Display Name |
-| User | objectid | ObjectID |
-| User | mail | Email Address |
-| User | userprincipalname | User Principal Name |
-| User | department|Department|
-| User | onpremisessamaccountname | On-premises SAM Account Name |
-| User | netbiosname| NetBios Name |
-| User | dnsdomainname | DNS Domain Name |
-| User | onpremisesecurityidentifier | On-premises Security Identifier |
-| User | companyname| Organization Name |
-| User | streetaddress | Street Address |
-| User | postalcode | Postal Code |
-| User | preferredlanguage | Preferred Language |
-| User | onpremisesuserprincipalname | On-premises UPN |
-| User | mailnickname | Mail Nickname |
-| User | extensionattribute1 | Extension Attribute 1 |
-| User | extensionattribute2 | Extension Attribute 2 |
-| User | extensionattribute3 | Extension Attribute 3 |
-| User | extensionattribute4 | Extension Attribute 4 |
-| User | extensionattribute5 | Extension Attribute 5 |
-| User | extensionattribute6 | Extension Attribute 6 |
-| User | extensionattribute7 | Extension Attribute 7 |
-| User | extensionattribute8 | Extension Attribute 8 |
-| User | extensionattribute9 | Extension Attribute 9 |
-| User | extensionattribute10 | Extension Attribute 10 |
-| User | extensionattribute11 | Extension Attribute 11 |
-| User | extensionattribute12 | Extension Attribute 12 |
-| User | extensionattribute13 | Extension Attribute 13 |
-| User | extensionattribute14 | Extension Attribute 14 |
-| User | extensionattribute15 | Extension Attribute 15 |
-| User | othermail | Other Mail |
-| User | country | Country/Region |
-| User | city | City |
-| User | state | State |
-| User | jobtitle | Job Title |
-| User | employeeid | Employee ID |
-| User | facsimiletelephonenumber | Facsimile Telephone Number |
-| User | assignedroles | list of App roles assigned to user|
-| application, resource, audience | displayname | Display Name |
-| application, resource, audience | objectid | ObjectID |
-| application, resource, audience | tags | Service Principal Tag |
-| Company | tenantcountry | Tenant's country/region |
-
-**TransformationID:** The TransformationID element must be provided only if the Source element is set to "transformation".
-
-- This element must match the ID element of the transformation entry in the **ClaimsTransformation** property that defines how the data for this claim is generated.
-
-**Claim Type:** The **JwtClaimType** and **SamlClaimType** elements define which claim this claim schema entry refers to.
-
-- The JwtClaimType must contain the name of the claim to be emitted in JWTs.
-- The SamlClaimType must contain the URI of the claim to be emitted in SAML tokens.
-
-* **onPremisesUserPrincipalName attribute:** When an Alternate ID is used, the on-premises attribute userPrincipalName is synchronized to the Azure AD attribute onPremisesUserPrincipalName. This attribute is only available when Alternate ID is configured; it is also available through the Microsoft Graph beta endpoint: `https://graph.microsoft.com/beta/me/`.
-
-> [!NOTE]
-> Names and URIs of claims in the restricted claim set cannot be used for the claim type elements. For more information, see the "Exceptions and restrictions" section later in this article.
-
-### Claims transformation
-
-**String:** ClaimsTransformation
-
-**Data type:** JSON blob, with one or more transformation entries
-
-**Summary:** Use this property to apply common transformations to source data, to generate the output data for claims specified in the Claims Schema.
-
-**ID:** Use the ID element to reference this transformation entry in the TransformationID Claims Schema entry. This value must be unique for each transformation entry within this policy.
-
-**TransformationMethod:** The TransformationMethod element identifies which operation is performed to generate the data for the claim.
-
-Based on the method chosen, a set of inputs and outputs is expected. Define the inputs and outputs by using the **InputClaims**, **InputParameters** and **OutputClaims** elements.
-
-#### Table 4: Transformation methods and expected inputs and outputs
-
-|TransformationMethod|Expected input|Expected output|Description|
-|--|--|--|--|
-|Join|string1, string2, separator|outputClaim|Joins input strings by using a separator in between. For example: string1:"foo@bar.com" , string2:"sandbox" , separator:"." results in outputClaim:"foo@bar.com.sandbox"|
-|ExtractMailPrefix|Email or UPN|extracted string|Extracts the local part of an email address. For example: mail:"foo@bar.com" results in outputClaim:"foo". If no \@ sign is present, the original input string is returned as is. The input can come from ExtensionAttributes 1-15 or any other schema extension that stores a UPN or email address value for the user, for example, johndoe@contoso.com.|
-
-**InputClaims:** Use an InputClaims element to pass the data from a claim schema entry to a transformation. It has two attributes: **ClaimTypeReferenceId** and **TransformationClaimType**.
-
-- **ClaimTypeReferenceId** is joined with the ID element of the claim schema entry to find the appropriate input claim.
-- **TransformationClaimType** is used to give a unique name to this input. This name must match one of the expected inputs for the transformation method.
-
-**InputParameters:** Use an InputParameters element to pass a constant value to a transformation. It has two attributes: **Value** and **ID**.
-
-- **Value** is the actual constant value to be passed.
-- **ID** is used to give a unique name to the input. The name must match one of the expected inputs for the transformation method.
-
-**OutputClaims:** Use an OutputClaims element to hold the data generated by a transformation, and tie it to a claim schema entry. It has two attributes: **ClaimTypeReferenceId** and **TransformationClaimType**.
-
-- **ClaimTypeReferenceId** is joined with the ID of the claim schema entry to find the appropriate output claim.
-- **TransformationClaimType** is used to give a unique name to the output. The name must match one of the expected outputs for the transformation method.
-
-### Exceptions and restrictions
-
-**SAML NameID and UPN:** The attributes from which you source the NameID and UPN values, and the claims transformations that are permitted, are limited. See table 5 and table 6 to see the permitted values.
-
-#### Table 5: Attributes allowed as a data source for SAML NameID
-
-|Source|ID|Description|
-|--|--|--|
-| User | mail|Email Address|
-| User | userprincipalname|User Principal Name|
-| User | onpremisessamaccountname|On Premises Sam Account Name|
-| User | employeeid|Employee ID|
-| User | extensionattribute1 | Extension Attribute 1 |
-| User | extensionattribute2 | Extension Attribute 2 |
-| User | extensionattribute3 | Extension Attribute 3 |
-| User | extensionattribute4 | Extension Attribute 4 |
-| User | extensionattribute5 | Extension Attribute 5 |
-| User | extensionattribute6 | Extension Attribute 6 |
-| User | extensionattribute7 | Extension Attribute 7 |
-| User | extensionattribute8 | Extension Attribute 8 |
-| User | extensionattribute9 | Extension Attribute 9 |
-| User | extensionattribute10 | Extension Attribute 10 |
-| User | extensionattribute11 | Extension Attribute 11 |
-| User | extensionattribute12 | Extension Attribute 12 |
-| User | extensionattribute13 | Extension Attribute 13 |
-| User | extensionattribute14 | Extension Attribute 14 |
-| User | extensionattribute15 | Extension Attribute 15 |
-
-#### Table 6: Transformation methods allowed for SAML NameID
-
-| TransformationMethod | Restrictions |
-| -- | -- |
-| ExtractMailPrefix | None |
-| Join | The suffix being joined must be a verified domain of the resource tenant. |
-
-### Cross-tenant scenarios
-
-Claims mapping policies do not apply to guest users. If a guest user tries to access an application with a claims mapping policy assigned to its service principal, the default token is issued (the policy has no effect).
-
-## Claims mapping policy assignment
-
-Claims mapping policies can only be assigned to service principal objects.
-
-### Example claims mapping policies
-
-In Azure AD, many scenarios are possible when you can customize claims emitted in tokens for specific service principals. In this section, we walk through a few common scenarios that can help you grasp how to use the claims mapping policy type.
+In this article, we walk through a few common scenarios that can help you grasp how to use the [claims mapping policy type](reference-claims-mapping-policy-type.md).
When creating a claims mapping policy, you can also emit a claim from a directory schema extension attribute in tokens. Use *ExtensionID* for the extension attribute instead of *ID* in the `ClaimsSchema` element. For more info on extension attributes, see [Using directory schema extension attributes](active-directory-schema-extensions.md).
-#### Prerequisites
+## Prerequisites
-In the following examples, you create, update, link, and delete policies for service principals. If you are new to Azure AD, we recommend that you [learn about how to get an Azure AD tenant](quickstart-create-new-tenant.md) before you proceed with these examples.
+In the following examples, you create, update, link, and delete policies for service principals. Claims mapping policies can only be assigned to service principal objects. If you are new to Azure AD, we recommend that you [learn about how to get an Azure AD tenant](quickstart-create-new-tenant.md) before you proceed with these examples.
To get started, do the following steps:
Get-AzureADPolicy
```
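If you have not set up a session yet, the one-time setup might look like the following sketch; it assumes the AzureADPreview module, which the full article walks through installing:

```powershell
# One-time setup: install the preview module, then sign in to the tenant.
Install-Module -Name AzureADPreview
Connect-AzureAD
```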
-#### Example: Create and assign a policy to omit the basic claims from tokens issued to a service principal
+## Omit the basic claims from tokens
-In this example, you create a policy that removes the basic claim set from tokens issued to linked service principals.
+In this example, you create a policy that removes the [basic claim set](reference-claims-mapping-policy-type.md#claim-sets) from tokens issued to linked service principals.
1. Create a claims mapping policy. This policy, linked to specific service principals, removes the basic claim set from tokens.
1. To create the policy, run this command:
In this example, you create a policy that removes the basic claim set from token
Add-AzureADServicePrincipalPolicy -Id <ObjectId of the ServicePrincipal> -RefObjectId <ObjectId of the Policy>
```
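Pieced together, the end-to-end flow for this example might look like the following sketch; the display name and the service principal search string are illustrative placeholders:

```powershell
# Create a policy that omits the basic claim set, then link it to the
# target service principal.
$policy = New-AzureADPolicy -Definition @('{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"false"}}') -DisplayName "OmitBasicClaims" -Type "ClaimsMappingPolicy"
$sp = Get-AzureADServicePrincipal -SearchString "My Application"
Add-AzureADServicePrincipalPolicy -Id $sp.ObjectId -RefObjectId $policy.Id
```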
-#### Example: Create and assign a policy to include the EmployeeID and TenantCountry as claims in tokens issued to a service principal
+## Include the EmployeeID and TenantCountry as claims in tokens
In this example, you create a policy that adds the EmployeeID and TenantCountry to tokens issued to linked service principals. The EmployeeID is emitted as the name claim type in both SAML tokens and JWTs. The TenantCountry is emitted as the country/region claim type in both SAML tokens and JWTs. In this example, we continue to include the basic claims set in the tokens.
In this example, you create a policy that adds the EmployeeID and TenantCountry
Add-AzureADServicePrincipalPolicy -Id <ObjectId of the ServicePrincipal> -RefObjectId <ObjectId of the Policy>
```
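For reference, the policy-creation step in this scenario might look like the following sketch (the display name is illustrative; the claim sources and types follow the description above):

```powershell
# Emit employeeid as the name claim type and tenantcountry as the
# country claim type in both SAML tokens and JWTs; keep the basic set.
New-AzureADPolicy -Definition @('{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"true","ClaimsSchema":[{"Source":"user","ID":"employeeid","SamlClaimType":"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name","JwtClaimType":"name"},{"Source":"company","ID":"tenantcountry","SamlClaimType":"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/country","JwtClaimType":"country"}]}}') -DisplayName "ExtraClaimsExample" -Type "ClaimsMappingPolicy"
```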
-#### Example: Create and assign a policy that uses a claims transformation in tokens issued to a service principal
+## Use a claims transformation in tokens
In this example, you create a policy that emits a custom claim "JoinedData" to JWTs issued to linked service principals. This claim contains a value created by joining the data stored in the extensionattribute1 attribute on the user object with ".sandbox". In this example, we exclude the basic claims set in the tokens.
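A sketch of the policy-creation step this scenario describes; the IDs ("DataJoin", "JoinTheData") and the display name are arbitrary labels:

```powershell
# Join extensionattribute1 with ".sandbox" and emit the result as a
# custom "JoinedData" claim in JWTs; the basic claim set is excluded.
New-AzureADPolicy -Definition @('{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"false","ClaimsSchema":[{"Source":"user","ID":"extensionattribute1"},{"Source":"transformation","ID":"DataJoin","TransformationId":"JoinTheData","JwtClaimType":"JoinedData"}],"ClaimsTransformations":[{"ID":"JoinTheData","TransformationMethod":"Join","InputClaims":[{"ClaimTypeReferenceId":"extensionattribute1","TransformationClaimType":"string1"}],"InputParameters":[{"ID":"string2","Value":"sandbox"},{"ID":"separator","Value":"."}],"OutputClaims":[{"ClaimTypeReferenceId":"DataJoin","TransformationClaimType":"outputClaim"}]}]}}') -DisplayName "TransformClaimsExample" -Type "ClaimsMappingPolicy"
```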
This does require the requested token audience to use a verified domain name of
If you're not using a verified domain, Azure AD will return an `AADSTS501461` error code with message *"AcceptMappedClaims is only supported for a token audience matching the application GUID or an audience within the tenant's verified domains. Either change the resource identifier, or use an application-specific signing key."*
-## See also
+## Next steps
+- Read the [claims mapping policy type](reference-claims-mapping-policy-type.md) reference article to learn more.
- To learn how to customize claims issued in the SAML token through the Azure portal, see [How to: Customize claims issued in the SAML token for enterprise applications](active-directory-saml-claims-customization.md)
- To learn more about extension attributes, see [Using directory schema extension attributes in claims](active-directory-schema-extensions.md).
active-directory Active Directory Saml Claims Customization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-saml-claims-customization.md
Select the desired source for the `NameIdentifier` (or NameID) claim. You can se
| Directory extensions | Directory extensions [synced from on-premises Active Directory using Azure AD Connect Sync](../hybrid/how-to-connect-sync-feature-directory-extensions.md) |
| Extension Attributes 1-15 | On-premises extension attributes used to extend the Azure AD schema |
-For more info, see [Table 3: Valid ID values per source](active-directory-claims-mapping.md#table-3-valid-id-values-per-source).
+For more info, see [Table 3: Valid ID values per source](reference-claims-mapping-policy-type.md#table-3-valid-id-values-per-source).
You can also assign any constant (static) value to any claims that you define in Azure AD. To assign a constant value, follow these steps:
active-directory Reference Claims Mapping Policy Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-claims-mapping-policy-type.md
+
+ Title: Claims mapping policy
+
+description: Learn about the claims mapping policy type, which is used to modify the claims emitted in tokens issued for specific applications.
+ Last updated : 04/16/2021
+# Claims mapping policy type
+
+In Azure AD, a **Policy** object represents a set of rules enforced on individual applications or on all applications in an organization. Each type of policy has a unique structure, with a set of properties that are then applied to objects to which they are assigned.
+
+A claims mapping policy is a type of **Policy** object that [modifies the claims emitted in tokens](active-directory-claims-mapping.md) issued for specific applications.
+
+## Claim sets
+
+There are certain sets of claims that define how and when they're used in tokens.
+
+| Claim set | Description |
+|||
+| Core claim set | Are present in every token regardless of the policy. These claims are also considered restricted, and can't be modified. |
+| Basic claim set | Includes the claims that are emitted by default for tokens (in addition to the core claim set). You can omit or modify basic claims by using the claims mapping policies. |
+| Restricted claim set | Can't be modified using policy. The data source cannot be changed, and no transformation is applied when generating these claims. |
+
+### Table 1: JSON Web Token (JWT) restricted claim set
+
+| Claim type (name) |
+| -- |
+| _claim_names |
+| _claim_sources |
+| access_token |
+| account_type |
+| acr |
+| actor |
+| actortoken |
+| aio |
+| altsecid |
+| amr |
+| app_chain |
+| app_displayname |
+| app_res |
+| appctx |
+| appctxsender |
+| appid |
+| appidacr |
+| assertion |
+| at_hash |
+| aud |
+| auth_data |
+| auth_time |
+| authorization_code |
+| azp |
+| azpacr |
+| c_hash |
+| ca_enf |
+| cc |
+| cert_token_use |
+| client_id |
+| cloud_graph_host_name |
+| cloud_instance_name |
+| cnf |
+| code |
+| controls |
+| credential_keys |
+| csr |
+| csr_type |
+| deviceid |
+| dns_names |
+| domain_dns_name |
+| domain_netbios_name |
+| e_exp |
+| email |
+| endpoint |
+| enfpolids |
+| exp |
+| expires_on |
+| grant_type |
+| graph |
+| group_sids |
+| groups |
+| hasgroups |
+| hash_alg |
+| home_oid |
+| `http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant` |
+| `http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod` |
+| `http://schemas.microsoft.com/ws/2008/06/identity/claims/expiration` |
+| `http://schemas.microsoft.com/ws/2008/06/identity/claims/expired` |
+| `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress` |
+| `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name` |
+| `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier` |
+| iat |
+| identityprovider |
+| idp |
+| in_corp |
+| instance |
+| ipaddr |
+| isbrowserhostedapp |
+| iss |
+| jwk |
+| key_id |
+| key_type |
+| mam_compliance_url |
+| mam_enrollment_url |
+| mam_terms_of_use_url |
+| mdm_compliance_url |
+| mdm_enrollment_url |
+| mdm_terms_of_use_url |
+| nameid |
+| nbf |
+| netbios_name |
+| nonce |
+| oid |
+| on_prem_id |
+| onprem_sam_account_name |
+| onprem_sid |
+| openid2_id |
+| password |
+| polids |
+| pop_jwk |
+| preferred_username |
+| previous_refresh_token |
+| primary_sid |
+| puid |
+| pwd_exp |
+| pwd_url |
+| redirect_uri |
+| refresh_token |
+| refreshtoken |
+| request_nonce |
+| resource |
+| role |
+| roles |
+| scope |
+| scp |
+| sid |
+| signature |
+| signin_state |
+| src1 |
+| src2 |
+| sub |
+| tbid |
+| tenant_display_name |
+| tenant_region_scope |
+| thumbnail_photo |
+| tid |
+| tokenAutologonEnabled |
+| trustedfordelegation |
+| unique_name |
+| upn |
+| user_setting_sync_url |
+| username |
+| uti |
+| ver |
+| verified_primary_email |
+| verified_secondary_email |
+| wids |
+| win_ver |
+
+### Table 2: SAML restricted claim set
+
+| Claim type (URI) |
+| -- |
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/expiration`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/expired`|
+|`http://schemas.microsoft.com/identity/claims/accesstoken`|
+|`http://schemas.microsoft.com/identity/claims/openid2_id`|
+|`http://schemas.microsoft.com/identity/claims/identityprovider`|
+|`http://schemas.microsoft.com/identity/claims/objectidentifier`|
+|`http://schemas.microsoft.com/identity/claims/puid`|
+|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier`|
+|`http://schemas.microsoft.com/identity/claims/tenantid`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod`|
+|`http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/groups`|
+|`http://schemas.microsoft.com/claims/groups.link`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/role`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/wids`|
+|`http://schemas.microsoft.com/2014/09/devicecontext/claims/iscompliant`|
+|`http://schemas.microsoft.com/2014/02/devicecontext/claims/isknown`|
+|`http://schemas.microsoft.com/2012/01/devicecontext/claims/ismanaged`|
+|`http://schemas.microsoft.com/2014/03/psso`|
+|`http://schemas.microsoft.com/claims/authnmethodsreferences`|
+|`http://schemas.xmlsoap.org/ws/2009/09/identity/claims/actor`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/samlissuername`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/confirmationkey`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/primarygroupsid`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid`|
+|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/authorizationdecision`|
+|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/authentication`|
+|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/sid`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/denyonlyprimarygroupsid`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/denyonlyprimarysid`|
+|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/denyonlysid`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/denyonlywindowsdevicegroup`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsdeviceclaim`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsdevicegroup`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsfqbnversion`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/windowssubauthority`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsuserclaim`|
+|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/x500distinguishedname`|
+|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid`|
+|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/spn`|
+|`http://schemas.microsoft.com/ws/2008/06/identity/claims/ispersistent`|
+|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/privatepersonalidentifier`|
+|`http://schemas.microsoft.com/identity/claims/scope`|
+
+## Claims mapping policy properties
+
+To control what claims are emitted and where the data comes from, use the properties of a claims mapping policy. If a policy is not set, the system issues tokens that include the core claim set, the basic claim set, and any [optional claims](active-directory-optional-claims.md) that the application has chosen to receive.
+
+> [!NOTE]
+> Claims in the core claim set are present in every token, regardless of what this property is set to.
+
+### Include basic claim set
+
+**String:** IncludeBasicClaimSet
+
+**Data type:** Boolean (True or False)
+
+**Summary:** This property determines whether the basic claim set is included in tokens affected by this policy.
+
+- If set to True, all claims in the basic claim set are emitted in tokens affected by the policy.
+- If set to False, claims in the basic claim set are not in the tokens, unless they are individually added in the claims schema property of the same policy.
+++
+### Claims schema
+
+**String:** ClaimsSchema
+
+**Data type:** JSON blob with one or more claim schema entries
+
+**Summary:** This property defines which claims are present in the tokens affected by the policy, in addition to the basic claim set and the core claim set.
+For each claim schema entry defined in this property, certain information is required. Specify where the data is coming from (**Value**, **Source/ID pair**, or **Source/ExtensionID pair**), and which claim the data is emitted as (**Claim Type**).
+
+### Claim schema entry elements
+
+**Value:** The Value element defines a static value as the data to be emitted in the claim.
+
+**Source/ID pair:** The Source and ID elements define where the data in the claim is sourced from.
+
+**Source/ExtensionID pair:** The Source and ExtensionID elements define the directory schema extension attribute where the data in the claim is sourced from. For more information, see [Using directory schema extension attributes in claims](active-directory-schema-extensions.md).
+
+Set the Source element to one of the following values:
+
+- "user": The data in the claim is a property on the User object.
+- "application": The data in the claim is a property on the application (client) service principal.
+- "resource": The data in the claim is a property on the resource service principal.
+- "audience": The data in the claim is a property on the service principal that is the audience of the token (either the client or resource service principal).
+- "company": The data in the claim is a property on the resource tenant's Company object.
+- "transformation": The data in the claim is from claims transformation (see the "Claims transformation" section later in this article).
+
+If the source is transformation, the **TransformationID** element must be included in this claim definition as well.
+
+The ID element identifies which property on the source provides the value for the claim. The following table lists the values of ID valid for each value of Source.
+
+#### Table 3: Valid ID values per source
+
+| Source | ID | Description |
+|--|--|--|
+| User | surname | Family Name |
+| User | givenname | Given Name |
+| User | displayname | Display Name |
+| User | objectid | ObjectID |
+| User | mail | Email Address |
+| User | userprincipalname | User Principal Name |
+| User | department|Department|
+| User | onpremisessamaccountname | On-premises SAM Account Name |
+| User | netbiosname| NetBios Name |
+| User | dnsdomainname | DNS Domain Name |
+| User | onpremisesecurityidentifier | On-premises Security Identifier |
+| User | companyname| Organization Name |
+| User | streetaddress | Street Address |
+| User | postalcode | Postal Code |
+| User | preferredlanguage | Preferred Language |
+| User | onpremisesuserprincipalname | On-premises UPN |
+| User | mailnickname | Mail Nickname |
+| User | extensionattribute1 | Extension Attribute 1 |
+| User | extensionattribute2 | Extension Attribute 2 |
+| User | extensionattribute3 | Extension Attribute 3 |
+| User | extensionattribute4 | Extension Attribute 4 |
+| User | extensionattribute5 | Extension Attribute 5 |
+| User | extensionattribute6 | Extension Attribute 6 |
+| User | extensionattribute7 | Extension Attribute 7 |
+| User | extensionattribute8 | Extension Attribute 8 |
+| User | extensionattribute9 | Extension Attribute 9 |
+| User | extensionattribute10 | Extension Attribute 10 |
+| User | extensionattribute11 | Extension Attribute 11 |
+| User | extensionattribute12 | Extension Attribute 12 |
+| User | extensionattribute13 | Extension Attribute 13 |
+| User | extensionattribute14 | Extension Attribute 14 |
+| User | extensionattribute15 | Extension Attribute 15 |
+| User | othermail | Other Mail |
+| User | country | Country/Region |
+| User | city | City |
+| User | state | State |
+| User | jobtitle | Job Title |
+| User | employeeid | Employee ID |
+| User | facsimiletelephonenumber | Facsimile Telephone Number |
+| User | assignedroles | list of App roles assigned to user|
+| application, resource, audience | displayname | Display Name |
+| application, resource, audience | objectid | ObjectID |
+| application, resource, audience | tags | Service Principal Tag |
+| Company | tenantcountry | Tenant's country/region |
+
+**TransformationID:** The TransformationID element must be provided only if the Source element is set to "transformation".
+
+- This element must match the ID element of the transformation entry in the **ClaimsTransformation** property that defines how the data for this claim is generated.
+
+**Claim Type:** The **JwtClaimType** and **SamlClaimType** elements define which claim this claim schema entry refers to.
+
+- The JwtClaimType must contain the name of the claim to be emitted in JWTs.
+- The SamlClaimType must contain the URI of the claim to be emitted in SAML tokens.
+
+* **onPremisesUserPrincipalName attribute:** When an Alternate ID is used, the on-premises attribute userPrincipalName is synchronized to the Azure AD attribute onPremisesUserPrincipalName. This attribute is only available when Alternate ID is configured; it is also available through the Microsoft Graph beta endpoint: `https://graph.microsoft.com/beta/me/`.
+
+> [!NOTE]
+> Names and URIs of claims in the restricted claim set cannot be used for the claim type elements. For more information, see the "Exceptions and restrictions" section later in this article.
+
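For illustration, here is a minimal sketch of a ClaimsSchema entry as it might be supplied through PowerShell; the claim name, URI, and display name are assumed examples rather than prescribed values:

```powershell
# A ClaimsSchema entry sourcing the user's mail property and emitting it
# under an illustrative JWT name and SAML URI (not restricted values).
$definition = '{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"true","ClaimsSchema":[{"Source":"user","ID":"mail","JwtClaimType":"contactemail","SamlClaimType":"http://example.com/claims/contactemail"}]}}'
New-AzureADPolicy -Definition @($definition) -DisplayName "ClaimsSchemaSketch" -Type "ClaimsMappingPolicy"
```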
+### Claims transformation
+
+**String:** ClaimsTransformation
+
+**Data type:** JSON blob, with one or more transformation entries
+
+**Summary:** Use this property to apply common transformations to source data, to generate the output data for claims specified in the Claims Schema.
+
+**ID:** Use the ID element to reference this transformation entry in the TransformationID Claims Schema entry. This value must be unique for each transformation entry within this policy.
+
+**TransformationMethod:** The TransformationMethod element identifies which operation is performed to generate the data for the claim.
+
+Based on the method chosen, a set of inputs and outputs is expected. Define the inputs and outputs by using the **InputClaims**, **InputParameters** and **OutputClaims** elements.
+
+#### Table 4: Transformation methods and expected inputs and outputs
+
+|TransformationMethod|Expected input|Expected output|Description|
+|--|--|--|--|
+|Join|string1, string2, separator|outputClaim|Joins input strings by using a separator in between. For example: string1:"foo@bar.com" , string2:"sandbox" , separator:"." results in outputClaim:"foo@bar.com.sandbox"|
+|ExtractMailPrefix|Email or UPN|extracted string|Extracts the local part of an email address. For example: mail:"foo@bar.com" results in outputClaim:"foo". If no \@ sign is present, the original input string is returned as is. The input can come from ExtensionAttributes 1-15 or any other schema extension that stores a UPN or email address value for the user, for example, johndoe@contoso.com.|
+
+**InputClaims:** Use an InputClaims element to pass the data from a claim schema entry to a transformation. It has two attributes: **ClaimTypeReferenceId** and **TransformationClaimType**.
+
+- **ClaimTypeReferenceId** is joined with the ID element of the claim schema entry to find the appropriate input claim.
+- **TransformationClaimType** is used to give a unique name to this input. This name must match one of the expected inputs for the transformation method.
+
+**InputParameters:** Use an InputParameters element to pass a constant value to a transformation. It has two attributes: **Value** and **ID**.
+
+- **Value** is the actual constant value to be passed.
+- **ID** is used to give a unique name to the input. The name must match one of the expected inputs for the transformation method.
+
+**OutputClaims:** Use an OutputClaims element to hold the data generated by a transformation, and tie it to a claim schema entry. It has two attributes: **ClaimTypeReferenceId** and **TransformationClaimType**.
+
+- **ClaimTypeReferenceId** is joined with the ID of the claim schema entry to find the appropriate output claim.
+- **TransformationClaimType** is used to give a unique name to the output. The name must match one of the expected outputs for the transformation method.
+
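To see how these elements reference one another, consider this sketch of a complete definition; the IDs "DataJoin" and "JoinTheData" and the display name are arbitrary illustrative labels:

```powershell
# ClaimTypeReferenceId values point back at ClaimsSchema entry IDs: the
# input claim references "extensionattribute1" (the source entry) and
# the output claim references "DataJoin" (the transformation entry).
$definition = @'
{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"false",
 "ClaimsSchema":[
   {"Source":"user","ID":"extensionattribute1"},
   {"Source":"transformation","ID":"DataJoin","TransformationId":"JoinTheData","JwtClaimType":"JoinedData"}],
 "ClaimsTransformations":[
   {"ID":"JoinTheData","TransformationMethod":"Join",
    "InputClaims":[{"ClaimTypeReferenceId":"extensionattribute1","TransformationClaimType":"string1"}],
    "InputParameters":[{"ID":"string2","Value":"sandbox"},{"ID":"separator","Value":"."}],
    "OutputClaims":[{"ClaimTypeReferenceId":"DataJoin","TransformationClaimType":"outputClaim"}]}]}}
'@
New-AzureADPolicy -Definition @($definition) -DisplayName "JoinSketch" -Type "ClaimsMappingPolicy"
```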
+### Exceptions and restrictions
+
+**SAML NameID and UPN:** The attributes from which you source the NameID and UPN values, and the claims transformations that are permitted, are limited. See table 5 and table 6 to see the permitted values.
+
+#### Table 5: Attributes allowed as a data source for SAML NameID
+
+|Source|ID|Description|
+|--|--|--|
+| User | mail|Email Address|
+| User | userprincipalname|User Principal Name|
+| User | onpremisessamaccountname|On Premises Sam Account Name|
+| User | employeeid|Employee ID|
+| User | extensionattribute1 | Extension Attribute 1 |
+| User | extensionattribute2 | Extension Attribute 2 |
+| User | extensionattribute3 | Extension Attribute 3 |
+| User | extensionattribute4 | Extension Attribute 4 |
+| User | extensionattribute5 | Extension Attribute 5 |
+| User | extensionattribute6 | Extension Attribute 6 |
+| User | extensionattribute7 | Extension Attribute 7 |
+| User | extensionattribute8 | Extension Attribute 8 |
+| User | extensionattribute9 | Extension Attribute 9 |
+| User | extensionattribute10 | Extension Attribute 10 |
+| User | extensionattribute11 | Extension Attribute 11 |
+| User | extensionattribute12 | Extension Attribute 12 |
+| User | extensionattribute13 | Extension Attribute 13 |
+| User | extensionattribute14 | Extension Attribute 14 |
+| User | extensionattribute15 | Extension Attribute 15 |
+
+#### Table 6: Transformation methods allowed for SAML NameID
+
+| TransformationMethod | Restrictions |
+| -- | -- |
+| ExtractMailPrefix | None |
+| Join | The suffix being joined must be a verified domain of the resource tenant. |
+
+### Cross-tenant scenarios
+
+Claims mapping policies do not apply to guest users. If a guest user tries to access an application with a claims mapping policy assigned to its service principal, the default token is issued (the policy has no effect).
++
+## Next steps
+
+- To learn how to customize the claims emitted in tokens for a specific application in their tenant using PowerShell, see [How to: Customize claims emitted in tokens for a specific app in a tenant (Preview)](active-directory-claims-mapping.md)
+- To learn how to customize claims issued in the SAML token through the Azure portal, see [How to: Customize claims issued in the SAML token for enterprise applications](active-directory-saml-claims-customization.md)
+- To learn more about extension attributes, see [Using directory schema extension attributes in claims](active-directory-schema-extensions.md).
active-directory Application Proxy Ping Access Publishing Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-ping-access-publishing-guide.md
Example to include email address into the access_token that PingAccess will cons
### Use of claims mapping policy (optional)
-[Claims Mapping Policy (preview)](../develop/active-directory-claims-mapping.md#claims-mapping-policy-properties) for attributes which do not exist in AzureAD. Claims mapping allows you to migrate old on-prem apps to the cloud by adding additional custom claims that are backed by your ADFS or user objects
+[Claims Mapping Policy (preview)](../develop/reference-claims-mapping-policy-type.md#claims-mapping-policy-properties) for attributes which do not exist in AzureAD. Claims mapping allows you to migrate old on-prem apps to the cloud by adding additional custom claims that are backed by your ADFS or user objects
-To make your application use a custom claim and include additional fields, be sure you've also [created a custom claims mapping policy and assigned it to the application](../develop/active-directory-claims-mapping.md#claims-mapping-policy-assignment).
+To make your application use a custom claim and include additional fields, be sure you've also [created a custom claims mapping policy and assigned it to the application](../develop/active-directory-claims-mapping.md).
> [!NOTE]
> To use a custom claim, you must also have a custom policy defined and assigned to the application. This policy should include all required custom attributes.
>
-> You can do policy definition and assignment through PowerShell or Microsoft Graph. If you're doing them in PowerShell, you may need to first use `New-AzureADPolicy` and then assign it to the application with `Add-AzureADServicePrincipalPolicy`. For more information, see [Claims mapping policy assignment](../develop/active-directory-claims-mapping.md#claims-mapping-policy-assignment).
+> You can do policy definition and assignment through PowerShell or Microsoft Graph. If you're doing them in PowerShell, you may need to first use `New-AzureADPolicy` and then assign it to the application with `Add-AzureADServicePrincipalPolicy`. For more information, see [Claims mapping policy assignment](../develop/active-directory-claims-mapping.md).
Example:

```powershell
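# Sketch only: an assumed custom-claims policy for this scenario. The
# attribute, display name, and search string below are illustrative.
$policy = New-AzureADPolicy -Definition @('{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"true","ClaimsSchema":[{"Source":"user","ID":"employeeid","JwtClaimType":"employeeid"}]}}') -DisplayName "PingAccessCustomClaims" -Type "ClaimsMappingPolicy"
$sp = Get-AzureADServicePrincipal -SearchString "PingAccess app"
Add-AzureADServicePrincipalPolicy -Id $sp.ObjectId -RefObjectId $policy.Id
```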
active-directory Maverics Identity Orchestrator Saml Connector Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/maverics-identity-orchestrator-saml-connector-tutorial.md
This hybrid access tutorial demonstrates how to migrate an on-premises web appli
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * A Maverics Identity Orchestrator SAML Connector SSO-enabled subscription. To get the Maverics software, contact [Strata sales](mailto:sales@strata.io).
-* At least one application that uses header-based authentication. The examples work against an application called Sonar, which is hosted at https://app.sonarsystems.com, and an application called Connectulum, hosted at https://app.connectulum.com.
+* At least one application that uses header-based authentication. The examples work against an application called Connectulum, hosted at `https://app.connectulum.com`.
* A Linux machine to host the Maverics Orchestrator
  * OS: RHEL 7.7 or higher, CentOS 7+
  * Disk: >= 10 GB
tls:
keyFile: /etc/maverics/maverics.key
```
-To confirm that TLS is configured as expected, restart the Maverics service, and make a request to the status endpoint. From your browser, request https://sonar.maverics.com/status.
+To confirm that TLS is configured as expected, restart the Maverics service, and make a request to the status endpoint.
## Step 2: Proxy an application
appgateways:
upstream: https://app.sonarsystems.com
```
-To confirm that proxying is working as expected, restart the Maverics service, and make a request to the application through the Maverics proxy. From your browser, request https://sonar.maverics.com. You can optionally make a request to specific application resources, for example, `https://sonar.maverics.com/RESOURCE`, where `RESOURCE` is a valid application resource of the protected upstream app.
+To confirm that proxying is working as expected, restart the Maverics service, and make a request to the application through the Maverics proxy. You can optionally make a request to specific application resources.
## Step 3: Register an enterprise application in Azure AD
connectors:
You might have noticed that the code adds a `host` field to your App Gateway definitions. The `host` field enables the Maverics Orchestrator to distinguish which upstream host to proxy traffic to.
-To confirm that the newly added App Gateway is working as expected, make a request to https://connectulum.maverics.com.
+To confirm that the newly added App Gateway is working as expected, make a request to `https://connectulum.maverics.com`.
## Advanced scenarios
active-directory Policystat Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/policystat-tutorial.md
To configure Azure AD single sign-on with PolicyStat, perform the following step
`https://<companyname>.policystat.com/saml2/metadata/`

> [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [PolicyStat Client support team](http://www.policystat.com/support/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [PolicyStat Client support team](https://rldatix.com/services-support/support) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
4. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
When you click the PolicyStat tile in the Access Panel, you should be automatica
- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
active-directory Tidemark Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/tidemark-tutorial.md
To configure Azure AD single sign-on with Tidemark, perform the following steps:
- `https://<subdomain>.tidemark.net/saml`

> [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Tidemark Client support team](http://www.tidemark.com/contact-us) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact Tidemark Client support team to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
To configure Azure AD single sign-on with Tidemark, perform the following steps:
### Configure Tidemark Single Sign-On
-To configure single sign-on on **Tidemark** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Tidemark support team](http://www.tidemark.com/contact-us). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **Tidemark** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to Tidemark support team. They set this setting to have the SAML SSO connection set properly on both sides.
### Create an Azure AD test user
In this section, you enable Britta Simon to use Azure single sign-on by granting
### Create Tidemark test user
-In this section, you create a user called Britta Simon in Tidemark. Work with [Tidemark support team](http://www.tidemark.com/contact-us) to add the users in the Tidemark platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in Tidemark. Work with Tidemark support team to add the users in the Tidemark platform. Users must be created and activated before you use single sign-on.
### Test single sign-on
When you click the Tidemark tile in the Access Panel, you should be automaticall
- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
active-directory Topdesk Public Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/topdesk-public-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<companyname>.topdesk.net/tas/public/login/verify`

> [!NOTE]
- > If the **Identifier** and **Reply URL** values do not get auto populated, you need to enter them manually. For Identifier, follow the pattern as mentioned above and you get Reply URL value from the **Configure TOPdesk - Public Single Sign-On** section which is explained later in the tutorial. The **Sign-on URL** value is not real, so you need to update the value with the actual Sign-On URL. Contact [TOPdesk - Public Client support team](https://help.topdesk.com/saas/enterprise/user/) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > If the **Identifier** and **Reply URL** values do not get auto populated, you need to enter them manually. For Identifier, follow the pattern as mentioned above and you get Reply URL value from the **Configure TOPdesk - Public Single Sign-On** section which is explained later in the tutorial. The **Sign-on URL** value is not real, so you need to update the value with the actual Sign-On URL. Contact [TOPdesk - Public Client support team](https://my.topdesk.com/) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
active-directory Wingspanetmf Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/wingspanetmf-tutorial.md
To configure Azure AD single sign-on with Wingspan eTMF, perform the following s
`https://<customer name>.<instance name>.mywingspan.com/`

> [!NOTE]
- > These values are not real. Update these values with the actual Sign-On URL, Identifier and Reply URL. Contact [Wingspan eTMF Client support team](https://www.wingspan.com/contact-us/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Sign-On URL, Identifier and Reply URL. Contact Wingspan eTMF Client support team to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
To configure Azure AD single sign-on with Wingspan eTMF, perform the following steps.
### Configure Wingspan eTMF Single Sign-On
-To configure single sign-on on **Wingspan eTMF** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Wingspan eTMF support team](https://www.wingspan.com/contact-us/). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **Wingspan eTMF** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the Wingspan eTMF support team. They configure this setting so that the SAML SSO connection is set properly on both sides.
### Create an Azure AD test user
In this section, you enable Britta Simon to use Azure single sign-on by granting access to Wingspan eTMF.
### Create Wingspan eTMF test user
-In this section, you create a user called Britta Simon in Wingspan eTMF. Work with [Wingspan eTMF support team](https://www.wingspan.com/contact-us/) to add the users in the Wingspan eTMF platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in Wingspan eTMF. Work with the Wingspan eTMF support team to add the users in the Wingspan eTMF platform. Users must be created and activated before you use single sign-on.
### Test single sign-on
When you click the Wingspan eTMF tile in the Access Panel, you should be automatically signed in to the Wingspan eTMF application for which you set up SSO.
- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
aks Servicemesh Osm About https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/servicemesh-osm-about.md Binary files differ
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-managed-identity.md
Then, create an AKS cluster:
az aks create -g myResourceGroup -n myManagedCluster --enable-managed-identity
```
-A successful cluster creation using managed identities contains this service principal profile information:
-
-```output
-"servicePrincipalProfile": {
- "clientId": "msi"
- }
-```
-
-Use the following command to query objectid of your control plane managed identity:
-
-```azurecli-interactive
-az aks show -g myResourceGroup -n myManagedCluster --query "identity"
-```
-
-The result should look like:
-
-```output
-{
- "principalId": "<object_id>",
- "tenantId": "<tenant_id>",
- "type": "SystemAssigned"
-}
-```
- Once the cluster is created, you can then deploy your application workloads to the new cluster and interact with it just as you've done with service-principal-based AKS clusters.
-> [!NOTE]
-> For creating and using your own VNet, static IP address, or attached Azure disk where the resources are outside of the worker node resource group, use the PrincipalID of the cluster System Assigned Managed Identity to perform a role assignment. For more information on role assignment, see [Delegate access to other Azure resources](kubernetes-service-principal.md#delegate-access-to-other-azure-resources).
->
-> Permission grants to cluster Managed Identity used by Azure Cloud provider may take up 60 minutes to populate.
Finally, get credentials to access the cluster:

```azurecli-interactive
az aks get-credentials --resource-group myResourceGroup --name myManagedCluster
```

## Update an AKS cluster to managed identities (Preview)

You can now update an AKS cluster currently working with service principals to work with managed identities by using the following CLI commands.
az aks update -g <RGName> -n <AKSName> --enable-managed-identity --assign-identity <UserAssignedIdentityResourceID>
> [!NOTE]
> Once the system-assigned or user-assigned identities have been updated to managed identity, perform an `az aks nodepool upgrade --node-image-only` on your nodes to complete the update to managed identity.
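Spelled out in full, that node image upgrade looks like the following sketch; the resource group, cluster, and node pool names are placeholders:

```azurecli-interactive
# Sketch: complete the move to managed identity on the existing nodes
az aks nodepool upgrade \
  --resource-group <RGName> \
  --cluster-name <AKSName> \
  --name <NodePoolName> \
  --node-image-only
```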
+## Obtain and use the system-assigned managed identity for your AKS cluster
+
+Confirm your AKS cluster is using managed identity with the following CLI command:
+
+```azurecli-interactive
+az aks show -g <RGName> -n <ClusterName> --query "servicePrincipalProfile"
+```
+
+If the cluster is using managed identities, you will see a `clientId` value of "msi". A cluster using a service principal will instead show the object ID. For example:
+
+```output
+{
+ "clientId": "msi"
+}
+```
+
+After verifying the cluster is using managed identities, you can find the control plane system-assigned identity's object ID with the following command:
+
+```azurecli-interactive
+az aks show -g <RGName> -n <ClusterName> --query "identity"
+```
+
+```output
+{
+ "principalId": "<object-id>",
+ "tenantId": "<tenant-id>",
+ "type": "SystemAssigned",
+ "userAssignedIdentities": null
+}
+```
+
+> [!NOTE]
+> For creating and using your own VNet, static IP address, or attached Azure disk where the resources are outside of the worker node resource group, use the PrincipalID of the cluster System Assigned Managed Identity to perform a role assignment. For more information on role assignment, see [Delegate access to other Azure resources](kubernetes-service-principal.md#delegate-access-to-other-azure-resources).
+>
+> Permission grants to the cluster managed identity used by the Azure cloud provider may take up to 60 minutes to populate.
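As a minimal sketch of the role assignment described in the note above, assuming a pre-created VNet that lives outside the worker node resource group (all names and IDs are placeholders):

```azurecli-interactive
# Read the control plane identity's principal ID from the cluster
PRINCIPAL_ID=$(az aks show -g <RGName> -n <ClusterName> --query "identity.principalId" -o tsv)

# Grant that identity access to a resource outside the node resource group,
# for example Network Contributor on the custom VNet
az role assignment create \
  --assignee-object-id "$PRINCIPAL_ID" \
  --assignee-principal-type ServicePrincipal \
  --role "Network Contributor" \
  --scope <VNetResourceID>
```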
## Bring your own control plane MI

A custom control plane identity enables access to be granted to the existing identity prior to cluster creation. This feature enables scenarios such as using a custom VNET or outboundType of UDR with a pre-created managed identity.
api-management Import Function App As Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/import-function-app-as-api.md
na Previously updated : 04/22/2020 Last updated : 04/16/2021
Azure API Management supports importing Azure Function Apps as new APIs or appending them to existing APIs. The process automatically generates a host key in the Azure Function App, which is then assigned to a named value in Azure API Management.
-This article walks through importing an Azure Function App as an API in Azure API Management. It also describes the testing process.
+This article walks through importing and testing an Azure Function App as an API in Azure API Management.
You will learn how to:
> * Append an Azure Function App to an API
> * View the new Azure Function App host key and Azure API Management named value
> * Test the API in the Azure portal
-> * Test the API in the developer portal
## Prerequisites
-* Complete the quickstart [Create an Azure API Management instance](get-started-create-service-instance.md).
-* Make sure you have an Azure Functions app in your subscription. For more information, see [Create an Azure Function App](../azure-functions/functions-get-started.md). It has to contain Functions with HTTP trigger and authorization level setting set to *Anonymous* or *Function*.
+* Complete the [Create an Azure API Management instance](get-started-create-service-instance.md) quickstart.
+* Make sure you have an Azure Functions app in your subscription. For more information, see [Create an Azure Function App](../azure-functions/functions-get-started.md). Functions must have an HTTP trigger and the authorization level set to *Anonymous* or *Function*.
+
+> [!NOTE]
+> You can use the API Management Extension for Visual Studio Code to import and manage your APIs. Follow the [API Management Extension tutorial](visual-studio-code-tutorial.md) to install and get started.
[!INCLUDE [api-management-navigate-to-instance.md](../../includes/api-management-navigate-to-instance.md)]
Follow the steps below to create a new API from an Azure Function App.
2. In the **Add a new API** list, select **Function App**.
- ![Screenshot that shows the Function App tile.](./media/import-function-app-as-api/add-01.png)
+ :::image type="content" source="./media/import-function-app-as-api/add-01.png" alt-text="Screenshot that shows the Function App tile.":::
3. Click **Browse** to select Functions for import.
- ![Screenshot that highlights the Browse button.](./media/import-function-app-as-api/add-02.png)
+ :::image type="content" source="./media/import-function-app-as-api/add-02.png" alt-text="Screenshot that highlights the Browse button.":::
4. Click on the **Function App** section to choose from the list of available Function Apps.
- ![Screenshot that highlights the Function App section.](./media/import-function-app-as-api/add-03.png)
+ :::image type="content" source="./media/import-function-app-as-api/add-03.png" alt-text="Screenshot that highlights the Function App section.":::
5. Find the Function App you want to import Functions from, click on it and press **Select**.
- ![Screenshot that highlights the Function App you want to import Functions from and the Select button.](./media/import-function-app-as-api/add-04.png)
+ :::image type="content" source="./media/import-function-app-as-api/add-04.png" alt-text="Screenshot that highlights the Function App you want to import Functions from and the Select button.":::
6. Select the Functions you would like to import and click **Select**.
+ * You can only import Functions based on an HTTP trigger with *Anonymous* or *Function* authorization levels.
+
+ :::image type="content" source="./media/import-function-app-as-api/add-05.png" alt-text="Screenshot that highlights the Functions to import and the Select button.":::
- ![Screenshot that highlights the Functions to import and the Select button.](./media/import-function-app-as-api/add-05.png)
+7. Switch to the **Full** view and assign **Product** to your new API.
+8. If needed, specify other fields during creation or configure them later via the **Settings** tab.
+ * The settings are explained in the [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
- > [!NOTE]
- > You can import only Functions that are based off HTTP trigger and have the authorization level setting set to *Anonymous* or *Function*.
+ >[!NOTE]
+ > Products are associations of one or more APIs offered to developers through the developer portal. First, developers must subscribe to a product to get access to the API. Once subscribed, they get a subscription key for any API in that product. As creator of the API Management instance, you are an administrator and are subscribed to every product by default.
+ >
+ > Each API Management instance comes with two default sample products:
+ > - **Starter**
+ > - **Unlimited**
-7. Switch to the **Full** view and assign **Product** to your new API. If needed, specify other fields during creation or configure them later by going to the **Settings** tab. The settings are explained in the [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
-8. Click **Create**.
+9. Click **Create**.
## <a name="append-azure-function-app-to-api"></a> Append Azure Function App to an existing API
Follow the steps below to append Azure Function App to an existing API.
2. Choose an API you want to import an Azure Function App to. Click **...** and select **Import** from the context menu.
- ![Screenshot that highlights the Import menu option.](./media/import-function-app-as-api/append-01.png)
+ :::image type="content" source="./media/import-function-app-as-api/append-function-api-1.png" alt-text="Screenshot that highlights the Import menu option.":::
3. Click on the **Function App** tile.
- ![Screenshot that highlights the Function App tile.](./media/import-function-app-as-api/append-02.png)
+ :::image type="content" source="./media/import-function-app-as-api/append-function-api-2.png" alt-text="Screenshot that highlights the Function App tile.":::
4. In the pop-up window, click **Browse**.
- ![Screenshot that shows the Browse button.](./media/import-function-app-as-api/append-03.png)
+ :::image type="content" source="./media/import-function-app-as-api/append-function-api-3.png" alt-text="Screenshot that shows the Browse button.":::
5. Click on the **Function App** section to choose from the list of available Function Apps.
- ![Screenshot that highlights the list of Function Apps.](./media/import-function-app-as-api/add-03.png)
+ :::image type="content" source="./media/import-function-app-as-api/add-03.png" alt-text="Screenshot that highlights the list of Function Apps.":::
6. Find the Function App you want to import Functions from, click on it and press **Select**.
- ![Screenshot that highlights the Function App you want to import functions from.](./media/import-function-app-as-api/add-04.png)
+ :::image type="content" source="./media/import-function-app-as-api/add-04.png" alt-text="Screenshot that highlights the Function App you want to import functions from.":::
7. Select the Functions you would like to import and click **Select**.
- ![Screenshot that highlights the functnios you'd like to import.](./media/import-function-app-as-api/add-05.png)
+ :::image type="content" source="./media/import-function-app-as-api/add-05.png" alt-text="Screenshot that highlights the functions you'd like to import.":::
8. Click **Import**.
- ![Append from Function App](./media/import-function-app-as-api/append-04.png)
+ :::image type="content" source="./media/import-function-app-as-api/append-function-api-4.png" alt-text="Append from Function App":::
## <a name="authorization"></a> Authorization
Import of an Azure Function App automatically generates:
* Host key inside the Function App with the name apim-{*your Azure API Management service instance name*},
* Named value inside the Azure API Management instance with the name {*your Azure Function App instance name*}-key, which contains the created host key.
-For APIs created after April 4th 2019, the host key is passed in HTTP requests from API Management to the Function App in a header. Older APIs pass the host key as [a query parameter](../azure-functions/functions-bindings-http-webhook-trigger.md#api-key-authorization). This behavior may be changed through the `PATCH Backend` [REST API call](/rest/api/apimanagement/2019-12-01/backend/update#backendcredentialscontract) on the *Backend* entity associated with the Function App.
+For APIs created after April 4th 2019, the host key is passed in HTTP requests from API Management to the Function App in a header. Older APIs pass the host key as [a query parameter](../azure-functions/functions-bindings-http-webhook-trigger.md#api-key-authorization). You can change this behavior through the `PATCH Backend` [REST API call](/rest/api/apimanagement/2019-12-01/backend/update#backendcredentialscontract) on the *Backend* entity associated with the Function App.
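As a hedged sketch of that `PATCH Backend` call using `az rest` (the subscription ID, resource group, service name, backend ID, and named value reference are all placeholders; check the linked REST reference for the exact contract):

```azurecli-interactive
# Sketch only: switch the Function App backend credentials from a header to the "code" query parameter
az rest --method PATCH \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ApiManagement/service/<apim-name>/backends/<backend-id>?api-version=2019-12-01" \
  --body '{"properties": {"credentials": {"header": {}, "query": {"code": ["{{<function-app-name>-key}}"]}}}}'
```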
> [!WARNING]
-> Removing or changing value of either the Azure Function App host key or Azure API Management named value will break the communication between the services. The values do not sync automatically.
+> Removing or changing either the Azure Function App host key value or the Azure API Management named value will break the communication between the services. The values do not sync automatically.
>
> If you need to rotate the host key, make sure the named value in Azure API Management is also modified.
For APIs created after April 4th 2019, the host key is passed in HTTP requests from API Management to the Function App in a header.
1. Navigate to your Azure Function App instance.
-2. Select **Function App settings** from the overview.
+ :::image type="content" source="./media/import-function-app-as-api/keys-01.png" alt-text="Screenshot that highlights selecting your Function app instance.":::
- ![Screenshot that highlights the Function Apps settings option.](./media/import-function-app-as-api/keys-02-a.png)
+2. In the **Functions** section of the side navigation menu, select **App keys**.
-3. The key is located in the **Host Keys** section.
+ :::image type="content" source="./media/import-function-app-as-api/keys-02b.png" alt-text="Screenshot that highlights the Function Apps settings option.":::
- ![Screenshot that highlights the Host Keys section.](./media/import-function-app-as-api/keys-02-b.png)
+3. Find the keys under the **Host keys** section.
+
+ :::image type="content" source="./media/import-function-app-as-api/keys-03.png" alt-text="Screenshot that highlights the Host Keys section.":::
### Access the named value in Azure API Management

Navigate to your Azure API Management instance and select **Named values** from the menu on the left. The Azure Function App key is stored there.
-![Add from Function App](./media/import-function-app-as-api/keys-01.png)
## <a name="test-in-azure-portal"></a> Test the new API in the Azure portal You can call operations directly from the Azure portal. Using the Azure portal is a convenient way to view and test the operations of an API. + 1. Select the API that you created in the preceding section. 2. Select the **Test** tab.
-3. Select an operation.
+3. Select the operation you want to test.
- The page displays fields for query parameters and fields for the headers. One of the headers is **Ocp-Apim-Subscription-Key**, for the subscription key of the product that is associated with this API. If you created the API Management instance, you are an administrator already, so the key is filled in automatically.
+ * The page displays fields for query parameters and headers.
+ * One of the headers is "Ocp-Apim-Subscription-Key", for the product subscription key associated with this API.
+ * As creator of the API Management instance, you are an administrator already, so the key is filled in automatically.
4. Select **Send**.
- The back end responds with **200 OK** and some data.
+ * When the test succeeds, the back end responds with **200 OK** and some data.
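Outside the portal, the same test can be approximated with a plain HTTP call. This sketch assumes hypothetical values for the instance name, API URL suffix, operation path, and subscription key:

```bash
# Hypothetical values throughout; the subscription key comes from your APIM subscription
curl -i \
  -H "Ocp-Apim-Subscription-Key: <your-subscription-key>" \
  "https://<apim-instance>.azure-api.net/<api-url-suffix>/<operation-path>"
```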
[!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)]
api-management Import Logic App As Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/import-logic-app-as-api.md
na Previously updated : 04/22/2020 Last updated : 04/16/2021
In this article, you learn how to:
>
> - Import a Logic App as an API
> - Test the API in the Azure portal
-> - Test the API in the Developer portal
## Prerequisites
In this article, you learn how to:
## <a name="create-api"> </a>Import and publish a back-end API 1. Navigate to your API Management service in the Azure portal and select **APIs** from the menu.
-2. Select **Logic App** from the **Add a new API** list.
+1. Select **Logic App** from the **Add a new API** list.
- ![Logic app](./media/import-logic-app-as-api/logic-app-api.png)
+ :::image type="content" source="./media/import-logic-app-as-api/logic-app-select.png" alt-text="Select logic app category":::
-3. Press **Browse** to see the list of Logic Apps with HTTP trigger in your subscription. (Note that Logic Apps without HTTP trigger will not appear in the list.)
-4. Select the app. API Management finds the swagger associated with the selected app, fetches it, and imports it.
-5. Add an API URL suffix. The suffix is a name that identifies this specific API in this API Management instance. It has to be unique in this API Management instance.
-6. Publish the API by associating the API with a product. In this case, the "_Unlimited_" product is used. If you want for the API to be published and be available to developers, add it to a product. You can do it during API creation or set it later.
+1. Press **Browse** to see the list of Logic Apps with HTTP trigger in your subscription.
+ * Logic apps *without* HTTP trigger will not appear in the list.
- Products are associations of one or more APIs. You can include a number of APIs and offer them to developers through the developer portal. Developers must first subscribe to a product to get access to the API. When they subscribe, they get a subscription key that is good for any API in that product. If you created the API Management instance, you are an administrator already, so you are subscribed to every product by default.
+ :::image type="content" source="./media/import-logic-app-as-api/browse-logic-apps.png" alt-text="Browse for existing logic apps with correct trigger":::
- By default, each API Management instance comes with two sample products:
+1. Select the logic app.
- - **Starter**
- - **Unlimited**
+ :::image type="content" source="./media/import-logic-app-as-api/select-logic-app-import-2.png" alt-text="Select logic app":::
-7. Enter other API settings. You can set the values during creation or configure them later by going to the **Settings** tab. The settings are explained in the [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
-8. Select **Create**.
+1. API Management finds the swagger associated with the selected app, fetches it, and imports it.
+1. Add an API URL suffix.
+ * The suffix uniquely identifies this specific API in this API Management instance.
+
+ :::image type="content" source="./media/import-logic-app-as-api/create-from-logic-app.png" alt-text="Finish up fields":::
+
+1. If you want the API to be published and available to developers, switch to the **Full** view and associate it with a **Product**. We use the *"Unlimited"* product in this example.
+ * You can add your API to a product either during creation or later via the **Settings** tab.
+
+ >[!NOTE]
+ > Products are associations of one or more APIs offered to developers through the developer portal. First, developers must subscribe to a product to get access to the API. Once subscribed, they get a subscription key for any API in that product. As creator of the API Management instance, you are an administrator and are subscribed to every product by default.
+ >
+ > Each API Management instance comes with two default sample products:
+ > - **Starter**
+ > - **Unlimited**
+
+1. Enter other API settings.
+ * You can set these values during creation or later by going to the **Settings** tab. The settings are explained in the [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
+1. Select **Create**.
## Test the API in the Azure portal

Operations can be called directly from the Azure portal, which provides a convenient way to view and test the operations of an API.

1. Select the API you created in the previous step.
2. Press the **Test** tab.
-3. Select some operation.
+3. Select the operation you want to test.
- The page displays fields for query parameters and fields for the headers. One of the headers is "Ocp-Apim-Subscription-Key", for the subscription key of the product that is associated with this API. If you created the API Management instance, you are an administrator already, so the key is filled in automatically.
+ * The page displays fields for query parameters and headers.
+ * One of the headers is "Ocp-Apim-Subscription-Key", for the product subscription key associated with this API.
+ * As creator of the API Management instance, you are an administrator already, so the key is filled in automatically.
4. Press **Send**.
- Backend responds with **200 OK** and some data.
+ * When the test succeeds, the backend responds with **200 OK** and data.
[!INCLUDE [api-management-navigate-to-instance.md](../../includes/api-management-append-apis.md)]

>[!NOTE]
->Every Logic App has **manual-invoke** operation. If you want to comprise your API of multiple logic apps, in order not to have collision, you need to rename the function.
+>Every Logic App has a **manual-invoke** operation. If your API comprises multiple logic apps, rename the function to avoid a collision.
[!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)]
azure-monitor Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/availability-overview.md
You can set up availability tests for any HTTP or HTTPS endpoint that is accessible from the public internet.
## Types of availability tests
-There are four types of availability tests:
+There are three types of availability tests:
* [URL ping test](monitor-web-app-availability.md): This category has two simple tests you can create through the portal.
- - Basic ping test: A simple test that you can create in the Azure portal.
- - Standard ping test: A more advanced standard ping test with features like using any HTTP request methods(for example `GET`,`HEAD`,`POST`,etc) or adding custom headers.
* [Multi-step web test](availability-multistep.md): A recording of a sequence of web requests, which can be played back to test more complex scenarios. Multi-step web tests are created in Visual Studio Enterprise and uploaded to the portal for execution.
* [Custom Track Availability Tests](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability): If you decide to create a custom application to run availability tests, the `TrackAvailability()` method can be used to send the results to Application Insights.
azure-monitor Monitor Web App Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/monitor-web-app-availability.md
The name "URL ping test" is a bit of a misnomer. To be clear, these tests are not making any use of ICMP (Internet Control Message Protocol) to check your site's availability. Instead they use more advanced HTTP request functionality to validate whether an endpoint is responding. They also measure the performance associated with that response, and adds the ability to set custom success criteria coupled with more advanced features like parsing dependent requests, and allowing for retries.
-There are two types of URL ping test you can create, basic and standard ping tests.
-
-> [!NOTE]
-> Basic and Standard ping tests are currently in public preview. These preview versions are provided without a service level agreement. Certain features might not be supported or might have constrained capabilities.
-
-Basic vs Standard:
-
-- Basic is restricted to five locations per test.
-- Standard tests can have custom headers or request body.
-- Standard tests can use any HTTP request method while basic can only use `GET`.
-- SSL certificate lifetime check alerts you of a set period time before your certificate expires.
-- Standard tests are a paid feature.
-
-> [!NOTE]
-> There are currently no additional charges for the preview feature Standard Ping tests. Pricing for features that are in preview will be announced in the future and a notice provided prior to start of billing. Should you choose to continue using Standard Ping tests after the notice period, you will be billed at the applicable rate.
-
-## Create a URL ping test
- In order to create an availability test, you need use an existing Application Insight resource or [create an Application Insights resource](create-new-resource.md).
-To create your first availability request, open the Availability pane and selectΓÇ» Create Test & choose your test SKU.
+To create your first availability request, open the Availability pane and select **Create Test**.
+
+## Create a test
-|Setting | Explanation |
-|--|-|
+|Setting | Explanation |
+|--|--|
|**URL** | The URL can be any web page you want to test, but it must be visible from the public internet. The URL can include a query string. So, for example, you can exercise your database a little. If the URL resolves to a redirect, we follow it up to 10 redirects.|
-|**Parse dependent requests**| Test requests images, scripts, style files, and other files that are part of the web page under test. The recorded response time includes the time taken to get these files. The test fails if any of these resources cannot be successfully downloaded within the timeout for the whole test. If the option is not checked, the test only requests the file at the URL you specified. Enabling this option results in a stricter check. The test could fail for cases, which may not be noticeable when manually browsing the site. |
-|**Enable retries**| When the test fails, it is retried after a short interval. A failure is reported only if three successive attempts fail. Subsequent tests are then performed at the usual test frequency. Retry is temporarily suspended until the next success. This rule is applied independently at each test location. **We recommend this option**. On average, about 80% of failures disappear on retry.|
-| **SSL certificate validation test** | You can verify the SSL certificate on your website to make sure it is correctly installed, valid, trusted and doesn't give any errors to any of your users. |
-| **Proactive lifetime check** | This enables you to define a set time period before your SSL certificate expires. Once it expires your test will fail. |
+|**Parse dependent requests**| Test requests images, scripts, style files, and other files that are part of the web page under test. The recorded response time includes the time taken to get these files. The test fails if any of these resources cannot be successfully downloaded within the timeout for the whole test. If the option is not checked, the test only requests the file at the URL you specified. Enabling this option results in a stricter check. The test could fail for cases, which may not be noticeable when manually browsing the site.
+|**Enable retries**|When the test fails, it is retried after a short interval. A failure is reported only if three successive attempts fail. Subsequent tests are then performed at the usual test frequency. Retry is temporarily suspended until the next success. This rule is applied independently at each test location. **We recommend this option**. On average, about 80% of failures disappear on retry.|
|**Test frequency**| Sets how often the test is run from each test location. With a default frequency of five minutes and five test locations, your site is tested on average every minute.|
-|**Test locations**| Are the places from where our servers send web requests to your URL. **Our minimum number of recommended test locations is five** in order to insure that you can distinguish problems in your website from network issues. You can select more than five locations with standard test and up to 16 locations.|
+|**Test locations**| Are the places from where our servers send web requests to your URL. **Our minimum number of recommended test locations is five** in order to ensure that you can distinguish problems in your website from network issues. You can select up to 16 locations.|
**If your URL is not visible from the public internet, you can choose to selectively open up your firewall to allow only the test transactions through**. To learn more about the firewall exceptions for our availability test agents, consult the [IP address guide](./ip-addresses.md#availability-tests).

> [!NOTE]
> We strongly recommend testing from multiple locations with **a minimum of five locations**. This is to prevent false alarms that may result from transient issues with a specific location. In addition we have found that the optimal configuration is to have the **number of test locations be equal to the alert location threshold + 2**.
-## Standard Test
--
-|Setting | Explanation |
-|--|-|
-| **Custom headers** | Key value pairs that define the operating parameters. |
-| **HTTP request verb** | Indicate what action you would like to take with your request. IF your chosen verb is not available in the UI you can deploy a standard test using Azure Resource Monitor with the desired choice. |
-| **Request body** | Custom data associated with your HTTP request. You can upload type own files type in your content, or disable this feature. For raw body content we support TEXT, JSON, HTML, XML, and JavaScript. |
## Success criteria

|Setting| Explanation
To create your first availability request, open the Availability pane and select **Create Test**.
The following population tags can be used for the geo-location attribute when deploying an availability URL ping test using Azure Resource Manager.
-#### Azure gov
+### Azure gov
| Display Name | Population Name |
|--|--|
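For illustration, a population tag is referenced through the `Locations` array of the `Microsoft.Insights/webtests` resource in an Azure Resource Manager template. The fragment below is a sketch, not a complete resource, and `<population-name>` stands in for a tag from the table above:

```json
{
  "type": "Microsoft.Insights/webtests",
  "apiVersion": "2015-05-21",
  "properties": {
    "Locations": [
      { "Id": "<population-name>" }
    ]
  }
}
```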
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-supported.md
description: List of metrics available for each resource type with Azure Monitor
Previously updated : 04/01/2021 Last updated : 04/15/2021

# Supported metrics with Azure Monitor
For important additional information, see [Monitoring Agents Overview](../agents/agents-overview.md).
|Unusable Cores|Yes|Unusable Cores|Count|Average|Number of unusable cores|Scenario, ClusterName|
|Unusable Nodes|Yes|Unusable Nodes|Count|Average|Number of unusable nodes|Scenario, ClusterName|
+## microsoft.bing/accounts
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|BlockedCalls|Yes|Blocked Calls|Count|Total|Number of calls that exceeded the rate or quota limit|ApiName, ServingRegion, StatusCode|
+|ClientErrors|Yes|Client Errors|Count|Total|Number of calls with any client error (HTTP status code 4xx)|ApiName, ServingRegion, StatusCode|
+|DataIn|Yes|Data In|Bytes|Total|Incoming request Content-Length in bytes|ApiName, ServingRegion, StatusCode|
+|DataOut|Yes|Data Out|Bytes|Total|Outgoing response Content-Length in bytes|ApiName, ServingRegion, StatusCode|
+|Latency|Yes|Latency|Milliseconds|Average|Latency in milliseconds|ApiName, ServingRegion, StatusCode|
+|ServerErrors|Yes|Server Errors|Count|Total|Number of calls with any server error (HTTP status code 5xx)|ApiName, ServingRegion, StatusCode|
+|SuccessfulCalls|Yes|Successful Calls|Count|Total|Number of successful calls (HTTP status code 2xx)|ApiName, ServingRegion, StatusCode|
+|TotalCalls|Yes|Total Calls|Count|Total|Total number of calls|ApiName, ServingRegion, StatusCode|
+|TotalErrors|Yes|Total Errors|Count|Total|Number of calls with any error (HTTP status code 4xx or 5xx)|ApiName, ServingRegion, StatusCode|
+ ## Microsoft.Blockchain/blockchainMembers
For important additional information, see [Monitoring Agents Overview](../agents/agents-overview.md).
|IoTConnectorMeasurementIngestionLatencyMs|Yes|Average Group Stage Latency|Milliseconds|Average|The time period between when the IoT Connector received the device data and when the data is processed by the FHIR conversion stage.|Operation, ConnectorName|
|IoTConnectorNormalizedEvent|Yes|Number of Normalized Messages|Count|Sum|The total number of mapped normalized values outputted from the normalization stage of the Azure IoT Connector for FHIR.|Operation, ConnectorName|
|IoTConnectorTotalErrors|Yes|Total Error Count|Count|Sum|The total number of errors logged by the Azure IoT Connector for FHIR|Name, Operation, ErrorType, ErrorSeverity, ConnectorName|
+|ServiceApiErrors|Yes|Service Errors|Count|Sum|The total number of internal server errors generated by the service.|Protocol, Authentication, Operation, ResourceType, StatusCode, StatusCodeClass, StatusCodeText|
+|ServiceApiLatency|Yes|Service Latency|Milliseconds|Average|The response latency of the service.|Protocol, Authentication, Operation, ResourceType, StatusCode, StatusCodeClass, StatusCodeText|
+|ServiceApiRequests|Yes|Service Requests|Count|Sum|The total number of requests received by the service.|Protocol, Authentication, Operation, ResourceType, StatusCode, StatusCodeClass, StatusCodeText|
|TotalErrors|Yes|Total Errors|Count|Sum|The total number of internal server errors encountered by the service.|Protocol, StatusCode, StatusCodeClass, StatusCodeText|
|TotalLatency|Yes|Total Latency|Milliseconds|Average|The response latency of the service.|Protocol|
|TotalRequests|Yes|Total Requests|Count|Sum|The total number of requests received by the service.|Protocol|
For important additional information, see [Monitoring Agents Overview](../agents/agents-overview.md).
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
+|c2d.commands.failure|Yes|Failed command invocations|Count|Total|The count of all failed command requests initiated from IoT Central|No Dimensions|
+|c2d.commands.requestSize|Yes|Request size of command invocations|Bytes|Total|Request size of all command requests initiated from IoT Central|No Dimensions|
+|c2d.commands.responseSize|Yes|Response size of command invocations|Bytes|Total|Response size of all command responses initiated from IoT Central|No Dimensions|
+|c2d.commands.success|Yes|Successful command invocations|Count|Total|The count of all successful command requests initiated from IoT Central|No Dimensions|
|c2d.property.read.failure|Yes|Failed Device Property Reads from IoT Central|Count|Total|The count of all failed property reads initiated from IoT Central|No Dimensions|
|c2d.property.read.success|Yes|Successful Device Property Reads from IoT Central|Count|Total|The count of all successful property reads initiated from IoT Central|No Dimensions|
|c2d.property.update.failure|Yes|Failed Device Property Updates from IoT Central|Count|Total|The count of all failed property updates initiated from IoT Central|No Dimensions|
For important additional information, see [Monitoring Agents Overview](../agents/agents-overview.md).
|d2c.property.read.success|Yes|Successful Device Property Reads from Devices|Count|Total|The count of all successful property reads initiated from devices|No Dimensions|
|d2c.property.update.failure|Yes|Failed Device Property Updates from Devices|Count|Total|The count of all failed property updates initiated from devices|No Dimensions|
|d2c.property.update.success|Yes|Successful Device Property Updates from Devices|Count|Total|The count of all successful property updates initiated from devices|No Dimensions|
+|d2c.telemetry.ingress.allProtocol|Yes|Total Telemetry Message Send Attempts|Count|Total|Number of device-to-cloud telemetry messages attempted to be sent to the IoT Central application|No Dimensions|
+|d2c.telemetry.ingress.success|Yes|Total Telemetry Messages Sent|Count|Total|Number of device-to-cloud telemetry messages successfully sent to the IoT Central application|No Dimensions|
|dataExport.error|Yes|Data Export Errors|Count|Total|Number of errors encountered for data export|exportId, exportDisplayName, destinationId, destinationDisplayName|
|dataExport.messages.filtered|Yes|Data Export Messages Filtered|Count|Total|Number of messages that have passed through filters in data export|exportId, exportDisplayName, destinationId, destinationDisplayName|
|dataExport.messages.received|Yes|Data Export Messages Received|Count|Total|Number of messages incoming to data export, before filtering and enrichment processing|exportId, exportDisplayName, destinationId, destinationDisplayName|
|dataExport.messages.written|Yes|Data Export Messages Written|Count|Total|Number of messages written to a destination|exportId, exportDisplayName, destinationId, destinationDisplayName|
+|dataExport.statusChange|Yes|Data Export Status Change|Count|Total|Number of status changes|exportId, exportDisplayName, destinationId, destinationDisplayName, status|
+|deviceDataUsage|Yes|Total Device Data Usage|Bytes|Total|Bytes transferred to and from any devices connected to IoT Central application|No Dimensions|
+|provisionedDeviceCount|No|Total Provisioned Devices|Count|Average|Number of devices provisioned in IoT Central application|No Dimensions|
## microsoft.keyvault/managedhsms
For important additional information, see [Monitoring Agents Overview](../agents/agents-overview.md).
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|ConnectionCount|Yes|Connection Count|Count|Maximum|The amount of user connection.|No Dimensions|
|InboundTraffic|Yes|Inbound Traffic|Bytes|Total|The inbound traffic of service|No Dimensions|
|OutboundTraffic|Yes|Outbound Traffic|Bytes|Total|The outbound traffic of service|No Dimensions|
+|TotalConnectionCount|Yes|Connection Count|Count|Maximum|The amount of user connection.|No Dimensions|
## Microsoft.Sql/managedInstances
For important additional information, see [Monitoring Agents Overview](../agents/agents-overview.md).
## Microsoft.Synapse/workspaces

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|BuiltinSqlPoolDataProcessedBytes|No|Data processed (bytes)|Bytes|Total|Amount of data processed by queries|No Dimensions|
For important additional information, see [Monitoring Agents Overview](../agents/agents-overview.md).
|IntegrationActivityRunsEnded|No|Activity runs ended|Count|Total|Count of integration activities that succeeded, failed, or were cancelled|Result, FailureType, Activity, ActivityType, Pipeline|
|IntegrationPipelineRunsEnded|No|Pipeline runs ended|Count|Total|Count of integration pipeline runs that succeeded, failed, or were cancelled|Result, FailureType, Pipeline|
|IntegrationTriggerRunsEnded|No|Trigger Runs ended|Count|Total|Count of integration triggers that succeeded, failed, or were cancelled|Result, FailureType, Trigger|
+|SQLStreamingBackloggedInputEventSources|No|Backlogged input events (preview)|Count|Total|This is a preview metric available in East US, West Europe. Number of input events sources backlogged.|ResourceName, SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance|
+|SQLStreamingConversionErrors|No|Data conversion errors (preview)|Count|Total|This is a preview metric available in East US, West Europe. Number of output events that could not be converted to the expected output schema. Error policy can be changed to 'Drop' to drop events that encounter this scenario.|ResourceName, SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance|
+|SQLStreamingDeserializationError|No|Input deserialization errors (preview)|Count|Total|This is a preview metric available in East US, West Europe. Number of input events that could not be deserialized.|ResourceName, SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance|
+|SQLStreamingEarlyInputEvents|No|Early input events (preview)|Count|Total|This is a preview metric available in East US, West Europe. Number of input events which application time is considered early compared to arrival time, according to early arrival policy.|ResourceName, SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance|
+|SQLStreamingInputEventBytes|No|Input event bytes (preview)|Count|Total|This is a preview metric available in East US, West Europe. Amount of data received by the streaming job, in bytes. This can be used to validate that events are being sent to the input source.|ResourceName, SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance|
+|SQLStreamingInputEvents|No|Input events (preview)|Count|Total|This is a preview metric available in East US, West Europe. Number of input events.|ResourceName, SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance|
+|SQLStreamingInputEventsSourcesPerSecond|No|Input sources received (preview)|Count|Total|This is a preview metric available in East US, West Europe. Number of input events sources per second.|ResourceName, SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance|
+|SQLStreamingLateInputEvents|No|Late input events (preview)|Count|Total|This is a preview metric available in East US, West Europe. Number of input events which application time is considered late compared to arrival time, according to late arrival policy.|ResourceName, SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance|
+|SQLStreamingOutOfOrderEvents|No|Out of order events (preview)|Count|Total|This is a preview metric available in East US, West Europe. Number of Event Hub Events (serialized messages) received by the Event Hub Input Adapter, received out of order that were either dropped or given an adjusted timestamp, based on the Event Ordering Policy.|ResourceName, SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance|
+|SQLStreamingOutputEvents|No|Output events (preview)|Count|Total|This is a preview metric available in East US, West Europe. Number of output events.|ResourceName, SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance|
+|SQLStreamingOutputWatermarkDelaySeconds|No|Watermark delay (preview)|Count|Maximum|This is a preview metric available in East US, West Europe. Output watermark delay in seconds.|ResourceName, SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance|
+|SQLStreamingResourceUtilization|No|Resource % utilization (preview)|Percent|Maximum|This is a preview metric available in East US, West Europe. Resource utilization expressed as a percentage. High utilization indicates that the job is using close to the maximum allocated resources.|ResourceName, SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance|
+|SQLStreamingRuntimeErrors|No|Runtime errors (preview)|Count|Total|This is a preview metric available in East US, West Europe. Total number of errors related to query processing (excluding errors found while ingesting events or outputting results).|ResourceName, SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance|
## Microsoft.Synapse/workspaces/bigDataPools
azure-monitor Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/security-baseline.md
If using live streaming APM capabilities, make the channel secure with a secret API key in addition to the instrumentation key.
- [How to create a Key Vault](../key-vault/secrets/quick-create-portal.md)
-- [How to provide Key Vault authentication with a managed identity](/azure/key-vault/general/assign-access=policy-portal)
+- [How to provide Key Vault authentication with a managed identity](/azure/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-nonaad)
**Responsibility**: Customer
azure-sql Doc Changes Updates Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/doc-changes-updates-release-notes.md
ms.devlang: Previously updated : 03/10/2021 Last updated : 04/17/2021

# What's new in Azure SQL Database & SQL Managed Instance?
The following features are enabled in the SQL Managed Instance deployment model
|Issue |Date discovered |Status |Date resolved |
|||||
+|[Changing the connection type does not affect connections through the failover group endpoint](#changing-the-connection-type-does-not-affect-connections-through-the-failover-group-endpoint)|Jan 2021|Has Workaround||
|[Procedure sp_send_dbmail may transiently fail when @query parameter is used](#procedure-sp_send_dbmail-may-transiently-fail-when--parameter-is-used)|Jan 2021|Has Workaround||
|[Distributed transactions can be executed after removing Managed Instance from Server Trust Group](#distributed-transactions-can-be-executed-after-removing-managed-instance-from-server-trust-group)|Oct 2020|Has Workaround||
|[Distributed transactions cannot be executed after Managed Instance scaling operation](#distributed-transactions-cannot-be-executed-after-managed-instance-scaling-operation)|Oct 2020|Has Workaround||
The following features are enabled in the SQL Managed Instance deployment model
|Database mail feature with external (non-Azure) mail servers using secure connection||Resolved|Oct 2019|
|Contained databases not supported in SQL Managed Instance||Resolved|Aug 2019|
+### Changing the connection type does not affect connections through the failover group endpoint
+
+If an instance participates in an [auto-failover group](https://docs.microsoft.com/azure/azure-sql/database/auto-failover-group-overview), changing the instance's [connection type](https://docs.microsoft.com/azure/azure-sql/managed-instance/connection-types-overview) does not take effect for the connections established through the failover group listener endpoint.
+
+**Workaround**: Drop and recreate the auto-failover group after changing the connection type.
### Procedure sp_send_dbmail may transiently fail when @query parameter is used

Procedure sp_send_dbmail may transiently fail when the `@query` parameter is used. When this issue occurs, every second execution of procedure sp_send_dbmail fails with error `Msg 22050, Level 16, State 1` and message `Failed to initialize sqlcmd library with error number -2147467259`. To see this error properly, the procedure should be called with the default value 0 for the parameter `@exclude_query_output`; otherwise, the error will not be propagated.
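A minimal repro sketch using `sqlcmd`; the server name, Database Mail profile, and recipient are placeholders:

```bash
# Call sp_send_dbmail with @exclude_query_output = 0 so the transient error is propagated
sqlcmd -S "<managed-instance-name>.database.windows.net" -d msdb -G \
  -Q "EXEC dbo.sp_send_dbmail @profile_name='<dbmail-profile>', @recipients='user@example.com', @subject='Test', @query='SELECT 1;', @exclude_query_output=0;"
```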
azure-sql Elastic Pool Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/elastic-pool-manage.md
To create and manage SQL Database elastic pools and pooled databases, use these
|[Elastic pools - Delete](/rest/api/sql/elasticpools/delete)|Deletes the elastic pool.|
|[Elastic pools - Get](/rest/api/sql/elasticpools/get)|Gets an elastic pool.|
|[Elastic pools - List by server](/rest/api/sql/elasticpools/listbyserver)|Returns a list of elastic pools in a server.|
-|[Elastic pools - Update](/rest/api/sql/2020-11-01-preview/elasticpools/update
-)|Updates an existing elastic pool.|
+|[Elastic pools - Update](/rest/api/sql/2020-11-01-preview/elasticpools/update)|Updates an existing elastic pool.|
|[Elastic pool activities](/rest/api/sql/elasticpoolactivities)|Returns elastic pool activities.| |[Elastic pool database activities](/rest/api/sql/elasticpooldatabaseactivities)|Returns activity on databases inside of an elastic pool.| |[Databases - Create or update](/rest/api/sql/databases/createorupdate)|Creates a new database or updates an existing database.|
azure-sql Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/security-baseline.md
In addition, you can stream Azure SQL diagnostics telemetry into Azure SQL Analy
**Guidance**: Azure Active Directory (Azure AD) does not have the concept of default passwords. When provisioning an Azure SQL Database instance, it is recommended that you choose to integrate authentication with Azure AD. -- [How to configure and manage Azure AD authentication with Azure SQL](/azure/sql-database/azure-sql/database/authentication-aad-configure)
+- [How to configure and manage Azure AD authentication with Azure SQL](/azure/azure-sql/database/authentication-aad-configure)
**Responsibility**: Customer
azure-sql Db2 To Sql On Azure Vm Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/db2-to-sql-on-azure-vm-guide.md
For additional assistance, see the following resources, which were developed in
|||
|[Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.|
|[Db2 zOS data assets discovery and assessment package](https://github.com/microsoft/DataMigrationTeam/tree/master/DB2%20zOS%20Data%20Assets%20Discovery%20and%20Assessment%20Package)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.|
|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/microsoft/DataMigrationTeam/tree/master/IBM%20DB2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
|[Db2 LUW pure scale on Azure - setup guide](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/DB2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. Although business requirements will differ, the same basic pattern applies. This architectural pattern can also be used for OLAP applications on Azure.|
azure-sql Performance Guidelines Best Practices Vm Size https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-vm-size.md
There is typically a trade-off between optimizing for costs and optimizing for performance.
Review the following checklist for a brief overview of the VM size best practices that the rest of the article covers in greater detail:

-- Use VM sizes with 4 or more vCPU like the [Standard_M8-4ms](/../../virtual-machines/m-series), the [E4ds_v4](../../../virtual-machines/edv4-edsv4-series.md#edv4-series), or the [DS12_v2](../../../virtual-machines/dv2-dsv2-series-memory.md#dsv2-series-11-15) or higher.
+- Use VM sizes with 4 or more vCPU like the [Standard_M8-4ms](/azure/virtual-machines/m-series), the [E4ds_v4](../../../virtual-machines/edv4-edsv4-series.md#edv4-series), or the [DS12_v2](../../../virtual-machines/dv2-dsv2-series-memory.md#dsv2-series-11-15) or higher.
- Use [memory optimized](../../../virtual-machines/sizes-memory.md) virtual machine sizes for the best performance of SQL Server workloads.
-- The [DSv2 11-15](../../../virtual-machines/dv2-dsv2-series-memory.md), [Edsv4](../../../virtual-machines/edv4-edsv4-series.md) series, the [M-](../../../virtual-machines/m-series.md), and the [Mv2-](../../../virtual-machines/mv2-series.md) series offer the optimal memory-to-vCore ratio required for OLTP workloads. Both M series VMs offer the highest memory-to-vCore ratio required for mission critical workloads and are also ideal for data warehouse workloads.
+- The [DSv2 11-15](../../../virtual-machines/dv2-dsv2-series-memory.md), [Edsv4](../../../virtual-machines/edv4-edsv4-series.md) series, the [M-](/azure/virtual-machines/m-series), and the [Mv2-](../../../virtual-machines/mv2-series.md) series offer the optimal memory-to-vCore ratio required for OLTP workloads. Both M series VMs offer the highest memory-to-vCore ratio required for mission critical workloads and are also ideal for data warehouse workloads.
- Consider a higher memory-to-vCore ratio for mission critical and data warehouse workloads.
- Leverage the Azure Virtual Machine marketplace images as the SQL Server settings and storage options are configured for optimal SQL Server performance.
- Collect the target workload's performance characteristics and use them to determine the appropriate VM size for your business.
The [memory optimized virtual machine sizes](../../../virtual-machines/sizes-mem
### M, Mv2, and Mdsv2 series
-The [M-series](../../../virtual-machines/m-series.md) offers vCore counts and memory for some of the largest SQL Server workloads.
+The [M-series](/azure/virtual-machines/m-series) offers vCore counts and memory for some of the largest SQL Server workloads.
The [Mv2-series](../../../virtual-machines/mv2-series.md) has the highest vCore counts and memory and is recommended for mission critical and data warehouse workloads. Mv2-series instances are memory optimized VM sizes providing unparalleled computational performance to support large in-memory databases and workloads with a high memory-to-CPU ratio that is perfect for relational database servers, large caches, and in-memory analytics.
-The [Standard_M64ms](../../../virtual-machines/m-series.md) has a 28 memory-to-vCore ratio for example.
+The [Standard_M64ms](/azure/virtual-machines/m-series) has a 28 memory-to-vCore ratio for example.
[Mdsv2 Medium Memory series](../../../virtual-machines/msv2-mdsv2-series.md) is a new M-series, currently in [preview](https://aka.ms/Mv2MedMemoryPreview), that offers a range of M-series level Azure virtual machines with a mid-tier memory offering. These machines are well suited for SQL Server workloads that require a minimum memory-to-vCore ratio of 10, up to 30.
The vCPU count can be constrained to one-half to one-quarter of the original VM size.
These new VM sizes have a suffix that specifies the number of active vCPUs to make them easier to identify.
-For example, the [M64-32ms](../../../virtual-machines/constrained-vcpu.md) requires licensing only 32 SQL Server vCores with the memory, I/O, and throughput of the [M64ms](../../../virtual-machines/m-series.md) and the [M64-16ms](../../../virtual-machines/constrained-vcpu.md) requires licensing only 16 vCores. Though while the [M64-16ms](../../../virtual-machines/constrained-vcpu.md) has a quarter of the SQL Server licensing cost of the M64ms, the compute cost of the virtual machine will be the same.
+For example, the [M64-32ms](../../../virtual-machines/constrained-vcpu.md) requires licensing only 32 SQL Server vCores with the memory, I/O, and throughput of the [M64ms](/azure/virtual-machines/m-series), and the [M64-16ms](../../../virtual-machines/constrained-vcpu.md) requires licensing only 16 vCores. While the [M64-16ms](../../../virtual-machines/constrained-vcpu.md) has a quarter of the SQL Server licensing cost of the M64ms, the compute cost of the virtual machine is the same.
> [!NOTE] > - Medium to large data warehouse workloads may still benefit from [constrained vCore VMs](../../../virtual-machines/constrained-vcpu.md), but data warehouse workloads are commonly characterized by fewer users and processes addressing larger amounts of data through query plans that run in parallel.
To learn more, see the other articles in this series:
For security best practices, see [Security considerations for SQL Server on Azure Virtual Machines](security-considerations-best-practices.md).
-Review other SQL Server Virtual Machine articles at [SQL Server on Azure Virtual Machines Overview](sql-server-on-azure-vm-iaas-what-is-overview.md). If you have questions about SQL Server virtual machines, see the [Frequently Asked Questions](frequently-asked-questions-faq.md).
+Review other SQL Server Virtual Machine articles at [SQL Server on Azure Virtual Machines Overview](sql-server-on-azure-vm-iaas-what-is-overview.md). If you have questions about SQL Server virtual machines, see the [Frequently Asked Questions](frequently-asked-questions-faq.md).
baremetal-infrastructure High Availability Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/workloads/oracle/high-availability-features.md
Title: High availability features for Oracle on Azure BareMetal
description: Learn about the features available in BareMetal for an Oracle database. Previously updated : 04/15/2021 Last updated : 04/16/2021 # High availability features for Oracle on Azure BareMetal
Oracle offers many features to build a resilient platform for running Oracle dat
## Flashback Database
-The [Flashback Database](https://docs.oracle.com/en/database/oracle/oracle-database/21/rcmrf/FLASHBACK-DATABASE.html#GUID-584AC79A-40C5-45CA-8C63-DED3BE3A4511) feature comes in Oracle Database Enterprise Edition. It rewinds the database to a specific point in time. This feature is distinct from a [Recovery Manager (RMAN)](https://docs.oracle.com/en/cloud/paas/db-backup-cloud/csdbb/performing-general-restore-and-recovery-operations.html) point-in-time recovery in that it rewinds from the current point in time, rather than forward-winds after a restore. It results in much faster completion times.
+The [Flashback Database](https://docs.oracle.com/en/database/oracle/oracle-database/21/rcmrf/FLASHBACK-DATABASE.html#GUID-584AC79A-40C5-45CA-8C63-DED3BE3A4511) feature comes in Oracle Database Enterprise Edition. Flashback Database rewinds the database to a specific point in time. This feature differs from a [Recovery Manager (RMAN)](https://docs.oracle.com/en/cloud/paas/db-backup-cloud/csdbb/performing-general-restore-and-recovery-operations.html) point-in-time recovery in that it rewinds from the current time, rather than forward-winds after a restore. The result is that Flashback Database gives much faster completion times.
You can use this feature alongside [Oracle Data Guard](https://docs.oracle.com/en/database/oracle/oracle-database/19/sbydb/preface.html#GUID-B6209E95-9DA8-4D37-9BAD-3F000C7E3590). Flashback Database allows a database administrator to reinstantiate a failed database back into a Data Guard configuration without a full RMAN restore and recovery. This feature allows you to restore disaster recovery capability (and any offloaded reporting and backup benefits with Active Data Guard) much faster.
-You can use this feature instead of a time-delayed redo on the standby database. A standby database can be flashed back to a point prior to a problem.
+You can use this feature instead of a time-delayed redo on the standby database. A standby database can be flashed back to a point before the problem arose.
The Oracle Database keeps flashback logs in the fast recovery area (FRA). These logs are separate from the redo logs and require more space within the FRA. By default, 24 hours of flashback logs are kept, but you can change this setting per your requirements.
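For reference, the flashback retention window is controlled by Oracle's `DB_FLASHBACK_RETENTION_TARGET` initialization parameter, which is specified in minutes; the 24-hour default corresponds to a value of 1440.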
As shown in the following figure from Oracle's [High Availability Overview and B
If one instance fails, the service continues on all other remaining instances. Each database deployed on the solution will be in a RAC configuration of n+1, where n is the minimum processing power required to support the service.
-Oracle Database services are used to allow connections to fail over between nodes when an instance fails transparently. Such failures may be planned or unplanned. Working with the application (fast application notification events), when an instance is made unavailable, the service is relocated to a surviving node. The service moves to a node specified in the service configuration as either preferred or available.
+Oracle Database services allow connections to fail over transparently between nodes when an instance fails. Such failures may be planned or unplanned. Working with the application (through fast application notification events), when an instance becomes unavailable, the service is moved to a surviving node. The service moves to a node specified in the service configuration as either preferred or available.
Another key feature of Oracle Database services is the ability to start a service only when a database has a specific role. This feature is used when there is a Data Guard failover. All patterns deployed using Data Guard are required to link a database service to a Data Guard role.
For example, two services could be created, MY\_DB\_APP and MY\_DB\_AS. The MY\_
## Oracle Data Guard
-With Data Guard, you can maintain an identical copy of a database on separate physical hardware. Ideally, the hardware should be geographically separated. Data Guard places no limit on the distance, although distance has a bearing on modes of protection. Increased distance adds latency between sites, which can cause some options (such as synchronous replication) to no longer be viable.
+With Data Guard, you can maintain an identical copy of a database on separate physical hardware. Ideally, that hardware should be geographically removed from the primary database. Data Guard places no limit on the distance, although distance has a bearing on modes of protection. Increased distance adds latency between sites, which can cause some options (such as synchronous replication) to no longer be viable.
Data Guard offers advantages over storage-level replication: - As the replication is database-aware, only relevant traffic is replicated. - Certain workloads can generate high input/output on temporary tablespaces, which aren't required on standby and so aren't replicated.-- Validation on the replicated blocks occurs at the standby database, ensuring that physical corruptions introduced on the primary database aren't replicated to the standby database.
+- Validation on the replicated blocks occurs at the standby database, so physical corruptions on the primary database aren't replicated to the standby database.
- Prevents logical intra-block corruptions and lost-write corruptions. It also eliminates the risk that mistakes made by storage administrators will be replicated to the standby. Redo can be delayed for a pre-determined period, so user errors aren't immediately replicated to the standby. ## Azure NetApp Files snapshots
-The NetApp Files storage solution used in BareMetal allows you to create snapshots of volumes. Snapshots allow you to revert a filesystem to a specific point in time quickly. Snapshot technologies allow recovery time objective (RTO) times that are only a fraction of the time associated with restoring a database backup.
+The NetApp Files storage solution used in BareMetal allows you to create snapshots of volumes. Snapshots allow you to revert a file system to a specific point in time quickly. Snapshot technologies allow recovery time objective (RTO) times that are a fraction of the time needed to restore a database backup.
-Snapshot functionality for Oracle databases is available through Azure NetApp SnapCenter. SnapCenter allows you to schedule and automate volume snapshot creation and restoration.
+Snapshot functionality for Oracle databases is available through Azure NetApp SnapCenter. SnapCenter enables snapshots for backup, SnapVault gives you offline vaulting, and Snap Clone enables self-service restore and other operations.
## Recovery Manager
Recovery Manager (RMAN) is the preferred utility for taking physical database ba
RMAN allows you to take hot or cold database backups. You can use these backups to create standby databases or to duplicate databases to clone environments. RMAN also has a restore validation function. This function reads a backup set and determines whether you can use it to recover the database to a specific point in time.
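For reference, the restore validation described above is exposed through RMAN's standard validation commands, such as `RESTORE DATABASE VALIDATE`, which read the backup sets without writing any datafiles.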
-Because RMAN is an Oracle-provided utility, it can read the internal structure of database files. This allows you to run physical and logical corruption checks during backup and restore operations. You can also recover database datafiles, and restore individual datafiles and tablespaces to a specific point in time. These are advantages RMAN offers over storage snapshots. RMAN backups provide a last line of defense against full data loss when you can't use snapshots.
+Because RMAN is an Oracle-provided utility, it reads the internal structure of database files. This allows you to run physical and logical corruption checks during backup and restore operations. You can also recover database datafiles, and restore individual datafiles and tablespaces to a specific point in time. These are advantages RMAN offers over storage snapshots. RMAN backups provide a last line of defense against full data loss when you can't use snapshots.
## Next steps
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/overview.md
No customer configuration is necessary to enable zone-resiliency. Zone-resilienc
## Next steps * [Quickstart: Detect anomalies in your time series data using the Anomaly Detector](quickstarts/client-libraries.md)
-* The Anomaly Detector API [online demo](https://notebooks.azure.com/AzureAnomalyDetection/projects/anomalydetector)
+* The Anomaly Detector API [online demo](https://github.com/Azure-Samples/AnomalyDetector/tree/master/ipython-notebook)
* The Anomaly Detector [REST API reference](https://aka.ms/anomaly-detector-rest-api-ref)
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
The [bookmark element](speech-synthesis-markup.md#bookmark-element) allows you t
- **C++/C#/Java/Python**: Moved to the latest version of GStreamer (1.18.3) to add support for transcribing any media format on Windows, Linux and Android. See documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-use-codec-compressed-audio-input-streams). - **C++/C#/Java/Objective-C/Python**: Added support for decoding compressed TTS/synthesized audio to the SDK. If you set output audio format to PCM and GStreamer is available on your system, the SDK will automatically request compressed audio from the service to save bandwidth and decode the audio on the client. You can set `SpeechServiceConnection_SynthEnableCompressedAudioTransmission` to `false` to disable this feature. Details for [C++](https://docs.microsoft.com/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#propertyid), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.propertyid?view=azure-dotnet), [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.propertyid?view=azure-java-stable), [Objective-C](https://docs.microsoft.com/objectivec/cognitive-services/speech/spxpropertyid), [Python](https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.propertyid?view=azure-python).-- **JavaScript**: Node.js users can now use the [`AudioConfig.fromWavFileInput` API](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/audioconfig?view=azure-node-latest#fromWavFileInput_File_). This addresses [GitHub issue #252](https://github.com/microsoft/cognitive-services-speech-sdk-JavaScript/issues/252).
+- **JavaScript**: Node.js users can now use the [`AudioConfig.fromWavFileInput` API](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/audioconfig?view=azure-node-latest#fromWavFileInput_File_). This addresses [GitHub issue #252](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/252).
- **C++/C#/Java/Objective-C/Python**: Added `GetVoicesAsync()` method for TTS to return all available synthesis voices. Details for [C++](https://docs.microsoft.com/cpp/cognitive-services/speech/speechsynthesizer#getvoicesasync), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesizer?view=azure-dotnet#methods), [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speechsynthesizer?view=azure-java-stable#methods), [Objective-C](https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechsynthesizer#getvoiceasync), and [Python](https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer?view=azure-python#methods). - **C++/C#/Java/JavaScript/Objective-C/Python**: Added `VisemeReceived` event for TTS/speech synthesis to return synchronous viseme animation. See documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-speech-synthesis-viseme). - **C++/C#/Java/JavaScript/Objective-C/Python**: Added `BookmarkReached` event for TTS. You can set bookmarks in the input SSML and get the audio offsets for each bookmark. See documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-synthesis-markup#bookmark-element).
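To make the compressed-audio opt-out above concrete, here's a minimal Python sketch. It assumes the Python SDK exposes the property named in the release note on its `PropertyId` enum under the same name; the subscription key, region, and synthesized text are placeholders.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder key and region -- substitute your own Speech resource values.
speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")

# Assumption: the property from the release note is exposed on PropertyId under
# the same name. Setting it to "false" opts out of compressed audio transmission,
# so the service sends uncompressed audio even when GStreamer is on the client.
speech_config.set_property(
    speechsdk.PropertyId.SpeechServiceConnection_SynthEnableCompressedAudioTransmission,
    "false")

# Synthesize to the default speaker; the audio arrives uncompressed from the service.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Hello, world.").get()
print(result.reason)
```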
container-instances Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/security-baseline.md
To see how Container Instances completely maps to the Azure Security Benchmark,
Control outbound network access from a subnet delegated to Azure Container Instances by using Azure Firewall. -- [Deploy container instances into an Azure virtual network](/azure/container-instances/container-instance-vnet)
+- [Deploy container instances into an Azure virtual network](/azure/container-instances/container-instances-vnet)
- [How to deploy and configure Azure Firewall](../firewall/tutorial-firewall-deploy-portal.md)
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-setup-rbac.md
description: Learn how to configure role-based access control with Azure Active
Previously updated : 03/30/2021 Last updated : 04/16/2021
The Azure Cosmos DB data plane RBAC is built on concepts that are commonly found
## <a id="permission-model"></a> Permission model
+> [!IMPORTANT]
+> This permission model only covers database operations that let you read and write data. It does **not** cover any kind of management operations, like creating containers or changing their throughput. This means that you **cannot use any Azure Cosmos DB data plane SDK** to authenticate management operations with an AAD identity. Instead, you must use [Azure RBAC](role-based-access-control.md) through:
+> - [ARM templates](manage-with-templates.md)
+> - [Azure PowerShell scripts](manage-with-powershell.md)
+> - [Azure CLI scripts](manage-with-cli.md)
+> - [Azure management libraries](https://azure.github.io/azure-sdk/releases/latest/index.html).
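To make that boundary concrete, here's a minimal sketch of what the data plane RBAC *does* cover: data operations performed with an AAD identity through a data plane SDK, shown here with Python. The account URL, database, container, item ID, and partition key are placeholders, and it assumes a role assignment granting the relevant read actions has already been created.

```python
from azure.identity import DefaultAzureCredential
from azure.cosmos import CosmosClient

# Resolve an AAD identity (user, service principal, or managed identity).
credential = DefaultAzureCredential()

# Placeholder account URL -- substitute your own Cosmos DB account.
client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential=credential)

# Data operations such as point reads are authorized through the actions listed
# in the table below (for example,
# Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/read).
container = client.get_database_client("<database>").get_container_client("<container>")
item = container.read_item(item="<item-id>", partition_key="<partition-key>")
print(item)
```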
+ The table below lists all the actions exposed by the permission model. | Name | Corresponding database operation(s) |
Wildcards are supported at both *containers* and *items* levels:
- `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/*` - `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/*`
-> [!IMPORTANT]
-> This permission model only covers database operations that let you read and write data. It does **not** cover any kind of management operations, like creating containers or changing their throughput. To authenticate management operations with an AAD identity, use [Azure RBAC](role-based-access-control.md) instead.
- ### <a id="metadata-requests"></a> Metadata requests When using Azure Cosmos DB SDKs, these SDKs issue read-only metadata requests during initialization and to serve specific data requests. These metadata requests fetch various configuration details such as:
cosmos-db Managed Identity Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/managed-identity-based-authentication.md Binary files differ
data-factory Wrangling Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/wrangling-functions.md
Previously updated : 01/19/2021 Last updated : 04/16/2021 # Transformation functions in Power Query for data wrangling
The following M functions add or transform columns: [Table.AddColumn](/powerquer
* Numeric arithmetic * Text concatenation
-* Date andTime Arithmetic (Arithmetic operators, [Date.AddDays](/powerquery-m/date-adddays), [Date.AddMonths](/powerquery-m/date-addmonths), [Date.AddQuarters](/powerquery-m/date-addquarters), [Date.AddWeeks](/powerquery-m/date-addweeks), [Date.AddYears](/powerquery-m/date-addyears))
+* Date and Time Arithmetic (Arithmetic operators, [Date.AddDays](/powerquery-m/date-adddays), [Date.AddMonths](/powerquery-m/date-addmonths), [Date.AddQuarters](/powerquery-m/date-addquarters), [Date.AddWeeks](/powerquery-m/date-addweeks), [Date.AddYears](/powerquery-m/date-addyears))
* Durations can be used for date and time arithmetic, but must be transformed into another type before being written to a sink (Arithmetic operators, [#duration](/powerquery-m/sharpduration), [Duration.Days](/powerquery-m/duration-days), [Duration.Hours](/powerquery-m/duration-hours), [Duration.Minutes](/powerquery-m/duration-minutes), [Duration.Seconds](/powerquery-m/duration-seconds), [Duration.TotalDays](/powerquery-m/duration-totaldays), [Duration.TotalHours](/powerquery-m/duration-totalhours), [Duration.TotalMinutes](/powerquery-m/duration-totalminutes), [Duration.TotalSeconds](/powerquery-m/duration-totalseconds)) * Most standard, scientific, and trigonometric numeric functions (All functions under [Operations](/powerquery-m/number-functions#operations), [Rounding](/powerquery-m/number-functions#rounding), and [Trigonometry](/powerquery-m/number-functions#trigonometry) *except* Number.Factorial, Number.Permutations, and Number.Combinations) * Replacement ([Replacer.ReplaceText](/powerquery-m/replacer-replacetext), [Replacer.ReplaceValue](/powerquery-m/replacer-replacevalue), [Text.Replace](/powerquery-m/text-replace), [Text.Remove](/powerquery-m/text-remove))
Keep and Remove Top, Keep Range (corresponding M functions,
| Row level error handling | Row level error handling is currently not supported. For example, to filter out non-numeric values from a column, one approach would be to transform the text column to a number. Every cell that fails to transform will be in an error state and will need to be filtered. This scenario isn't possible in scaled-out M. | | Table.Transpose | Not supported | | Table.Pivot | Not supported |
+| Table.SplitColumn | Partially supported |
+
+## M script workarounds
+
+### For ```SplitColumn```, there is an alternative for splitting by length and by position
+
+* Table.AddColumn(Source, "First characters", each Text.Start([Email], 7), type text)
+* Table.AddColumn(#"Inserted first characters", "Text range", each Text.Middle([Email], 4, 9), type text)
+
+This option is accessible from the **Extract** option on the ribbon.
+
+![Power Query Add Column](media/wrangling-data-flow/pq-split.png)
+
+### For ```Table.CombineColumns```
+
+* Table.AddColumn(RemoveEmailColumn, "Name", each [FirstName] & " " & [LastName])
+ ## Next steps
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-sensors-on-the-cloud.md
Title: Manage sensors and subscriptions in the Defender for IoT portal
+ Title: Manage sensors in the Defender for IoT portal
description: Learn how to onboard, view, and manage sensors in the Defender for IoT portal. Previously updated : 2/18/2021 Last updated : 4/18/2021
-# Manage sensors and subscriptions in the Defender for IoT portal
+# Manage sensors in the Defender for IoT portal
This article describes how to onboard, view, and manage sensors in the [Defender for IoT portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started).
To reactivate a sensor:
9. Select **Activate**.
-## Offboard a subscription
-
-Subscriptions are managed on a monthly basis. When you offboard a subscription, you will be billed for that subscription until the end of the month.
-
-Uninstall all sensors that are associated with the subscription prior to offboarding the subscription. For more information on how to delete a sensor, see [Delete a sensor](#delete-a-sensor).
-
-To offboard a subscription:
-
-1. Navigate to the **Pricing** page.
-1. Select the subscription, and then select the **delete** icon :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/delete-icon.png" border="false":::.
-1. In the confirmation popup, select the checkbox to confirm you have deleted all sensors associated with the subscription.
-
- :::image type="content" source="media/how-to-manage-sensors-on-the-cloud/offboard-popup.png" alt-text="Select the checkbox and select offboard to offboard your sensor.":::
-
-1. Select the **Offboard** button.
-
-The on-premises environment is not affected, but you should uninstall the sensor from the on-premises environment, or reassign the sensor to another subscription, so as to prevent any related data from flowing to the on-premises management console.
- ## Next steps [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md)
iot-central Concepts App Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-app-templates.md
Title: What are application templates in Azure IoT Central | Microsoft Docs description: Azure IoT Central application templates allow you to jump in to IoT solution development.--++ Last updated 12/19/2020
# What are application templates?
-Application templates in Azure IoT Central are a tool to help solution builders kickstart their IoT solution development. You can use app templates for everything from getting a feel for what is possible, to fully customizing and your application for resale to your customers.
+Application templates in Azure IoT Central are a tool to help solution builders kickstart their IoT solution development. You can use app templates for everything from getting a feel for what is possible, to fully customizing your application to resell to your customers.
Application templates consist of:
You choose the application template when you create your application. You can't
## Custom templates
-If you want to create your application from scratch, choose one of the **Custom application** template.
+If you want to create your application from scratch, choose one of the **Custom application** templates.
## Industry focused templates
Azure IoT Central is an industry agnostic application platform. Application temp
- [Healthcare](../healthcare/overview-iot-central-healthcare.md). - Continuous patient monitoring
-## Application versions
-
-Templates are associated with specific IoT Central application versions. You can find the version of an application on the [About your app](./howto-get-app-info.md) page from the **Help** link.
- ## Next steps Now that you know what IoT Central application templates are, get started by [creating an IoT Central Application](quick-deploy-iot-central.md).
iot-central Howto Manage Iot Central From Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-iot-central-from-portal.md
Title: Manage IoT Central from the Azure portal | Microsoft Docs
description: This article describes how to create and manage your IoT Central applications from the Azure portal. -- Previously updated : 02/11/2020++ Last updated : 04/17/2021
[!INCLUDE [iot-central-selector-manage](../../../includes/iot-central-selector-manage.md)]
-Instead of creating and managing IoT Central applications on the [Azure IoT Central application manager](https://aka.ms/iotcentral) website, you can use the [Azure portal](https://portal.azure.com) to manage your applications.
+You can use the [Azure portal](https://portal.azure.com) to create and manage IoT Central applications, similar to the functionality in IoT Central's [application manager](https://apps.azureiotcentral.com/myapps).
## Create IoT Central applications [!INCLUDE [Warning About Access Required](../../../includes/iot-central-warning-contribitorrequireaccess.md)]
+To create an application, navigate to the [Create IoT Central Application](https://ms.portal.azure.com/#create/Microsoft.IoTCentral) page in the Azure portal and fill in the form.
-To create an application, navigate to the [Azure portal](https://ms.portal.azure.com) and select **Create a resource**.
-
-In **Search the Marketplace** bar, type *IoT Central*:
-
-![Management portal: search](media/howto-manage-iot-central-from-portal/image0a1.png)
-
-Select the **IoT Central Application** tile in the search results:
-
-![Management Portal: search results](media/howto-manage-iot-central-from-portal/image0b1.png)
-
-Now, select **Create**:
+![Create IoT Central form](media/howto-manage-iot-central-from-portal/image6a.png)
-![Management portal: IoT Central resource](media/howto-manage-iot-central-from-portal/image0c1.png)
+* **Resource name** is a unique name you can choose for your IoT Central application in your Azure resource group.
-Fill in all the fields in the form. This form is similar to the form you fill out to create applications on the [Azure IoT Central application manager](https://aka.ms/iotcentral) website. For more information, see the [Create an IoT Central application](quick-deploy-iot-central.md) quickstart.
+* **Application URL** is the URL you can use to access your application.
-![Create IoT Central form](media/howto-manage-iot-central-from-portal/image6a.png)
+* **Location** is the [geography](https://azure.microsoft.com/global-infrastructure/geographies/) where you'd like to create your application. Typically, you should choose the location that's physically closest to your devices to get optimal performance. Azure IoT Central is currently available in the following locations:
+ * Asia Pacific
+ * Australia
+ * Europe
+ * Japan
+ * United Kingdom
+ * United States
-**Location** is the [geography](https://azure.microsoft.com/global-infrastructure/geographies/) where you'd like to create your application. Typically, you should choose the location that's physically closest to your devices to get optimal performance. Azure IoT Central is currently available in the **Australia**, **Asia Pacific**, **Europe**, **United States**, **United Kingdom**, and **Japan** geographies. Once you choose a location, you can't move your application to a different location later.
+ Once you choose a location, you can't move your application to a different location later.
-After filling out all fields, select **Create**.
+After filling out all fields, select **Create**. For more information, see the [Create an IoT Central application](quick-deploy-iot-central.md) quickstart.
## Manage existing IoT Central applications
-If you already have an Azure IoT Central application you can delete it, or move it to a different subscription or resource group in the Azure portal.
+If you already have an Azure IoT Central application, you can delete it, or move it to a different subscription or resource group in the Azure portal.
> [!NOTE] > Applications created using the *free* plan do not require an Azure subscription, and therefore you won't find them listed in your Azure subscription on the Azure portal. You can only see and manage free apps from the IoT Central portal.
-To get started, select **All resources** in the portal. Select **Show hidden types** and start typing the name of your application in **Filter by name** to find it. Then select the IoT Central application you'd like to manage.
+To get started, search for your application in the search bar at the top of the Azure portal. You can also view all your applications by searching for "IoT Central Applications" and selecting the service:
+
+![Screenshot that shows the search results for "IoT Central Applications" with the first service selected.](media/howto-manage-iot-central-from-portal/search-iot-central.png)
-To navigate to the application, select the **IoT Central Application URL**:
+Once you select an application in the search results, the Azure portal shows you its overview. You can navigate to the actual application by selecting the **IoT Central Application URL**:
![Screenshot that shows the "Overview" page with the "IoT Central Application URL" highlighted.](media/howto-manage-iot-central-from-portal/image3.png)
iot-central Howto Manage Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-users-roles.md
Title: Manage users and roles in Azure IoT Central application | Microsoft Docs description: As an administrator, how to manage users and roles in your Azure IoT Central application-- Previously updated : 12/05/2019++ Last updated : 04/16/2021
# Manage users and roles in your IoT Central application
-This article describes how, as an administrator, you can add, edit, and delete users in your Azure IoT Central application. The article also describes how to manage roles in your Azure IoT Central application.
-
-To access and use the **Administration** section, you must be in the **Administrator** role for an Azure IoT Central application. If you create an Azure IoT Central application, you're automatically added to the **Administrator** role for that application.
+This article describes how, as an administrator, you can add, edit, and delete users in your Azure IoT Central application. The article also describes how to manage roles in your application.
## Add users
-Every user must have a user account before they can sign in and access an Azure IoT Central application. Microsoft Accounts and Azure Active Directory accounts are supported in Azure IoT Central. Azure Active Directory groups aren't currently supported in Azure IoT Central.
+Every user must have a user account before they can sign in and access an application. IoT Central currently supports Microsoft accounts and Azure Active Directory accounts, but not Azure Active Directory groups.
For more information, see [Microsoft account help](https://support.microsoft.com/products/microsoft-account?category=manage-account) and [Quickstart: Add new users to Azure Active Directory](../../active-directory/fundamentals/add-users-azure-active-directory.md).
For more information, see [Microsoft account help](https://support.microsoft.com
> [!NOTE] > A user who is in a custom role that grants them the permission to add other users can only add users to a role with the same or fewer permissions than their own role.-
-If an IoT Central user ID is deleted from Azure Active Directory and then readded, the user won't be able to sign in the IoT Central application. To re-enable access, the IoT Central administrator should delete and readd the user in the application.
+ >
 + > If a user is deleted from Azure Active Directory and then added back, they won't be able to sign in to the IoT Central application automatically. To re-enable access, the application's administrator should delete and re-add the user in the application as well.
### Edit the roles that are assigned to users Roles can't be changed after they're assigned. To change the role that's assigned to a user, delete the user, and then add the user again with a different role. > [!NOTE]
-> The roles assigned are specific to IoT Central application and cannot be managed from the Azure Portal.
+> The roles assigned are specific to the IoT Central application and cannot be managed from the Azure Portal.
## Delete users
Users in the **Operator** role can monitor device health and status. They aren't
## Create a custom role
-If your solution requires finer-grained access controls, you can create custom roles with custom sets of permissions. To create a custom role, navigate to the **Roles** page in the **Administration** section of your application. Then select **+ New role**, and add a name and description for your role. Select the permissions your role requires and then select **Save**.
+If your solution requires finer-grained access controls, you can create roles with custom sets of permissions. To create a custom role, navigate to the **Roles** page in the **Administration** section of your application. Then select **+ New role**, and add a name and description for your role. Select the permissions your role requires and then select **Save**.
You can add users to your custom role in the same way that you add users to a built-in role.
You can add users to your custom role in the same way that you add users to a bu
### Custom role options
-When you define a custom role, you choose the set of permissions that a user is granted if they're a member of the role. Some permissions are dependent on others. For example, if you add the **Update application dashboards** permission to a role, the **View application dashboards** permission is automatically added. The following tables summarize the available permissions, and their dependencies, you can use when creating custom roles.
+When you define a custom role, you choose the set of permissions that a user is granted if they're a member of the role. Some permissions are dependent on others. For example, if you add the **Update application dashboards** permission to a role, you also need the **View application dashboards** permission. The following tables summarize the available permissions, and their dependencies, you can use when creating custom roles.
#### Managing devices
When you define a custom role, you choose the set of permissions that a user is
| Update | View <br/> Other dependencies: View device templates and device groups | | Create | View <br/> Other dependencies: View device templates and device groups | | Delete | View <br/> Other dependencies: View device templates and device groups |
-| Execute Commands | Update, View <br/> Other dependencies: View device templates and device groups |
-| Full Control | View, Update, Create, Delete, Execute Commands <br/> Other dependencies: View device templates and device groups |
+| Execute commands | Update, View <br/> Other dependencies: View device templates and device groups |
+| View raw data | View <br/> Other dependencies: View device templates and device groups |
+| Full Control | View, Update, Create, Delete, Execute commands, View raw data <br/> Other dependencies: View device templates and device groups |
**Device groups permissions**
When you define a custom role, you choose the set of permissions that a user is
| View | None <br/> Other dependencies: View device templates and device instances | | Update | View <br/> Other dependencies: View device templates and device instances | | Create | View, Update <br/> Other dependencies: View device templates and device instances |
-| Delete | View <br/> Other dependencies: View device templates and device instances |
+| Delete | View <br/> Other dependencies: View device templates and device instances |
| Full Control | View, Update, Create, Delete <br/> Other dependencies: View device templates and device instances | **Device connectivity management permissions**
When you define a custom role, you choose the set of permissions that a user is
| Name | Dependencies | | - | -- | | Read instance | None <br/> Other dependencies: View device templates, device groups, device instances |
-| Manage instance | None |
+| Manage instance | Read instance <br /> Other dependencies: View device templates, device groups, device instances |
| Read global | None |
-| Manage global | Read Global |
-| Full Control | Read instance, Manage instance, Read global, Manage global. <br/> Other dependencies: View device templates, device groups, device instances |
+| Manage global | Read global |
+| Full Control | Read instance, Manage instance, Read global, Manage global <br/> Other dependencies: View device templates, device groups, device instances |
**Jobs permissions**
When you define a custom role, you choose the set of permissions that a user is
| Export | View <br/> Other dependencies: View device templates, device instances, device groups, dashboards, data export, branding, help links, custom roles, rules | | Full Control | View, Export <br/> Other dependencies: View device templates, device groups, application dashboards, data export, branding, help links, custom roles, rules |
+**Device file upload permissions**
+
+| Name | Dependencies |
+| - | -- |
+| View | None |
+| Manage | View |
+| Full Control | View, Manage |
+ **Billing permissions** | Name | Dependencies |
When you define a custom role, you choose the set of permissions that a user is
| Name | Dependencies | | - | -- |
-| View | None |
-| Create | View |
-| Delete | View |
-| Full Control | View, Create, Delete |
+| View | None <br/> Other dependencies: View custom roles |
+| Create | View <br/> Other dependencies: View custom roles |
+| Delete | View <br/> Other dependencies: View custom roles |
+| Full Control | View, Create, Delete <br/> Other dependencies: View custom roles |
## Next steps
-Now that you've learned about how to manage users and roles in your Azure IoT Central application, the suggested next step is to learn how to [Manage your bill](howto-view-bill.md).
+Now that you've learned how to manage users and roles in your IoT Central application, the suggested next step is to learn how to [Manage your bill](howto-view-bill.md).
iot-central Howto Monitor Application Health https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-monitor-application-health.md
Metrics may differ from the numbers shown on your Azure IoT Central invoice. Thi
- IoT Central [standard pricing plans](https://azure.microsoft.com/pricing/details/iot-central/) include two devices and varying message quotas for free. While the free items are excluded from billing, they're still counted in the metrics. -- IoT Central autogenerates one test device ID for each device template in the application. This device ID is visible on the **Manage test device** page for a device template. Solution builders may choose to [validate their device templates](./overview-iot-central.md#connect-devices) before publishing them by generating code that uses these test device IDs. While these devices are excluded from billing, they're still counted in the metrics.
+- IoT Central autogenerates one test device ID for each device template in the application. This device ID is visible on the **Manage test device** page for a device template. Solution builders may choose to validate their device templates before publishing them by generating code that uses these test device IDs. While these devices are excluded from billing, they're still counted in the metrics.
- While metrics may show a subset of device-to-cloud communication, all communication between the device and the cloud [counts as a message for billing](https://azure.microsoft.com/pricing/details/iot-central/).
iot-central Overview Iot Central Tour https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-tour.md
Title: Take a tour of the Azure IoT Central UI | Microsoft Docs
-description: Become familiar with the key areas of the Azure IoT Central UI that you use to create, manage and use your IoT solution.
--
+description: Become familiar with the key areas of the Azure IoT Central UI that you use to create, manage, and use your IoT solution.
++ Last updated 02/09/2021
# Take a tour of the Azure IoT Central UI
-This article introduces you to the Microsoft Azure IoT Central UI. You can use the UI to create, manage, and use an Azure IoT Central solution and its connected devices.
+This article introduces you to the Azure IoT Central UI. You can use the UI to create, manage, and use an IoT Central application and its connected devices.
## IoT Central homepage
You launch your IoT Central application by navigating to the URL you chose durin
## Navigate your application
-Once you're inside your IoT application, use the left pane to access the different areas. You can expand or collapse the left pane by selecting the three-lined icon on top of the pane:
+Once you're inside your IoT application, use the left pane to access various features. You can expand or collapse the left pane by selecting the three-lined icon on top of the pane:
> [!NOTE] > The items you see in the left pane depend on your user role. Learn more about [managing users and roles](howto-manage-users-roles.md).
Once you're inside your IoT application, use the left pane to access the differe
:::column span="2"::: **Dashboards** displays all application and personal dashboards.
- **Devices** enables you to manage your connected devices - real and simulated.
+ **Devices** enables you to manage all your devices.
**Device groups** lets you view and create collections of devices specified by a query. Device groups are used through the application to perform bulk operations.
- **Rules** enables you to create and edit rules to monitor your devices. Rules are evaluated based on device telemetry and trigger customizable actions.
+ **Rules** enables you to create and edit rules to monitor your devices. Rules are evaluated based on device data and trigger customizable actions.
- **Analytics** lets you view telemetry from your devices graphically.
+ **Analytics** exposes rich capabilities to analyze historical trends and correlate various telemetries from your devices.
**Jobs** enables you to manage your devices at scale by running bulk operations.
- **Device templates** is where you create and manage the characteristics of the devices that connect to your application.
+ **Device templates** enables you to create and manage the characteristics of devices that connect to your application.
- **Data export** enables you to configure a continuous export to external services - such as storage and queues.
+ **Data export** enables you to configure a continuous export to external services such as storage and queues.
- **Administration** is where you can manage your application's settings, customization, billing, users, and roles.
+ **Administration** lets you manage your application's settings, customization, billing, users, and roles.
+
+ **My apps** lets you jump back to the IoT Central app manager.
:::column-end::: :::row-end:::
You can choose between a light theme or a dark theme for the UI:
:::image type="content" source="Media/overview-iot-central-tour/dashboard.png" alt-text="Screenshot of IoT Central Dashboard.":::
-* The dashboard is the first page you see when you sign in to your Azure IoT Central application. You can create and customize multiple application dashboards. Learn more about [adding tiles to your dashboard](howto-add-tiles-to-your-dashboard.md)
+* Dashboard is the first page you see when you sign in to your IoT Central application. You can create and customize multiple application dashboards. Learn more about [adding tiles to your dashboard](howto-add-tiles-to-your-dashboard.md).
* Personal dashboards can also be created to monitor what you care about. To learn more, see the [Create Azure IoT Central personal dashboards](howto-create-personal-dashboards.md) how-to article.
You can choose between a light theme or a dark theme for the UI:
:::image type="content" source="Media/overview-iot-central-tour/devices.png" alt-text="Screenshot of Devices Page.":::
-The explorer page shows the _devices_ in your Azure IoT Central application grouped by _device template_.
+This page shows the devices in your IoT Central application grouped by _device template_.
* A device template defines a type of device that can connect to your application. * A device represents either a real or simulated device in your application.
To learn more, see the [Monitor your devices](./quick-monitor-devices.md) quicks
:::image type="content" source="Media/overview-iot-central-tour/device-groups.png" alt-text="Device Group page":::
-Device group are a collection of related devices. You use device groups to perform bulk operations in your application. To learn more, see the [Use device groups in your Azure IoT Central application](tutorial-use-device-groups.md) article.
+This page lets you create and view device groups in your IoT Central application. You can use device groups to do bulk operations in your application or to analyze data. To learn more, see the [Use device groups in your Azure IoT Central application](tutorial-use-device-groups.md) article.
### Rules :::image type="content" source="Media/overview-iot-central-tour/rules.png" alt-text="Screenshot of Rules Page.":::
-The rules page lets you define rules based on devices' telemetry, state, or events. When a rule fires, it can trigger one or more actions - such as sending an email, notify an external system via webhook alerts, etc. To learn, see the [Configuring rules](tutorial-create-telemetry-rules.md) tutorial.
+This page lets you view and create rules based on device data. When a rule fires, it can trigger one or more actions, such as sending an email or invoking a webhook. To learn more, see the [Configuring rules](tutorial-create-telemetry-rules.md) tutorial.
### Analytics :::image type="content" source="Media/overview-iot-central-tour/analytics.png" alt-text="Screenshot of Analytics page.":::
-The analytics page lets you view telemetry from your devices graphically, across a time series. To learn more, see the [Create analytics for your Azure IoT Central application](howto-create-analytics.md) article.
+Analytics exposes rich capabilities to analyze historical trends and correlate various telemetries from your devices. To learn more, see the [Create analytics for your Azure IoT Central application](howto-create-analytics.md) article.
### Jobs :::image type="content" source="Media/overview-iot-central-tour/jobs.png" alt-text="Jobs Page":::
-The jobs page lets you run bulk operations on your devices. You can update device properties, settings, and execute commands against device groups. To learn more, see the [Run a job](howto-run-a-job.md) article.
+This page lets you view and create jobs for bulk device management operations. You can update device properties and settings, and execute commands against device groups. To learn more, see the [Run a job](howto-run-a-job.md) article.
### Device templates :::image type="content" source="Media/overview-iot-central-tour/templates.png" alt-text="Screenshot of Device Templates.":::
-The device templates page is where you create and manage the device templates in the application. A device template specifies devices characteristics such as:
-
-* Telemetry, state, and event measurements
-* Properties
-* Commands
-* Views
-
-To learn more, see the [Define a new device type in your Azure IoT Central application](howto-set-up-template.md) tutorial.
+The device templates page is where you can view and create device templates in the application. To learn more, see the [Define a new device type in your Azure IoT Central application](howto-set-up-template.md) tutorial.
### Data export
iot-central Overview Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central.md
Title: What is Azure IoT Central | Microsoft Docs
-description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions and helps to reduce the burden and cost of IoT management operations, and development. This article provides an overview of the features of Azure IoT Central.
-- Previously updated : 11/23/2020
+description: IoT Central is a hosted IoT app platform that's secure, scales with you as your business grows, and integrates with your existing business apps. This article provides an overview of the features of Azure IoT Central.
++ Last updated : 04/16/2021
# What is Azure IoT Central?
-IoT Central is an IoT application platform that reduces the burden and cost of developing, managing, and maintaining enterprise-grade IoT solutions. Choosing to build with IoT Central gives you the opportunity to focus time, money, and energy on transforming your business with IoT data, rather than just maintaining and updating a complex and continually evolving IoT infrastructure.
+IoT Central is a hosted IoT app platform that's secure, scales with you as your business grows, and integrates with your existing business apps. Choosing to build with IoT Central gives you the opportunity to focus time, money, and energy on transforming your business with IoT data, rather than just maintaining and updating a complex and continually evolving IoT infrastructure.
-The web UI lets you quickly connect devices, monitor device conditions, create rules, and manage millions of devices and their data throughout their life cycle. Furthermore, it enables you to act on device insights by extending IoT intelligence into line-of-business applications.
-
-This article outlines, for IoT Central:
--- The typical user roles associated with a project.-- How to create your application.-- How to connect your devices to your application-- How to manage your application.-- Azure IoT Edge capabilities in IoT Central.-- How to connect your Azure IoT Edge runtime powered devices to your application.-
-## User roles
-
-The IoT Central documentation refers to four user roles that interact with an IoT Central application:
--- A _solution builder_ is responsible for [creating an application](quick-deploy-iot-central.md), [configuring rules and actions](quick-configure-rules.md), [defining integrations with other services](howto-export-data.md), and further customizing the application for operators and device developers.-- An _operator_ [manages the devices](howto-manage-devices.md) connected to the application.-- An _administrator_ is responsible for administrative tasks such as managing [user roles and permissions](howto-administer.md) within the application.-- A _device developer_ [creates the code that runs on a device](concepts-telemetry-properties-commands.md) or [IoT Edge module](concepts-iot-edge.md) connected to your application.
+IoT Central lets you quickly connect devices, monitor device conditions, create rules, and manage millions of devices and their data throughout their lifecycle. Furthermore, it enables you to act on device insights by extending IoT intelligence into line-of-business applications.
## Create your IoT Central application
-You can quickly deploy a new IoT Central application and then customize it to your specific requirements. Start with a generic _application template_ or with one of the industry-focused application templates for [Retail](../retail/overview-iot-central-retail.md), [Energy](../energy/overview-iot-central-energy.md), [Government](../government/overview-iot-central-government.md), or [Healthcare](../healthcare/overview-iot-central-healthcare.md).
-
-See the [Create a new application](quick-deploy-iot-central.md) quickstart for a walk through of how to create your first application.
-
-## Connect devices
-
-After creating your application, the first step is to create an connect devices. Every device connected to IoT Central uses a _device template_. A device template is the blueprint that defines the characteristics and behavior of a type of device such as the:
--- Telemetry it sends. Examples include temperature and humidity. Telemetry is streaming data.-- Business properties that an operator can modify. Examples include a customer address and a last serviced date.-- Device properties that are set by a device and are read-only in the application. For example, the state of a valve as either open or shut.-- Properties, that an operator sets, that determine the behavior of the device. For example, a target temperature for the device.-- Commands, that an operator can call, that run on a device. For example, a command to remotely reboot a device.-
-Every [device template](howto-set-up-template.md) includes:
+You can quickly create a new IoT Central application and then customize it to your unique requirements. You can start with either a generic _application template_ or one of the industry-focused application templates for [Retail](../retail/overview-iot-central-retail.md), [Energy](../energy/overview-iot-central-energy.md), [Government](../government/overview-iot-central-government.md), or [Healthcare](../healthcare/overview-iot-central-healthcare.md).
-- A _device model_ describing the capabilities a device should implement. The device capabilities include:
+See the [create a new application](quick-deploy-iot-central.md) quickstart for a walk-through of how to create your first application.
- - The telemetry it streams to IoT Central.
- - The read-only properties it uses to report state to IoT Central.
- - The writable properties it receives from IoT Central to set device state.
- - The commands called from IoT Central.
+## Connect your devices
+After creating your application, the first step is to connect your devices. See the [device development overview](./overview-iot-central-developer.md) for an introduction to connecting devices to your IoT Central application.
-- Cloud properties that aren't stored on the device.-- Customizations, dashboards, and forms that are part of your IoT Central application.
+### Device templates
-You have several options for creating device templates:
+Devices in IoT Central are associated with a _device template_. A device template is like a blueprint: it defines the characteristics and behaviors of your devices, such as:
-- Design the device template in IoT Central and then implement its device model in your device code.-- Create a device model using Visual Studio code and publish the model to a repository. Implement your device code from the model, and connect your device to your IoT Central application. IoT Central finds the device model from the repository and creates a simple device template for you.-- Create a device model using Visual Studio code. Implement your device code from the model. Manually import the device model into your IoT Central application and then add any cloud properties, customizations, and dashboards your IoT Central application needs.
+- Telemetries, which represent measurements from sensors, for example, temperature or humidity.
+- Properties, which represent the durable state of a device. Examples include the state of a coolant pump or the target temperature for a device. You can declare properties as read-only or writable. Only devices can update the value of a read-only property. An operator can set the value of a writable property to send to a device.
+- Commands, which are operations that can be triggered on a device, for example, a command to remotely reboot a device.
+- Cloud properties, which are device metadata to store in the IoT Central application, for example, customer address or last serviced date.
-See the [Add a simulated device](quick-create-simulated-device.md) quickstart for a walk through of how to create and connect your first device.
+See the [create a device template](howto-set-up-template.md) article to learn more.
### Customize the UI
-You can also customize the IoT Central application UI for the operators who are responsible for the day-to-day use of the application. Customizations you can make include:
+You can customize the IoT Central application for the operators who are responsible for the day-to-day use of the application, for example:
- Configuring custom dashboards to help operators discover insights and resolve issues faster.-- Configuring custom analytics to explore time series data from your connected devices.-- Defining the layout of properties and settings on a device template.+ ## Manage your devices
-As an operator, you use the IoT Central application to [manage the devices](howto-manage-devices.md) in your IoT Central solution. Operators do tasks such as:
-- Monitoring the devices connected to the application.
-- Troubleshooting and remediating issues with devices.
-- Provisioning new devices.
+With any IoT solution designed to operate at scale, a structured approach to device management is important. It's not enough just to connect your devices to the cloud; you also need to keep them connected and healthy.
-You can [define custom rules and actions](howto-configure-rules.md) that operate over data streaming from connected devices. An operator can enable or disable these rules at the device level to control and automate tasks within the application.
+You can [manage the devices](howto-manage-devices.md) using your IoT Central application to do tasks such as:
-With any IoT solution designed to operate at scale, a structured approach to device management is important. It's not enough just to connect your devices to the cloud, you need to keep your devices connected and healthy. Use the following IoT Central capabilities to manage your devices throughout the application life cycle:
+- Monitoring the devices.
+- Troubleshooting and remediating issues with devices.
+- Performing bulk updates on devices.
### Dashboards
Build [custom rules](tutorial-create-telemetry-rules.md) based on device state and telemetry.
[Jobs](howto-run-a-job.md) let you apply single or bulk updates to devices by setting properties or calling commands.
+### Analytics
+[Analytics](howto-create-analytics.md) exposes rich capabilities to analyze historical trends and correlate telemetry from your devices.
+
## Integrate with other services

As an application platform, IoT Central lets you transform your IoT data into the business insights that drive actionable outcomes. [Rules](./tutorial-create-telemetry-rules.md), [data export](./howto-export-data.md), and the [public REST API](/learn/modules/manage-iot-central-apps-with-rest-api/) are examples of how you can integrate IoT Central with line-of-business applications:

![How IoT Central can transform your IoT data](media/overview-iot-central/transform.png)
-You can generate business insights, such as determining machine efficiency trends or predicting future energy usage on a factory floor, by building custom analytics pipelines to process telemetry from your devices and store the results. Configure data exports in your IoT Central application to export telemetry, device property changes, and device template changes to other services where you can analyze, store, and visualize the data with your preferred tools.
+You can generate business insights, such as determining machine efficiency trends or predicting future energy usage on a factory floor, by building custom analytics pipelines to process telemetry from your devices and store the results. Configure data exports in your IoT Central application to export your data to other services where you can analyze, store, and visualize it with your preferred tools.
### Build custom IoT solutions and integrations with the REST APIs
IoT Central applications are fully hosted by Microsoft, which reduces the administration overhead of managing your applications.
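As a quick illustration of the REST API, the following hedged PowerShell sketch lists the devices in an application. The application subdomain and API token are placeholders, and the `api-version` value may differ for your application.

```powershell
# List devices through the IoT Central public REST API.
# The subdomain, token, and api-version below are placeholder assumptions.
$appSubdomain = 'myiotcentralapp'     # your application's subdomain
$apiToken     = '<api-token>'         # generate under Administration > API tokens

$response = Invoke-RestMethod `
    -Uri "https://$appSubdomain.azureiotcentral.com/api/devices?api-version=1.0" `
    -Headers @{ Authorization = $apiToken }

$response.value | Select-Object id, displayName
```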
## Pricing
-You can create IoT Central application using a 7-day free trial, or use a standard pricing plan.
+You can create an IoT Central application using a 7-day free trial, or use a standard pricing plan.
- Applications you create using the *free* plan are free for seven days and support up to five devices. You can convert them to use a standard pricing plan at any time before they expire.
-- Applications you create using the *standard* plan are billed on a per device basis, you can choose either **Standard 0**, **Standard 1**, or **Standard 2** pricing plan with the first two devices being free. Learn more about [IoT Central pricing](https://aka.ms/iotcentral-pricing).
-
-## Quotas
-
-Each Azure subscription has default quotas that could impact the scope of your IoT solution. Currently, IoT Central limits the number of applications you can deploy in a subscription to 10. If you need to increase this limit, contact [Microsoft support](https://azure.microsoft.com/support/options/).
-
-## Known issues
-
-- Continuous data export doesn't support the Avro format (incompatibility).
-- GeoJSON isn't currently supported.
-- Map tile isn't currently supported.
-- Array schema types aren't supported.
-- Only the C device SDK and the Node.js device and service SDKs are supported.
-- IoT Central is currently available in the United States, Europe, Asia Pacific, Australia, United Kingdom, and Japan locations.
+- Applications you create using the *standard* plan are billed on a per device basis. You can choose either the **Standard 0**, **Standard 1**, or **Standard 2** pricing plan with the first two devices being free. Learn more about [IoT Central pricing](https://aka.ms/iotcentral-pricing).
## Next steps

Now that you have an overview of IoT Central, here are some suggested next steps:

-- If you're a device developer and want to dive into some code, the suggested next step is to [Create and connect a client application to your Azure IoT Central application](./tutorial-connect-device.md).
-- Familiarize yourself with the [Azure IoT Central UI](overview-iot-central-tour.md).
- Get started by [creating an Azure IoT Central application](quick-deploy-iot-central.md).
+- Familiarize yourself with the [Azure IoT Central UI](overview-iot-central-tour.md).
+- If you're a device developer and want to dive into some code, [create and connect a client application to your Azure IoT Central application](./tutorial-connect-device.md).
iot-hub-device-update Import Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/import-update.md
Learn how to import a new update into Device Update for IoT Hub. If you haven't
## Prerequisites
+* An existing update file that you want to deploy to devices. It can be an image file for image-based updating or an [APT Manifest file](device-update-apt-manifest.md) for package-based updating. ([How do I choose?](understand-device-update.md#support-for-a-wide-range-of-update-artifacts))
* [Access to an IoT Hub with Device Update for IoT Hub enabled](create-device-update-account.md).
* An IoT device (or simulator) provisioned for Device Update within IoT Hub.
- * If using a real device, you'll need an update image file for image update, or [APT Manifest file](device-update-apt-manifest.md) for package update.
* [PowerShell 5](/powershell/scripting/install/installing-powershell) or later (includes Linux, macOS and Windows installs)
* Supported browsers:
  * [Microsoft Edge](https://www.microsoft.com/edge)
Learn how to import a new update into Device Update for IoT Hub. If you haven't
## Create Device Update Import Manifest
-1. Ensure that your update image file or APT Manifest file is located in a directory accessible from PowerShell.
+1. If you haven't already done so, obtain an image file or APT manifest file that you want to deploy to devices. This might be from the manufacturer of your devices or a system integrator you work with, or even a group within your organization. Ensure that the update image file or APT Manifest file is located in a directory accessible from PowerShell.
2. Create a text file named **AduUpdate.psm1** in the directory where your update image file or APT Manifest file is located. Then open the [AduUpdate.psm1](https://github.com/Azure/iot-hub-device-update/tree/main/tools/AduCmdlets) PowerShell cmdlet, copy the contents to your text file, and then save the text file.
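As a preview of the steps that follow, here's a hedged sketch of generating an import manifest with the cmdlets from AduUpdate.psm1. It assumes the module exposes `New-AduImportManifest` and `New-AduUpdateCompatibility` as in the repository's samples; all provider, name, version, and file values are hypothetical placeholders.

```powershell
# Import the cmdlets, then generate an import manifest for an update file.
# All identity values below are hypothetical placeholders.
Import-Module .\AduUpdate.psm1

$importManifest = New-AduImportManifest `
    -Provider 'Contoso' -Name 'Sensor' -Version '1.0.0.0' `
    -UpdateType 'microsoft/swupdate:1' -InstalledCriteria '1.0' `
    -Compatibility (New-AduUpdateCompatibility -DeviceManufacturer 'Contoso' -DeviceModel 'Sensor') `
    -Files '.\update.swu'

# Save the manifest next to the update file for the import step.
$importManifest | Out-File '.\importManifest.json' -Encoding utf8
```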
load-balancer Tutorial Multi Availability Sets Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/tutorial-multi-availability-sets-portal.md
+
+ Title: 'Tutorial: Create a load balancer with more than one availability set in the backend pool - Azure portal'
+
+description: In this tutorial, deploy an Azure Load Balancer with more than one availability set in the backend pool.
+ Last updated : 04/16/2021
+# Tutorial: Create a load balancer with more than one availability set in the backend pool using the Azure portal
+
+As part of a high availability deployment, virtual machines are often grouped into multiple availability sets.
+
+Load Balancer supports more than one availability set with virtual machines in the backend pool.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a virtual network
+> * Create a NAT gateway for outbound connectivity
+> * Create a standard SKU Azure Load Balancer
+> * Create four virtual machines and two availability sets
+> * Add virtual machines in the availability sets to the backend pool of the load balancer
+> * Test the load balancer
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Create a virtual network
+
+In this section, you'll create a virtual network for the load balancer and the other resources used in the tutorial.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Virtual network**.
+
+3. In the search results, select **Virtual networks**.
+
+4. Select **+ Create**.
+
+5. In the **Basics** tab of the **Create virtual network**, enter, or select the following information:
+
+ | Setting | Value |
+ | - | |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **Create new**. </br> Enter **TutorLBmultiAVS-rg** in **Name**. |
+ | **Instance details** | |
+ | Name | Enter **myVNet**. |
+ | Region | Select **(US) West US 2**. |
+
+6. Select the **IP addresses** tab, or the **Next: IP Addresses** button at the bottom of the page.
+
+7. In the **IP addresses** tab, under **Subnet name** select **default**.
+
+8. In the **Edit subnet** pane, under **Subnet name** enter **myBackendSubnet**.
+
+9. Select **Save**.
+
+10. Select the **Security** tab, or the **Next: Security** button at the bottom of the page.
+
+11. In the **Security** tab, in **BastionHost** select **Enable**.
+
+12. Enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Bastion name | Enter **MyBastionHost**. |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/27**. |
+ | Public IP address | Select **Create new**. </br> Enter **myBastionIP** in **Name**. |
+
+13. Select the **Review + create** tab, or the blue **Review + create** button at the bottom of the page.
+
+14. Select **Create**.
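
If you prefer scripting, here's a hedged Azure PowerShell (Az.Network) sketch of the same virtual network. The address prefixes are assumptions, since the portal steps above use the defaults, and the Bastion host is omitted for brevity.

```powershell
# Create the resource group, subnet configuration, and virtual network.
# Address prefixes are assumptions based on the portal defaults.
New-AzResourceGroup -Name 'TutorLBmultiAVS-rg' -Location 'westus2'

$subnet = New-AzVirtualNetworkSubnetConfig -Name 'myBackendSubnet' -AddressPrefix '10.1.0.0/24'

New-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'TutorLBmultiAVS-rg' `
    -Location 'westus2' -AddressPrefix '10.1.0.0/16' -Subnet $subnet
```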
+
+## Create NAT gateway
+
+In this section, you'll create a NAT gateway for outbound connectivity of the virtual machines.
+
+1. In the search box at the top of the portal, enter **NAT gateway**.
+
+2. Select **NAT gateways** in the search results.
+
+3. Select **+ Create**.
+
+4. In the **Basics** tab of **Create network address translation (NAT) gateway**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorLBmultiAVS-rg**. |
+ | **Instance details** | |
+ | NAT gateway name | Enter **myNATgateway**. |
+ | Region | Select **(US) West US 2**. |
+ | Availability zone | Select **None**. |
+ | Idle timeout (minutes) | Enter **15**. |
+
+5. Select the **Outbound IP** tab, or select the **Next: Outbound IP** button at the bottom of the page.
+
+6. Select **Create a new public IP address** next to **Public IP addresses** in the **Outbound IP** tab.
+
+7. Enter **myPublicIP-nat** in **Name**.
+
+8. Select **OK**.
+
+9. Select the **Subnet** tab, or select the **Next: Subnet** button at the bottom of the page.
+
+10. Select **myVNet** in the pull-down menu under **Virtual network**.
+
+11. Select the check box next to **myBackendSubnet**.
+
+12. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
+
+13. Select **Create**.
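
The equivalent NAT gateway can be sketched in Azure PowerShell as follows; the subnet address prefix is an assumption carried over from the previous sketch.

```powershell
# Create a static Standard SKU public IP and the NAT gateway.
$natIp = New-AzPublicIpAddress -Name 'myPublicIP-nat' -ResourceGroupName 'TutorLBmultiAVS-rg' `
    -Location 'westus2' -Sku 'Standard' -AllocationMethod 'Static'

$natGw = New-AzNatGateway -Name 'myNATgateway' -ResourceGroupName 'TutorLBmultiAVS-rg' `
    -Location 'westus2' -Sku 'Standard' -IdleTimeoutInMinutes 15 -PublicIpAddress $natIp

# Associate the NAT gateway with the backend subnet and save the change.
$vnet = Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'TutorLBmultiAVS-rg'
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'myBackendSubnet' `
    -AddressPrefix '10.1.0.0/24' -NatGateway $natGw | Set-AzVirtualNetwork
```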
+
+## Create load balancer
+
+In this section, you'll create a load balancer for the virtual machines.
+
+1. In the search box at the top of the portal, enter **Load balancer**.
+
+2. Select **Load balancers** in the search results.
+
+3. Select **+ Create**.
+
+4. In the **Basics** tab of **Create load balancer**, enter, or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorLBmultiAVS-rg**. |
+ | **Instance details** | |
+ | Name | Enter **myLoadBalancer**. |
+ | Region | Select **(US) West US 2**. |
+ | Type | Leave the default of **Public**. |
+ | SKU | Leave the default of **Standard**. |
+ | Tier | Leave the default of **Regional**. |
+ | **Public IP address** | |
+ | Public IP address | Leave the default of **Create new**. |
+ | Public IP address name | Enter **myPublicIP-lb**. |
+ | Availability zone | Select **Zone-redundant**. |
+ | Add a public IPv6 address | Leave the default of **No**. |
+ | Routing preference | Leave the default of **Microsoft network**. |
+
+5. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
+
+6. Select **Create**.
+
+### Configure load balancer settings
+
+In this section, you'll create a backend pool for **myLoadBalancer**.
+
+You'll create a health probe to monitor **HTTP** and **Port 80**. The health probe will monitor the health of the virtual machines in the backend pool.
+
+You'll create a load-balancing rule for **Port 80** with outbound SNAT disabled. The NAT gateway you created earlier will handle the outbound connectivity of the virtual machines.
+
+1. In the search box at the top of the portal, enter **Load balancer**.
+
+2. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer**.
+
+4. In **myLoadBalancer**, select **Backend pools** in **Settings**.
+
+5. Select **+ Add** in **Backend pools**.
+
+6. In **Add backend pool**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myBackendPool**. |
+ | Virtual network | Select **myVNet**. |
+ | Backend Pool Configuration | Leave the default of **NIC**. |
+ | IP Version | Leave the default of **IPv4**. |
+
+7. Select **Add**.
+
+8. Select **Health probes**.
+
+9. Select **+ Add**.
+
+10. In **Add health probe**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myHTTPProbe**. |
+ | Protocol | Select **HTTP**. |
+ | Port | Leave the default of **80**. |
+ | Path | Leave the default of **/**. |
+ | Interval | Leave the default of **5** seconds. |
+ | Unhealthy threshold | Leave the default of **2** consecutive failures. |
+
+11. Select **Add**.
+
+12. Select **Load-balancing rules**.
+
+13. Select **+ Add**.
+
+14. Enter or select the following information in **Add load-balancing rule**:
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myHTTPRule**. |
+ | IP Version | Leave the default of **IPv4**. |
+ | Frontend IP address | Select **LoadBalancerFrontEnd**. |
+ | Protocol | Select the default of **TCP**. |
+ | Port | Enter **80**. |
+ | Backend port | Enter **80**. |
+ | Backend pool | Select **myBackendPool**. |
+ | Health probe | Select **myHTTPProbe**. |
+ | Session persistence | Leave the default of **None**. |
+ | Idle timeout (minutes) | Change the slider to **15**. |
+ | TCP reset | Select **Enabled**. |
+ | Floating IP | Leave the default of **Disabled**. |
+ | Outbound source network address translation (SNAT) | Leave the default of **(Recommended) Use outbound rules to provide backend pool members access to the internet.** |
+
+15. Select **Add**.
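
For reference, here's a hedged Az PowerShell sketch of the probe and rule configured above; it assumes the backend pool and frontend IP already exist on **myLoadBalancer** as created earlier.

```powershell
# Add the health probe and load-balancing rule to the existing load balancer.
$lb = Get-AzLoadBalancer -Name 'myLoadBalancer' -ResourceGroupName 'TutorLBmultiAVS-rg'

$lb = Add-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name 'myHTTPProbe' `
    -Protocol Http -Port 80 -RequestPath '/' -IntervalInSeconds 5 -ProbeCount 2

$lb = Add-AzLoadBalancerRuleConfig -LoadBalancer $lb -Name 'myHTTPRule' -Protocol Tcp `
    -FrontendPort 80 -BackendPort 80 -IdleTimeoutInMinutes 15 -EnableTcpReset -DisableOutboundSNAT `
    -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
    -BackendAddressPool $lb.BackendAddressPools[0] `
    -Probe ($lb.Probes | Where-Object Name -eq 'myHTTPProbe')

# Push the updated configuration.
Set-AzLoadBalancer -LoadBalancer $lb
```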
+
+## Create virtual machines
+
+In this section, you'll create two availability sets with two virtual machines per set. These machines will be added to the backend pool of the load balancer during creation.
+
+### Create first set of VMs
+
+1. Select **+ Create a resource** in the upper left-hand section of the portal.
+
+2. In **New**, select **Compute** > **Virtual machine**.
+
+3. In the **Basics** tab of **Create a virtual machine**, enter, or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription |
+ | Resource group | Select **TutorLBmultiAVS-rg**. |
+ | **Instance details** | |
+ | Virtual machine name | Enter **myVM1**. |
+ | Region | Select **(US) West US 2**. |
+ | Availability options | Select **Availability set**. |
+ | Availability set | Select **Create new**. </br> Enter **myAvailabilitySet1** in **Name**. </br> Select **OK**. |
+ | Image | Select **Windows Server 2019 Datacenter - Gen1**. |
+ | Azure Spot instance | Leave the default of unchecked. |
+ | Size | Select a size for the virtual machine. |
+ | **Administrator account** | |
+ | Username | Enter a username. |
+ | Password | Enter a password. |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None**. |
+
+4. Select the **Networking** tab, or select the **Next: Disks**, then **Next: Networking** button at the bottom of the page.
+
+5. In the **Networking** tab, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Network interface** | |
+ | Virtual network | Select **myVNet**. |
+ | Subnet | Select **myBackendSubnet**. |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **Advanced**. |
+ | Configure network security group | Select **Create new**. </br> In **Name**, enter **myNSG**. </br> Select **+Add an inbound rule** in **Inbound rules**. </br> Select **HTTP** for **Service**. </br> Enter **myHTTPrule** for **Name**. </br> Select **Add**. </br> Select **OK**. |
+ | **Load balancing** | |
+ | Place this virtual machine behind an existing load-balancing solution? | Select the check box. |
+ | **Load-balancing settings** | |
+ | Load-balancing options | Select **Azure load balancer**. |
+ | Select a load balancer | Select **myLoadBalancer**. |
+ | Select a backend pool | Select **myBackendPool**. |
+
+6. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
+
+7. Select **Create**.
+
+8. Repeat steps 1 through 7 to create the second virtual machine of the set. Replace the settings for the VM with the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myVM2**. |
+ | Availability set | Select **myAvailabilitySet1**. |
+ | Virtual Network | Select **myVNet**. |
+ | Subnet | Select **myBackendSubnet**. |
+ | Public IP | Select **None**. |
+ | Network security group | Select **myNSG**. |
+ | Load-balancing options | Select **Azure load balancer**. |
+ | Select a load balancer | Select **myLoadBalancer**. |
+ | Select a backend pool | Select **myBackendPool**. |
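
The same pair of resources can be sketched with Az PowerShell, shown here under a few assumptions: the VM size and fault/update domain counts are illustrative, and joining the backend pool (done by the portal during creation) would be an additional step.

```powershell
# Create the availability set, then a VM inside it on the existing network.
New-AzAvailabilitySet -ResourceGroupName 'TutorLBmultiAVS-rg' -Name 'myAvailabilitySet1' `
    -Location 'westus2' -Sku 'Aligned' -PlatformFaultDomainCount 2 -PlatformUpdateDomainCount 5

$cred = Get-Credential    # administrator username and password

New-AzVM -ResourceGroupName 'TutorLBmultiAVS-rg' -Name 'myVM1' -Location 'westus2' `
    -AvailabilitySetName 'myAvailabilitySet1' -VirtualNetworkName 'myVNet' `
    -SubnetName 'myBackendSubnet' -SecurityGroupName 'myNSG' `
    -Image 'Win2019Datacenter' -Size 'Standard_DS1_v2' -Credential $cred
```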
+
+### Create second set of VMs
+
+1. Select **+ Create a resource** in the upper left-hand section of the portal.
+
+2. In **New**, select **Compute** > **Virtual machine**.
+
+3. In the **Basics** tab of **Create a virtual machine**, enter, or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription |
+ | Resource group | Select **TutorLBmultiAVS-rg**. |
+ | **Instance details** | |
+ | Virtual machine name | Enter **myVM3**. |
+ | Region | Select **(US) West US 2**. |
+ | Availability options | Select **Availability set**. |
+ | Availability set | Select **Create new**. </br> Enter **myAvailabilitySet2** in **Name**. </br> Select **OK**. |
+ | Image | Select **Windows Server 2019 Datacenter - Gen1**. |
+ | Azure Spot instance | Leave the default of unchecked. |
+ | Size | Select a size for the virtual machine. |
+ | **Administrator account** | |
+ | Username | Enter a username. |
+ | Password | Enter a password. |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None**. |
+
+4. Select the **Networking** tab, or select the **Next: Disks**, then **Next: Networking** button at the bottom of the page.
+
+5. In the **Networking** tab, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Network interface** | |
+ | Virtual network | Select **myVNet**. |
+ | Subnet | Select **myBackendSubnet**. |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **Advanced**. |
+ | Configure network security group | Select **myNSG**. |
+ | **Load balancing** | |
+ | Place this virtual machine behind an existing load-balancing solution? | Select the check box. |
+ | **Load-balancing settings** | |
+ | Load-balancing options | Select **Azure load balancer**. |
+ | Select a load balancer | Select **myLoadBalancer**. |
+ | Select a backend pool | Select **myBackendPool**. |
+
+6. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
+
+7. Select **Create**.
+
+8. Repeat steps 1 through 7 to create the second virtual machine of the set. Replace the settings for the VM with the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myVM4**. |
+ | Availability set | Select **myAvailabilitySet2**. |
+ | Virtual Network | Select **myVNet**. |
+ | Network security group | Select **myNSG**. |
+ | Load-balancing options | Select **Azure load balancer**. |
+ | Select a load balancer | Select **myLoadBalancer**. |
+ | Select a backend pool | Select **myBackendPool**. |
+
+## Install IIS
+
+In this section, you'll use the Azure Bastion host you created previously to connect to the virtual machines and install IIS.
+
+1. In the search box at the top of the portal, enter **Virtual machine**.
+
+2. Select **Virtual machines** in the search results.
+
+3. Select **myVM1**.
+
+4. In the **Overview** page of myVM1, select **Connect** > **Bastion**.
+
+5. Select **Use Bastion**.
+
+6. Enter the **Username** and **Password** you created when you created the virtual machine.
+
+7. Select **Connect**.
+
+8. On the server desktop, navigate to **Windows Administrative Tools** > **Windows PowerShell**.
+
+9. In the PowerShell window, run the following commands to:
+
+ * Install the IIS server
+ * Remove the default iisstart.htm file
+ * Add a new iisstart.htm file that displays the name of the VM:
+
+ ```powershell
+ # Install IIS server role
+ Install-WindowsFeature -name Web-Server -IncludeManagementTools
+
+ # Remove default htm file
+ Remove-Item C:\inetpub\wwwroot\iisstart.htm
+
+ # Add a new htm file that displays server name
+ Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername)
+ ```
+10. Close the Bastion session with **myVM1**.
+
+11. Repeat steps 1 through 10 for **myVM2**, **myVM3**, and **myVM4**.
+
+## Test the load balancer
+
+In this section, you'll discover the public IP address of the load balancer. You'll use the IP address to test the operation of the load balancer.
+
+1. In the search box at the top of the portal, enter **Public IP**.
+
+2. Select **Public IP addresses** in the search results.
+
+3. Select **myPublicIP-lb**.
+
+4. Note the public IP address listed in **IP address** in the **Overview** page of **myPublicIP-lb**:
+
+ :::image type="content" source="./media/tutorial-multi-availability-sets-portal/find-public-ip.png" alt-text="Find the public IP address of the load balancer." border="true":::
+
+5. Open a web browser and enter the public IP address in the address bar:
+
+ :::image type="content" source="./media/tutorial-multi-availability-sets-portal/verify-load-balancer.png" alt-text="Test load balancer with web browser." border="true":::
+
+6. Select refresh in the browser to see the traffic balanced to the other virtual machines in the backend pool.
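
Alternatively, here's a short PowerShell check of the same behavior; the VM name returned in the page content should vary across requests.

```powershell
# Fetch the frontend IP, then request the page several times.
$ip = (Get-AzPublicIpAddress -Name 'myPublicIP-lb' -ResourceGroupName 'TutorLBmultiAVS-rg').IpAddress

1..5 | ForEach-Object { (Invoke-WebRequest -Uri "http://$ip" -UseBasicParsing).Content }
```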
+
+## Clean up resources
+
+If you're not going to continue to use this application, delete
+the load balancer and the supporting resources with the following steps:
+
+1. In the search box at the top of the portal, enter **Resource group**.
+
+2. Select **Resource groups** in the search results.
+
+3. Select **TutorLBmultiAVS-rg**.
+
+4. In the overview page of **TutorLBmultiAVS-rg**, select **Delete resource group**.
+
+5. Enter **TutorLBmultiAVS-rg** in **TYPE THE RESOURCE GROUP NAME**.
+
+6. Select **Delete**.
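
The same cleanup in a single PowerShell command:

```powershell
# Delete the resource group and everything in it without prompting.
Remove-AzResourceGroup -Name 'TutorLBmultiAVS-rg' -Force
```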
+
+## Next steps
+
+In this tutorial, you:
+
+* Created a virtual network and Azure Bastion host.
+* Created an Azure Standard Load Balancer.
+* Created two availability sets with two virtual machines per set.
+* Installed IIS and tested the load balancer.
+
+Advance to the next article to learn how to create a cross-region Azure Load Balancer:
+> [!div class="nextstepaction"]
+> [Create a cross-region load balancer](tutorial-cross-region-portal.md)
+
marketplace Gtm How To Get Featured https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/gtm-how-to-get-featured.md
Title: Go-To-Market Services - How to get featured in marketplace | Azure Marketplace
-description: Go-To-Market Services - This section describes how to get a listing featured in Azure Marketplace
+description: Go-To-Market Services - This section describes how to get a listing featured in Microsoft AppSource and Azure Marketplace
Previously updated : 04/16/2020 Last updated : 04/15/2021
-# How to get featured in AppSource and Azure Marketplace
+# How to get featured in Microsoft AppSource and Azure Marketplace
Azure Marketplace and AppSource have **featured apps** sections, where you can get your app featured:
-* First, if you have a TRIAL or TRANSACTION offer, you can use your "category promotion" benefit through [commercial marketplace benefits](gtm-your-marketplace-benefits.md).
+* First, if you have a TRIAL, CONSULTING, or TRANSACTION offer, you may be eligible for the **Commercial marketplace featured category placement benefit** through [commercial marketplace benefits](/azure/marketplace/gtm-your-marketplace-benefits). Once eligible, your commercial marketplace listing will automatically be featured in one of the categories found on the left-hand side of your listing. The category and timing of your featured placement will be based on availability. To make your marketplace listing even more robust, check out this video on [Best practices for optimizing your listing (microsoft.com)](https://partner.microsoft.com/asset/detail/best-practices-for-optimizing-your-listing-mp4).
* Second, review the list of best practices and criteria below to earn a spot. Microsoft's featured-apps selection algorithm generates a score for each app, much like a person's credit score in the US. The weekly selection of featured apps is based on a calculation of app and service performance.
-## Steps to take
+## Steps to improve your score
You can take the following action items to improve your score:

1. *Ensure that your app or service is appropriately categorized*: choose three categories that represent your app or service's capabilities.
2. *Azure Marketplace Apps: grow your Azure consumption month-over-month.* If you are able to achieve 1,000 hours of Azure usage a month, you will greatly increase your chances of being featured.
3. *AppSource Apps: increase the acquisitions coming to your offer.* If you are able to achieve 10 acquisitions per month, you will greatly increase your chances of being featured.
-4. *Achieve Co-Sell ready status*: complete the [requirements for co-sell ready](/legal/marketplace/certification-policies#3000-requirements-for-co-sell-status).
+4. *Achieve co-sell ready status*: complete the [requirements for co-sell ready](/legal/marketplace/certification-policies#3000-requirements-for-co-sell-status).
5. *Improve the quality of your offer*: see [content listing guidelines](marketplace-criteria-content-validation.md) for information on how to modify your offer.
6. Publish multiple offers in Marketplace: are all your core apps and services listed? Do you have a trial experience?
7. Encourage your customers to write reviews.
Learn more about your [commercial marketplace benefits](gtm-your-marketplace-benefits.md).
Sign in to [Partner Center](https://partner.microsoft.com/dashboard/account/v3/enrollment/introduction/partnership) to create and configure your offer.
marketplace Policies Terms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/policies-terms.md
Previously updated : 09/09/2020 Last updated : 04/16/2021

# Commercial marketplace policies and terms
->[!Note]
->"Microsoft commercial marketplace" means a Microsoft owned or operated platform, however named, through which offers may be presented to or acquired by customers. Unless otherwise specified, the commercial marketplace includes Microsoft AppSource and Azure Marketplace.
+The _Microsoft commercial marketplace_ is a Microsoft owned or operated platform, through which offers may be presented to or acquired by customers. Unless otherwise specified, the commercial marketplace includes Microsoft AppSource and Azure Marketplace.
Thank you for your interest in publishing offers on the commercial marketplace. We're committed to partnering with you to build a rich source of cloud solutions and a collection of business offers that both delight customers worldwide and help you build your business.
-Offers on the commercial marketplace must comply with our policies and terms. We update these from time to time to ensure a good customer experience and help our partners succeed. To leave feedback on our policies or terms, see [Microsoft AppSource and Azure Marketplace forum](https://www.microsoftpartnercommunity.com/t5/Azure-Marketplace-and-AppSource/bd-p/2222).
+Offers on the commercial marketplace must comply with our policies and terms. We update them from time to time to ensure a good customer experience and help our partners succeed. You can leave feedback on our policies or terms, in the [Microsoft AppSource and Azure Marketplace forum](https://www.microsoftpartnercommunity.com/t5/Azure-Marketplace-and-AppSource/bd-p/2222).
+
+## Publisher Agreement
+
+- [Microsoft Publisher Agreement](https://go.microsoft.com/fwlink/?LinkID=699560)
+- [Change history for Microsoft Publisher Agreement](https://go.microsoft.com/fwlink/?linkid=2159975&clcid=0x409)
+
+The Microsoft Publisher Agreement describes the relationship for publishing offers on the commercial marketplace. It governs your access and use of features on Partner Center related to publishing and listing offers in the commercial marketplace online stores.
## Policies and terms - [Commercial marketplace certification policies](/legal/marketplace/certification-policies?context=/azure/marketplace/context/context)
+- [Certification policies change history](/legal/marketplace/offer-policies-change-history)
- [Microsoft AppSource and Azure Marketplace review policies](/legal/marketplace/rating-review-policies?context=/azure/marketplace/context/context)
-- [Azure Marketplace terms](/legal/marketplace/terms?context=/azure/marketplace/context/context)

## Next steps
media-services Architecture Azure Ad Content Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/architecture-azure-ad-content-protection.md
if (tokenClaims != null && tokenClaims.Length > 0)
}
```
-The *groups* claim is a member of a [Restricted Claim Set](../../active-directory/develop/active-directory-claims-mapping.md#claim-sets) in Azure AD.
+The *groups* claim is a member of a [Restricted Claim Set](../../active-directory/develop/reference-claims-mapping-policy-type.md#claim-sets) in Azure AD.
#### Test
media-services Media Services Rest Deliver Streaming Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/media-services-rest-deliver-streaming-content.md
The [following](#types) section shows the enum types whose values are used in th
For information on how to connect to the AMS API, see [Access the Azure Media Services API with Azure AD authentication](media-services-use-aad-auth-to-access-ams-api.md). >[!NOTE]
->After successfully connecting to https://media.windows.net, you will receive a 301 redirect specifying another Media Services URI. You must make subsequent calls to the new URI.
+>After successfully connecting to `https://media.windows.net`, you will receive a 301 redirect specifying another Media Services URI. You must make subsequent calls to the new URI.
## Create an OnDemand streaming locator To create the OnDemand streaming locator and get URLs, you need to do the following:
media-services Language Identification Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/video-indexer/language-identification-model.md
Make sure to review the [Guidelines and limitations](#guidelines-and-limitations
## Choosing auto language identification on indexing
-When indexing or [re-indexing](https://api-portal.videoindexer.ai/docs/services/operations/operations/Re-Index-Video?) a video using the API, choose the `auto detect` option in the `sourceLanguage` parameter.
+When indexing or [re-indexing](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) a video using the API, choose the `auto detect` option in the `sourceLanguage` parameter.
When using the portal, go to your **Account videos** on the [Video Indexer](https://www.videoindexer.ai/) home page and hover over the name of the video that you want to re-index. In the bottom-right corner, select the re-index button. In the **Re-index video** dialog, choose *Auto detect* from the **Video source language** drop-down box.
media-services Logic Apps Connector Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/video-indexer/logic-apps-connector-tutorial.md
Last updated 09/21/2020
# Tutorial: Use Video Indexer with Logic App and Power Automate
-Azure Media Services [Video Indexer v2 REST API](https://api-portal.videoindexer.ai/docs/services/Operations/operations/Delete-Video?) supports both server-to-server and client-to-server communication and enables Video Indexer users to integrate video and audio insights easily into their application logic, unlocking new experiences and monetization opportunities.
+Azure Media Services [Video Indexer v2 REST API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Video) supports both server-to-server and client-to-server communication and enables Video Indexer users to integrate video and audio insights easily into their application logic, unlocking new experiences and monetization opportunities.
To make the integration even easier, we support [Logic Apps](https://azure.microsoft.com/services/logic-apps/) and [Power Automate](https://preview.flow.microsoft.com/connectors/shared_videoindexer-v2/video-indexer-v2/) connectors that are compatible with our API. You can use the connectors to set up custom workflows to effectively index and extract insights from a large amount of video and audio files, without writing a single line of code. Furthermore, using the connectors for your integration gives you better visibility into the health of your workflow and an easy way to debug it.
This tutorial showed just one Video Indexer connector example. You can use the
> [!div class="nextstepaction"] > [Use the Video Indexer API](video-indexer-use-apis.md)
-For additional resources, refer to this document on [video indexer.](/connectors/videoindexer-v2/)
+For additional resources, refer to this document on [video indexer](/connectors/videoindexer-v2/).
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-power-bi-tenant.md
To set up authentication, create a security group and add the Purview managed identity to it.
Now that you've given the Purview Managed Identity permissions to connect to the Admin API of your Power BI tenant, you can set up your scan from the Azure Purview Studio.
-1. Select the **Management Center** icon.
+1. Select **Sources** on the left navigation.
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/management-center.png" alt-text="Management center icon.":::
-
-1. Then select **+ New** on **Data sources**.
-
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/data-sources.png" alt-text="Image of new data source button":::
+1. Then select **Register**.
Select **Power BI** as your data source.
resource-mover Tutorial Move Region Encrypted Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/resource-mover/tutorial-move-region-encrypted-virtual-machines.md
Title: Move encrypted Azure VMs across regions with Azure Resource Mover
-description: Learn how to move encrypted Azure VMs to another region with Azure Resource Mover
+ Title: Move encrypted Azure VMs across regions by using Azure Resource Mover
+description: Learn how to move encrypted Azure VMs to another region by using Azure Resource Mover.
Last updated 02/10/2021
#Customer intent: As an Azure admin, I want to move Azure VMs to a different Azure region.
-
# Tutorial: Move encrypted Azure VMs across regions
-In this article, learn how to move encrypted Azure VMs to a different Azure region using [Azure Resource Mover](overview.md). Here's what we mean by encryption:
+This article discusses how to move encrypted Azure virtual machines (VMs) to a different Azure region by using [Azure Resource Mover](overview.md).
+
+Encrypted VMs can be described as either:
-- VMs that have disks with Azure disk encryption enabled. [Learn more](../virtual-machines/windows/disk-encryption-portal-quickstart.md)
-- Or, VMs that use customer-managed keys (CMKs) for encryption-at-rest (server-side encryption). [Learn more](../virtual-machines/disks-enable-customer-managed-keys-portal.md)
+- VMs that have disks with Azure Disk Encryption enabled. For more information, see [Create and encrypt a Windows virtual machine by using the Azure portal](../virtual-machines/windows/disk-encryption-portal-quickstart.md).
+- VMs that use customer-managed keys (CMKs) for encryption at rest, or server-side encryption. For more information, see [Use the Azure portal to enable server-side encryption with customer-managed keys for managed disks](../virtual-machines/disks-enable-customer-managed-keys-portal.md).
In this tutorial, you learn how to: > [!div class="checklist"] > * Check prerequisites.
-> * For VMs with Azure disk encryption enabled, copy keys and secrets from the source region key vault to the destination region key vault.
-> * Prepare VMs to move them, and select resources in the source region that you want to move.
+> * For VMs with Azure Disk Encryption enabled, copy keys and secrets from the source-region key vault to the destination-region key vault.
+> * Prepare to move VMs and to select resources in the source region that you want to move them from.
> * Resolve resource dependencies.
-> * For VMs with Azure disk encryption enabled, manually assign the destination key vault. For VMs using server-side encryption with customer-managed keys, manually assign a disk encryption set in the destination region.
-> * Move the key vault and/or disk encryption set.
+> * For VMs with Azure Disk Encryption enabled, manually assign the destination key vault. For VMs that use server-side encryption with customer-managed keys, manually assign a disk encryption set in the destination region.
+> * Move the key vault or disk encryption set.
> * Prepare and move the source resource group. > * Prepare and move the other resources.
-> * Decide whether you want to discard or commit the move.
-> * Optionally remove resources in the source region after the move.
+> * Decide whether to discard or commit the move.
+> * Optionally, remove resources in the source region after the move.
> [!NOTE]
-> Tutorials show the quickest path for trying out a scenario, and use default options.
+> This tutorial shows the quickest path for trying out a scenario. It uses only the default options.
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin. Then sign in to the [Azure portal](https://portal.azure.com).
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin. Then, sign in to the [Azure portal](https://portal.azure.com).
## Prerequisites
-**Requirement** |**Details**
+Requirement |Details
|
-**Subscription permissions** | Check you have *Owner* access on the subscription containing the resources that you want to move.<br/><br/> **Why do I need Owner access?** The first time you add a resource for a specific source and destination pair in an Azure subscription, Resource Mover creates a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) (formerly known as Managed Service Identify (MSI)) that's trusted by the subscription. To create the identity, and to assign it the required role (Contributor and User Access administrator in the source subscription), the account you use to add resources needs *Owner* permissions on the subscription. [Learn more](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles) about Azure roles.
-**VM support** | Check that the VMs you want to move are supported.<br/><br/> - [Verify](support-matrix-move-region-azure-vm.md#windows-vm-support) supported Windows VMs.<br/><br/> - [Verify](support-matrix-move-region-azure-vm.md#linux-vm-support) supported Linux VMs and kernel versions.<br/><br/> - Check supported [compute](support-matrix-move-region-azure-vm.md#supported-vm-compute-settings), [storage](support-matrix-move-region-azure-vm.md#supported-vm-storage-settings), and [networking](support-matrix-move-region-azure-vm.md#supported-vm-networking-settings) settings.
-**Key vault requirements (Azure disk encryption)** | If you have Azure disk encryption enabled for VMs, in addition to the key vault in the source region, you need a key vault in the destination region. [Create a key vault](../key-vault/general/quick-create-portal.md).<br/><br/> For the key vaults in the source and target region, you need these permissions:<br/><br/> - Key permissions: Key Management Operations (Get, List); Cryptographic Operations (Decrypt and Encrypt).<br/><br/> - Secret permissions: Secret Management Operations (Get, List and Set)<br/><br/> - Certificate (List and Get).
-**Disk encryption set (server-side encryption with CMK)** | If you're using VMs with server-side encryption using a CMK, in addition to the disk encryption set in the source region, you need a disk encryption set in the destination region. [Create a disk encryption set](../virtual-machines/disks-enable-customer-managed-keys-portal.md#set-up-your-disk-encryption-set).<br/><br/> Moving between regions isn't supported if you're using HSM keys for customer-managed keys.
+**Subscription permissions** | Check to ensure that you have *Owner* access on the subscription that contains the resources you want to move.<br/><br/> *Why do I need Owner access?* The first time you add a resource for a specific source and destination pair in an Azure subscription, Resource Mover creates a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types), formerly known as the Managed Service Identity (MSI). This identity is trusted by the subscription. Before you can create the identity and assign it the required roles (*Contributor* and *User access administrator* in the source subscription), the account you use to add resources needs *Owner* permissions in the subscription. For more information, see [Classic subscription administrator roles, Azure roles, and Azure AD roles](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles).
+**VM support** | Check to ensure that the VMs you want to move are supported by doing the following:<li>[Verify](support-matrix-move-region-azure-vm.md#windows-vm-support) supported Windows VMs.<li>[Verify](support-matrix-move-region-azure-vm.md#linux-vm-support) supported Linux VMs and kernel versions.<li>Check supported [compute](support-matrix-move-region-azure-vm.md#supported-vm-compute-settings), [storage](support-matrix-move-region-azure-vm.md#supported-vm-storage-settings), and [networking](support-matrix-move-region-azure-vm.md#supported-vm-networking-settings) settings.
+**Key vault requirements (Azure Disk Encryption)** | If you have Azure Disk Encryption enabled for VMs, you need a key vault in both the source and destination regions. For more information, see [Create a key vault](../key-vault/general/quick-create-portal.md).<br/><br/> For the key vaults in the source and destination regions, you need these permissions:<li>Key permissions: Key Management Operations (Get, List) and Cryptographic Operations (Decrypt and Encrypt)<li>Secret permissions: Secret Management Operations (Get, List, and Set)<li>Certificate (List and Get)
+**Disk encryption set (server-side encryption with CMK)** | If you're using VMs with server-side encryption that uses a CMK, you need a disk encryption set in both the source and destination regions. For more information, see [Create a disk encryption set](../virtual-machines/disks-enable-customer-managed-keys-portal.md#set-up-your-disk-encryption-set).<br/><br/> Moving between regions isn't supported if you're using hardware security module (HSM keys) for customer-managed keys.
**Target region quota** | The subscription needs enough quota to create the resources you're moving in the target region. If it doesn't have quota, [request additional limits](../azure-resource-manager/management/azure-subscription-service-limits.md).
-**Target region charges** | Verify pricing and charges associated with the target region to which you're moving VMs. Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to help you.
+**Target region charges** | Verify the pricing and charges that are associated with the target region to which you're moving the VMs. Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/).
-## Verify user permissions on key vault for VMS using Azure Disk Encryption (ADE)
+## Verify permissions in the key vault
-If you're moving VMs that have Azure disk encryption enabled, you need to run a script as mentioned [below](#copy-the-keys-to-the-destination-key-vault) for which the user executing the script should have appropriate permissions. Please refer to below table to know about permissions needed. The options to change the permissions can be found by navigating to the key vault in the Azure portal, Under **Settings**, select **Access policies**.
+If you're moving VMs that have Azure Disk Encryption enabled, you need to run a script as mentioned in the [Copy the keys to the destination key vault](#copy-the-keys-to-the-destination-key-vault) section. The users who execute the script should have appropriate permissions to do so. To understand which permissions are needed, refer to the following table. You'll find the options for changing the permissions by going to the key vault in the Azure portal. Under **Settings**, select **Access policies**.
-If there are no user permissions, select **Add Access Policy**, and specify the permissions. If the user account already has a policy, under **User**, set the permissions as per the table below.
+If the user permissions aren't in place, select **Add Access Policy**, and then specify the permissions. If the user account already has a policy, under **User**, set the permissions according to the instructions in the following table.
-Azure VMs using ADE can have the following variations and the permissions need to be set accordingly for relevant components.
-- Default option where the disk is encrypted using only secrets
-- Added security using [key encryption key](../virtual-machines/windows/disk-encryption-key-vault.md#set-up-a-key-encryption-key-kek)
+Azure VMs that use Azure Disk Encryption can have the following variations, and you'll need to set the permissions according to their relevant components. The VMs might have:
+- A default option where the disk is encrypted with secrets only.
+- Added security that uses a [Key Encryption Key (KEK)](../virtual-machines/windows/disk-encryption-key-vault.md#set-up-a-key-encryption-key-kek).
-### Source region keyvault
+### Source region key vault
-The below permissions need to be set for the user executing the script
+For users who execute the script, set permissions for the following components:
-**Component** | **Permission needed**
+Component | Permissions needed
|
-Secrets| Get permission <br> </br> In **Secret permissions**> **Secret Management Operations**, select **Get**
-Keys <br> </br> If you are using Key encryption key (KEK) you need this permission in addition to secrets| Get and Decrypt permission <br> </br> In **Key Permissions** > **Key Management Operations**, select **Get**. In **Cryptographic Operations**, select **Decrypt**.
+Secrets | *Get* <br></br> Select **Secret permissions** > **Secret Management Operations**, and then select **Get**.
+Keys <br></br> If you're using a KEK, you need these permissions in addition to the permissions for secrets. | *Get* and *Decrypt* <br></br> Select **Key Permissions** > **Key Management Operations**, and then select **Get**. In **Cryptographic Operations**, select **Decrypt**.
-### Destination region keyvault
+### Destination region key vault
In **Access policies**, make sure that **Azure Disk Encryption for volume encryption** is enabled.
-The below permissions need to be set for the user executing the script
+For users who execute the script, set permissions for the following components:
-**Component** | **Permission needed**
+Component | Permissions needed
|
-Secrets| Set permission <br> </br> In **Secret permissions**> **Secret Management Operations**, select **Set**
-Keys <br> </br> If you are using Key encryption key (KEK) you need this permission in addition to secrets| Get, Create and Encrypt permission <br> </br> In **Key Permissions** > **Key Management Operations**, select **Get** and **Create** . In **Cryptographic Operations**, select **Encrypt**.
+Secrets | *Set* <br></br> Select **Secret permissions** > **Secret Management Operations**, and then select **Set**.
+Keys <br></br> If you're using a KEK, you need these permissions in addition to the permissions for secrets. | *Get*, *Create*, and *Encrypt* <br></br> Select **Key Permissions** > **Key Management Operations**, and then select **Get** and **Create**. In **Cryptographic Operations**, select **Encrypt**.
-In addition to the the above permissions, in the destination key vault you need to add permissions for the [Managed System Identity](./common-questions.md#how-is-managed-identity-used-in-resource-mover) that Resource Mover uses for accessing the Azure resources on your behalf.
+<br>
-1. Under **Settings**, select **Add Access policies**.
-2. In **Select principal**, search for the MSI. The MSI name is ```movecollection-<sourceregion>-<target-region>-<metadata-region>```.
-3. Add the below permissions for the MSI
+In addition to the preceding permissions, in the destination key vault, you need to add permissions for the [Managed System Identity](./common-questions.md#how-is-managed-identity-used-in-resource-mover) that Resource Mover uses to access the Azure resources on your behalf.
-**Component** | **Permission needed**
- |
-Secrets| Get and List permission <br> </br> In **Secret permissions**> **Secret Management Operations**, select **Get** and **List**
-Keys <br> </br> If you are using Key encryption key (KEK) you need this permission in addition to secrets| Get, List permission <br> </br> In **Key Permissions** > **Key Management Operations**, select **Get** and **List**
+1. Under **Settings**, select **Add Access policies**.
+1. In **Select principal**, search for the MSI. The MSI name is ```movecollection-<sourceregion>-<target-region>-<metadata-region>```.
+1. For the MSI, add the following permissions:
+ Component | Permissions needed
+ |
+ Secrets| *Get* and *List* <br></br> Select **Secret permissions** > **Secret Management Operations**, and then select **Get** and **List**.
+ Keys <br></br> If you're using a KEK, you need these permissions in addition to the permissions for secrets. | *Get* and *List* <br></br> Select **Key Permissions** > **Key Management Operations**, and then select **Get** and **List**.
+<br>
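As a scripted alternative to the portal steps above, here's a hedged sketch of granting the Resource Mover MSI the *Get* and *List* permissions on the destination vault; the vault name and MSI display name are placeholders.

```powershell
# Look up the Resource Mover MSI, then grant it Get/List on secrets and keys.
$msi = Get-AzADServicePrincipal -DisplayName 'movecollection-<sourceregion>-<target-region>-<metadata-region>'

Set-AzKeyVaultAccessPolicy -VaultName '<destination-key-vault>' -ObjectId $msi.Id `
    -PermissionsToSecrets get,list -PermissionsToKeys get,list
```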
### Copy the keys to the destination key vault
-You need to copy the encryption secrets and keys from the source key vault to the destination key vault, using a script we provide.
+Copy the encryption secrets and keys from the source key vault to the destination key vault by using a [script](https://raw.githubusercontent.com/AsrOneSdk/published-scripts/master/CopyKeys/CopyKeys.ps1) that we provide.
-- You run the script in PowerShell. We recommend running the latest PowerShell version.
+- Run the script in PowerShell. We recommend that you use the latest PowerShell version.
- Specifically, the script requires these modules:
  - Az.Compute
- - Az.KeyVault (version 3.0.0
+ - Az.KeyVault (version 3.0.0)
- Az.Accounts (version 2.2.3)
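
One way to put the required module versions in place before running the script, pinning the versions listed above:

```powershell
# Install the modules the CopyKeys script depends on.
Install-Module -Name Az.Compute -Scope CurrentUser
Install-Module -Name Az.KeyVault -RequiredVersion 3.0.0 -Scope CurrentUser
Install-Module -Name Az.Accounts -RequiredVersion 2.2.3 -Scope CurrentUser
```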
-Run as follows:
-
-1. Navigate to the [script](https://raw.githubusercontent.com/AsrOneSdk/published-scripts/master/CopyKeys/CopyKeys.ps1) in GitHub.
-2. Copy the contents of the script to a local file, and name it *Copy-keys.ps1*.
-3. Run the script.
-4. Sign into Azure.
-5. In the **User Input** pop-up, select the source subscription, resource group, and source VM. Then select the target location, and the target vaults for disk and key encryption.
+To run the script, do the following:
- :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/script-input.png" alt-text="Pop up to input script values." :::
+1. Open the [script](https://raw.githubusercontent.com/AsrOneSdk/published-scripts/master/CopyKeys/CopyKeys.ps1) in GitHub.
+1. Copy the contents of the script to a local file, and name it *Copy-keys.ps1*.
+1. Run the script.
+1. Sign in to the Azure portal.
+1. In the drop-down lists in the **User Inputs** window, select the source subscription, resource group, and source VM, and then select the target location and the target vaults for disk and key encryption.
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/script-input.png" alt-text="Screenshot of the 'User Inputs' window for entering the script values." :::
-6. When the script completes, screen output indicates that CopyKeys succeeded.
+1. Select the **Select** button.
+
+ When the script has finished running, a message notifies you that CopyKeys has succeeded.
## Prepare VMs
-1. After [checking that VMs meet requirements](#prerequisites), make sure that VMs you want to move are turned on. All VMs disks that you want to be available in the destination region must be attached and initialized in the VM.
-3. Check that VMs have the latest trusted root certificates, and an updated certificate revocation list (CRL). To do this:
+1. After you've checked to ensure that the VMs satisfy the [prerequisites](#prerequisites), make sure that the VMs you want to move are turned on. All VM disks that you want to be available in the destination region must be attached and initialized in the VM.
+1. To ensure that the VMs have the latest trusted root certificates and an updated certificate revocation list (CRL), do the following:
- On Windows VMs, install the latest Windows updates.
- - On Linux VMs, follow distributor guidance so that machines have the latest certificates and CRL.
-4. Allow outbound connectivity from VMs as follows:
- - If you're using a URL-based firewall proxy to control outbound connectivity, allow access to these [URLs](support-matrix-move-region-azure-vm.md#url-access)
+ - On Linux VMs, follow distributor guidance so that the machines have the latest certificates and CRL.
+1. To allow outbound connectivity from the VMs, do either of the following:
+ - If you're using a URL-based firewall proxy to control outbound connectivity, [allow access to the URLs](support-matrix-move-region-azure-vm.md#url-access).
- If you're using network security group (NSG) rules to control outbound connectivity, create these [service tag rules](support-matrix-move-region-azure-vm.md#nsg-rules).
-## Select resources to move
-
+## Select the resources to move
- You can select any supported resource type in any of the resource groups in the source region you select.
-- You move resources to a target region that's in the same subscription as the source region. If you want to change the subscription, you can do that after the resources are moved.
+- You can move resources to a target region that's in the same subscription as the source region. If you want to change the subscription, you can do so after the resources are moved.
-Select resources as follows:
+To select the resources, do the following:
-1. In the Azure portal, search for *resource mover*. Then, under **Services**, select **Azure Resource Mover**.
+1. In the Azure portal, search for **resource mover**. Then, under **Services**, select **Azure Resource Mover**.
- :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/search.png" alt-text="Search results for resource mover in the Azure portal." :::
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/search.png" alt-text="Screenshot of search results for Azure Resource Mover in the Azure portal." :::
-2. In **Overview**, click **Move across regions**.
+1. On the Azure Resource Mover **Overview** pane, select **Move across regions**.
- :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/move-across-regions.png" alt-text="Button to add resources to move to another region." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/move-across-regions.png":::
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/move-across-regions.png" alt-text="Screenshot of the 'Move across regions' button for adding resources to move to another region." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/move-across-regions.png":::
-3. In **Move resources** > **Source + destination**, select the source subscription and region.
-4. In **Destination**, select the region to which you want to move the VMs. Then click **Next**.
+1. On the **Move resources** pane, select the **Source + destination** tab. Then, in the drop-down lists, select the source subscription and region.
:::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/source-target.png" alt-text="Page to select source and destination region.." :::
-5. In **Resources to move**, click **Select resources**.
+1. Under **Destination**, select the region where you want to move the VMs, and then select **Next**.
+
+1. Select the **Resources to move** tab, and then select **Select resources**.
- :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/select-resources.png" alt-text="Button to select resource to move.]." :::
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/select-resources.png" alt-text="Screenshot of the 'Move resources' pane and 'Select resources' button.]." :::
-6. In **Select resources**, select the VMs. You can only add resources that are [supported for move](#prepare-vms). Then click **Done**.
+1. On the **Select resources** pane, select the VMs you want to move. As mentioned in the [Select the resources to move](#select-the-resources-to-move) section, you can add only resources that are supported for a move.
- :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/select-vm.png" alt-text="Page to select VMs to move." :::
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/select-vm.png" alt-text="Screenshot of the 'Select resources' pane for selecting VMs to move." :::
> [!NOTE]
- > In this tutorial we're selecting a VM that uses server-side encryption (rayne-vm) with a customer-managed key, and a VM with disk encryption enabled (rayne-vm-ade).
+ > In this tutorial, you're selecting a VM that uses server-side encryption (rayne-vm) with a customer-managed key, and a VM with disk encryption enabled (rayne-vm-ade).
-7. In **Resources to move**, click **Next**.
-8. In **Review**, check the source and destination settings.
+1. Select **Done**.
+1. Select the **Resources to move** tab, and then select **Next**.
+1. Select the **Review** tab, and then check the source and destination settings.
- :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/review.png" alt-text="Page to review settings and proceed with move." :::
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/review.png" alt-text="Screenshot of the pane for reviewing source and destination settings." :::
-9. Click **Proceed**, to begin adding the resources.
-10. Select the notifications icon to track progress. After the add process finishes successfully, select **Added resources for move** in the notifications.
+1. Select **Proceed** to begin adding the resources.
+1. Select the notifications icon to track progress. After the process finishes successfully, on the **Notifications** pane, select **Added resources for move**.
- :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/added-resources-notification.png" alt-text="Notification to confirm resources were added successfully." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/added-resources-notification.png":::
-
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/added-resources-notification.png" alt-text="Screenshot of the 'Notifications' pane for confirming that resources were added successfully." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/added-resources-notification.png":::
-11. After clicking the notification, review the resources on the **Across regions** page.
+1. After you select the notification, review the resources on the **Across regions** page.
- :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/resources-prepare-pending.png" alt-text="Pages showing added resources with prepare pending." :::
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/resources-prepare-pending.png" alt-text="Screenshot of added resources with a 'Prepare pending' status." :::
> [!NOTE]
-> - Resources you add are placed into a *Prepare pending* state.
+> - The resources you add are placed into a *Prepare pending* state.
> - The resource group for the VMs is added automatically.
-> - If you modify the **Destination configuration** entries to use a resource that already exists in the destination region, the resource state is set to *Commit pending*, since you don't need to initiate a move for it.
-> - If you want to remove a resource that's been added, the method for doing that depends on where you are in the move process. [Learn more](remove-move-resources.md).
+> - If you modify the **Destination configuration** entries to use a resource that already exists in the destination region, the resource state is set to *Commit pending*, because you don't need to initiate a move for it.
+> - If you want to remove a resource that's been added, the method you'll use depends on where you are in the move process. For more information, see [Manage move collections and resource groups](remove-move-resources.md).
## Resolve dependencies

1. If any resources show a *Validate dependencies* message in the **Issues** column, select the **Validate dependencies** button.
- :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/check-dependencies.png" alt-text="NButton to check dependencies." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/check-dependencies.png":::
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/check-dependencies.png" alt-text="Screenshot showing the 'Validate dependencies' button." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/check-dependencies.png":::
The validation process begins.
-2. If dependencies are found, click **Add dependencies**
+1. If dependencies are found, select **Add dependencies**.
- :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/add-dependencies.png" alt-text="Button to add dependencies." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/add-dependencies.png":::
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/add-dependencies.png" alt-text="Screenshot of the 'Add dependencies' button." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/add-dependencies.png":::
-3. In **Add dependencies**, leave the default **Show all dependencies** option.
+1. On the **Add dependencies** pane, leave the default **Show all dependencies** option.
- - **Show all dependencies** iterates through all of the direct and indirect dependencies for a resource. For example, for a VM it shows the NIC, virtual network, network security groups (NSGs) etc.
- - **Show first level dependencies only** shows only direct dependencies. For example, for a VM it shows the NIC, but not the virtual network.
+ - **Show all dependencies** iterates through all the direct and indirect dependencies for a resource. For example, for a VM, it shows the NIC, virtual network, network security groups (NSGs), and so on.
+ - **Show first-level dependencies only** shows only direct dependencies. For example, for a VM it shows the NIC but not the virtual network.
-4. Select the dependent resources you want to add > **Add dependencies**.
+1. Select the dependent resources you want to add, and then select **Add dependencies**.
- :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/select-dependencies.png" alt-text="Select dependencies from dependencies list." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/select-dependencies.png":::
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/select-dependencies.png" alt-text="Screenshot of the dependencies list and the 'Add dependencies' button." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/select-dependencies.png":::
-5. Validate dependencies again.
+1. Validate the dependencies again.
- :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/validate-again.png" alt-text="Page to validate again." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/validate-again.png":::
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/validate-again.png" alt-text="Screenshot of the pane for revalidating the dependencies." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/validate-again.png":::
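If you prefer to script this step, the Az.ResourceMover PowerShell module exposes equivalent operations. A hedged sketch; the resource group and move collection names below are placeholders that follow the `RegionMoveRG-<sourceregion>-<target-region>` and `movecollection-<sourceregion>-<target-region>` patterns described later in this tutorial:

```powershell
# List dependencies that the move collection hasn't resolved yet.
Get-AzResourceMoverUnresolvedDependency `
    -ResourceGroupName "RegionMoveRG-eastus-westus" `
    -MoveCollectionName "movecollection-eastus-westus"

# Ask Resource Mover to compute and add the missing dependencies.
Resolve-AzResourceMoverMoveCollectionDependency `
    -ResourceGroupName "RegionMoveRG-eastus-westus" `
    -MoveCollectionName "movecollection-eastus-westus"
```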
## Assign destination resources
-Destination resources associated with encryption need manual assignment.
+You need to manually assign destination resources that are associated with encryption.
-- If you're moving a VM that's has Azure disk encryption (ADE), the key vault in your destination region will appear as a dependency.
-- If you're moving a VM that has server-side encryption that uses custom-managed keys (CMKs), then the disk encryption set in the destination region appears as a dependency.
-- Since this tutorial is moving a VM with ADE enabled, and a VM using a CMK, both the destination key vault and disk encryption set show up as dependencies.
+- If you're moving a VM that has Azure Disk Encryption enabled, the key vault in your destination region appears as a dependency.
+- If you're moving a VM with server-side encryption that uses CMKs, the disk encryption set in the destination region appears as a dependency.
+- Because this tutorial demonstrates moving a VM that has Azure Disk Encryption enabled and that uses a CMK, both the destination key vault and the disk encryption set show up as dependencies.
-Assign manually as follows:
+To assign the destination resources manually, do the following:
1. In the disk encryption set entry, select **Resource not assigned** in the **Destination configuration** column.
-2. In **Configuration settings**, select the destination disk encryption set. Then select **Save changes**.
-3. You can select to save and validate dependencies for the resource you're modifying, or you can just save the changes, and validate everything you modify in one go.
+1. In **Configuration settings**, select the destination disk encryption set, and then select **Save changes**.
+1. You can save and validate dependencies for the resource you're modifying, or you can save only the changes, and then validate everything you modify at the same time.
- :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/select-destination-set.png" alt-text="Page to select disk encryption set in destination region." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/select-destination-set.png":::
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/select-destination-set.png" alt-text="Screenshot of the 'Destination configuration' pane for saving changes in the destination region." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/select-destination-set.png":::
- After adding the destination resource, the status of the disk encryption set turns to *Commit move pending*.
-3. In the key vault entry, select **Resource not assigned** in the **Destination configuration** column. **Configuration settings**, select the destination key vault. Save the changes.
+ After you've added the destination resource, the status of the disk encryption set is changed to *Commit move pending*.
-At this stage both the disk encryption set and the key vault status turns to *Commit move pending*.
+1. In the key vault entry, select **Resource not assigned** in the **Destination configuration** column. Under **Configuration settings**, select the destination key vault, and then save your changes.
+At this stage, the disk encryption set and key vault statuses are changed to *Commit move pending*.
-To commit and finish the move process for encryption resources.
-1. In **Across regions**, select the resource (disk encryption set or key vault) > **Commit move**.
-2. ln **Move Resources**, click **Commit**.
+To commit and finish the move process for encryption resources, do the following:
+
+1. In **Across regions**, select the resource (disk encryption set or key vault), and then select **Commit move**.
+1. In **Move Resources**, select **Commit**.
> [!NOTE]
-> After committing the move, the resource is in a *Delete source pending* state.
+> After you've committed the move, the resource status changes to *Delete source pending*.
## Move the source resource group
Before you can prepare and move VMs, the VM resource group must be present in th
### Prepare to move the source resource group
-During the Prepare process, Resource Mover generates Azure Resource Manager (ARM) templates using the resource group settings. Resources inside the resource group aren't affected.
+During the preparation process, Resource Mover generates Azure Resource Manager (ARM) templates from the resource group settings. The resources inside the resource group are unaffected.
-Prepare as follows:
+To prepare to move the source resource group, do the following:
-1. In **Across regions**, select the source resource group > **Prepare**.
+1. In **Across regions**, select the source resource group, and then select **Prepare**.
- :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/prepare-resource-group.png" alt-text="Prepare resource group." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/prepare-resource-group.png":::
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/prepare-resource-group.png" alt-text="Screenshot of the 'Prepare' button on the 'Prepare resources' pane." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/prepare-resource-group.png":::
-2. In **Prepare resources**, click **Prepare**.
+1. In **Prepare resources**, select **Prepare**.
> [!NOTE]
-> After preparing the resource group, it's in the *Initiate move pending* state.
+> After you've prepared the move, the resource group status changes to *Initiate move pending*.
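The prepare step can also be scripted with the Az.ResourceMover module. A hedged sketch; the collection and move resource names are placeholders:

```powershell
# Prepare the source resource group (generates its ARM template).
Invoke-AzResourceMoverPrepare `
    -ResourceGroupName "RegionMoveRG-eastus-westus" `
    -MoveCollectionName "movecollection-eastus-westus" `
    -MoveResource @("demo-source-rg")
```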
### Move the source resource group
-Initiate the move as follows:
+Begin moving the source resource group by doing the following:
-1. In **Across regions**, select the resource group > **Initiate Move**
+1. On the **Across regions** pane, select the resource group, and then select **Initiate move**.
- :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/initiate-move-resource-group.png" alt-text="Button to initiate move." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/initiate-move-resource-group.png":::
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/initiate-move-resource-group.png" alt-text="Screenshot of the 'Initiate move' button on the 'Across regions' pane." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/initiate-move-resource-group.png":::
-2. ln **Move Resources**, click **Initiate move**. The resource group moves into an *Initiate move in progress* state.
-3. After initiating the move, the target resource group is created, based on the generated ARM template. The source resource group moves into a *Commit move pending* state.
+1. On the **Move resources** pane, select **Initiate move**. The resource group status changes to *Initiate move in progress*.
+1. After you initiate the move, the target resource group is created, based on the generated ARM template. The source resource group status changes to *Commit move pending*.
- :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/resource-group-commit-move-pending.png" alt-text="Review the commit move pending state." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/resource-group-commit-move-pending.png":::
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/resource-group-commit-move-pending.png" alt-text="Screenshot of the 'Move resources' pane showing the resource group status changed to 'Commit move pending'." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/resource-group-commit-move-pending.png":::
-To commit and finish the move process:
+To commit the move and finish the process, do the following:
-1. In **Across regions**, select the resource group > **Commit move**.
-2. ln **Move Resources**, click **Commit**.
+1. On the **Across regions** pane, select the resource group, and then select **Commit move**.
+1. On the **Move Resources** pane, select **Commit**.
> [!NOTE]
-> After committing the move, the source resource group is in a *Delete source pending* state.
+> After you've committed the move, the source resource group status changes to *Delete source pending*.
## Prepare resources to move
-Now that the encryption resources and the source resource group are moved, you can prepare to move other resources that are in the *Prepare pending* state.
+Now that the encryption resources and the source resource group are moved, you can prepare to move other resources whose current status is *Prepare pending*.
-1. In **Across regions**, validate again and resolve any issues.
-2. If you want to edit target settings before beginning the move, select the link in the **Destination configuration** column for the resource, and edit the settings. If you edit the target VM settings, the target VM size shouldn't be smaller than the source VM size.
-3. Select **Prepare** for resources in the *Prepare pending* state that you want to move.
-3. In **Prepare resources**, select **Prepare**
+1. On the **Across regions** pane, validate the move again and resolve any issues.
+1. If you want to edit the target settings before you begin the move, select the link in the **Destination configuration** column for the resource, and then edit the settings. If you edit the target VM settings, the target VM size shouldn't be smaller than the source VM size.
+1. For resources with a *Prepare pending* status that you want to move, select **Prepare**.
+1. On the **Prepare resources** pane, select **Prepare**.
- - During the prepare process, the Azure Site Recovery Mobility agent is installed on VMs, to replicate them.
- - VM data is replicated periodically to the target region. This doesn't affect the source VM.
+ - During the preparation, the Azure Site Recovery mobility agent is installed on the VMs to replicate them.
+ - The VM data is replicated periodically to the target region. This doesn't affect the source VM.
 - Resource Mover generates ARM templates for the other source resources.
-After preparing resources, they're in an *Initiate move pending* state.
+> [!NOTE]
+> After you've prepared the resources, their status changes to *Initiate move pending*.
## Initiate the move
-With resources prepared, you can now initiate the move.
+Now that you've prepared the resources, you can initiate the move.
-1. In **Across regions**, select resources with state *Initiate move pending*. Then click **Initiate move**.
-2. In **Move resources**, click **Initiate move**.
-3. Track move progress in the notifications bar.
+1. On the **Across regions** pane, select the resources whose status is *Initiate move pending*, and then select **Initiate move**.
+1. On the **Move resources** pane, select **Initiate move**.
+1. Track the progress of the move in the notifications bar.
- For VMs, replica VMs are created in the target region. The source VM is shut down, and some downtime occurs (usually minutes).
- - Resource Mover recreates other resources using the ARM templates that were prepared. There's usually no downtime.
- - After moving resources, they're in an *Commit move pending* state.
+ - Resource Mover re-creates other resources by using the prepared ARM templates. There's usually no downtime.
+ - After you've moved the resources, their status changes to *Commit move pending*.
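For a scripted equivalent of this step, a hedged sketch using the Az.ResourceMover module; `-MoveResource` takes the names of the prepared resources, and all names below are placeholders:

```powershell
# Initiate the move for the prepared resources.
Invoke-AzResourceMoverInitiateMove `
    -ResourceGroupName "RegionMoveRG-eastus-westus" `
    -MoveCollectionName "movecollection-eastus-westus" `
    -MoveResource @("demo-source-vm")
```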
## Discard or commit?
-After the initial move, you can decide whether you want to commit the move, or to discard it.
+After the initial move, you can decide whether to commit the move or discard it.
-- **Discard**: You might discard a move if you're testing, and you don't want to actually move the source resource. Discarding the move returns the resource to a state of *Initiate move pending*.
-- **Commit**: Commit completes the move to the target region. After committing, a source resource will be in a state of *Delete source pending*, and you can decide if you want to delete it.
+- **Discard**: You might discard a move if you're testing it and don't want to actually move the source resource. Discarding the move returns the resource to *Initiate move pending* status.
+- **Commit**: Commit completes the move to the target region. After you've committed a source resource, its status changes to *Delete source pending*, and you can decide whether you want to delete it.
## Discard the move
-You can discard the move as follows:
+To discard the move, do the following:
-1. In **Across regions**, select resources with state *Commit move pending*, and click **Discard move**.
-2. In **Discard move**, click **Discard**.
-3. Track move progress in the notifications bar.
+1. On the **Across regions** pane, select resources whose status is *Commit move pending*, and then select **Discard move**.
+1. On the **Discard move** pane, select **Discard**.
+1. Track the progress of the move in the notifications bar.
> [!NOTE]
-> After discarding resources, VMs are in an *Initiate move pending* state.
+> After you've discarded the resources, the VM statuses change to *Initiate move pending*.
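A discard can likewise be issued from PowerShell. A hedged sketch with placeholder names:

```powershell
# Discard the move; the resources return to "Initiate move pending".
Invoke-AzResourceMoverDiscard `
    -ResourceGroupName "RegionMoveRG-eastus-westus" `
    -MoveCollectionName "movecollection-eastus-westus" `
    -MoveResource @("demo-source-vm")
```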
## Commit the move
-If you want to complete the move process, commit the move.
+To complete the move process, you commit the move by doing the following:
-1. In **Across regions**, select resources with state *Commit move pending*, and click **Commit move**.
-2. In **Commit resources**, click **Commit**.
+1. On the **Across regions** pane, select resources whose status is *Commit move pending*, and then select **Commit move**.
+1. On the **Commit resources** pane, select **Commit**.
- :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/resources-commit-move.png" alt-text="Page to commit resources to finalize move." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/resources-commit-move.png" :::
+ :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/resources-commit-move.png" alt-text="Screenshot of a list of resources to commit resources to finalize the move." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/resources-commit-move.png" :::
-3. Track the commit progress in the notifications bar.
+1. Track the commit progress in the notifications bar.
> [!NOTE]
-> - After committing the move, VMs stop replicating. The source VM isn't impacted by the commit.
-> - Commit doesn't impact source networking resources.
-> - After committing the move, resources are in a *Delete source pending* state.
+> - After you've committed the move, the VMs stop replicating. The source VM is unaffected by the commit.
+> - The commit process doesn't affect the source networking resources.
+> - After you've committed the move, the resource statuses change to *Delete source pending*.
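The commit has a scripted counterpart as well. A hedged sketch with placeholder names:

```powershell
# Commit the move; the resources change to "Delete source pending".
Invoke-AzResourceMoverCommit `
    -ResourceGroupName "RegionMoveRG-eastus-westus" `
    -MoveCollectionName "movecollection-eastus-westus" `
    -MoveResource @("demo-source-vm")
```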
## Configure settings after the move

-- The Mobility service isn't uninstalled automatically from VMs. Uninstall it manually, or leave it if you plan to move the server again.
-- Modify Azure role-based access control (Azure RBAC) rules after the move.
+- The mobility service isn't uninstalled automatically from VMs. Uninstall it manually, or leave it if you plan to move the server again.
+- Modify Azure role-based access control (RBAC) rules after the move.
## Delete source resources after commit

After the move, you can optionally delete resources in the source region.
-1. In **Across Regions**, select each source resource that you want to delete. then select **Delete source**.
-2. In **Delete source**, review what you're intending to delete, and in **Confirm delete**, type **yes**. The action is irreversible, so check carefully!
-3. After typing **yes**, select **Delete source**.
+1. On the **Across regions** pane, select each source resource that you want to delete, and then select **Delete source**.
+1. In **Delete source**, review what you intend to delete and, in **Confirm delete**, type **yes**. The action is irreversible, so check carefully!
+1. After you type **yes**, select **Delete source**.
> [!NOTE]
-> In the Resource Move portal, you can't delete resource groups, key vaults, or SQL Server servers. You need to delete these individually from the properties page for each resource.
+> In the Resource Mover portal, you can't delete resource groups, key vaults, or SQL Server instances. You need to delete each individually from the properties page for each resource.
-## Delete additional resources created for move
+## Delete resources that you created for the move
-After the move, you can manually delete the move collection, and Site Recovery resources that were created.
+After the move, you can manually delete the move collection and Site Recovery resources that you created during this process.
- The move collection is hidden by default. To see it, you need to turn on hidden resources.
- The cache storage account has a lock that must be removed before the account can be deleted.
-Delete as follows:
+To delete your resources, do the following:
1. Locate the resources in resource group ```RegionMoveRG-<sourceregion>-<target-region>```.
-2. Check that all the VM and other source resources in the source region have been moved or deleted. This ensures that there are no pending resources using them.
-2. Delete the resources:
+1. Check to ensure that all the VMs and other source resources in the source region have been moved or deleted. This step ensures that there are no pending resources using them.
+1. Delete the resources:
- - The move collection name is ```movecollection-<sourceregion>-<target-region>```.
- - The cache storage account name is ```resmovecache<guid>```
- - The vault name is ```ResourceMove-<sourceregion>-<target-region>-GUID```.
+ - Move collection name: ```movecollection-<sourceregion>-<target-region>```
+ - Cache storage account name: ```resmovecache<guid>```
+ - Vault name: ```ResourceMove-<sourceregion>-<target-region>-GUID```
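If you'd rather clean up from PowerShell, a hedged sketch; the resource group name below is a placeholder built from the pattern in step 1:

```powershell
# Remove the delete lock from the cache storage account first.
$locks = Get-AzResourceLock -ResourceGroupName "RegionMoveRG-eastus-westus"
foreach ($lock in $locks) {
    Remove-AzResourceLock -LockId $lock.LockId -Force
}

# Then delete the resource group that holds the move collection,
# cache storage account, and vault.
Remove-AzResourceGroup -Name "RegionMoveRG-eastus-westus" -Force
```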
## Next steps
In this tutorial, you:
> * Moved encrypted Azure VMs and their dependent resources to another Azure region.
-Now, trying moving Azure SQL databases and elastic pools to another region.
+As a next step, try moving Azure SQL databases and elastic pools to another region.
> [!div class="nextstepaction"] > [Move Azure SQL resources](./tutorial-move-region-sql.md)
route-server Route Server Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/route-server/route-server-faq.md
Previously updated : 03/29/2021 Last updated : 04/16/2021
Azure Route Server is a fully managed service that allows you to easily manage r
No. Azure Route Server is a service designed with high availability. If it's deployed in an Azure region that supports [Availability Zones](../availability-zones/az-overview.md), it will have zone-level redundancy.
+### How many route servers can I create in a virtual network?
+
+You can create only one route server in a VNet. It must be deployed in a designated subnet called *RouteServerSubnet*.
+
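For reference, a route server can also be deployed from PowerShell. A hedged sketch; the virtual network, resource group, and address prefix are placeholders, and the dedicated subnet must be named *RouteServerSubnet*, as noted above:

```powershell
# Add the dedicated RouteServerSubnet to an existing VNet.
$vnet = Get-AzVirtualNetwork -Name "demo-vnet" -ResourceGroupName "demo-rg"
Add-AzVirtualNetworkSubnetConfig -Name "RouteServerSubnet" `
    -AddressPrefix "10.0.1.0/27" -VirtualNetwork $vnet | Set-AzVirtualNetwork

# Deploy the route server into that subnet.
$subnet = (Get-AzVirtualNetwork -Name "demo-vnet" -ResourceGroupName "demo-rg").Subnets |
    Where-Object { $_.Name -eq "RouteServerSubnet" }
New-AzRouteServer -ResourceGroupName "demo-rg" -RouteServerName "demo-rs" `
    -Location "eastus" -HostedSubnet $subnet.Id
```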
+### Does Azure Route Server support VNet Peering?
+
+Yes. If you peer a VNet hosting the Azure Route Server to another VNet and you enable Use Remote Gateway on the latter VNet, Azure Route Server will learn the address spaces of that VNet and send them to all the peered NVAs. It will also program the routes from the NVAs into the routing table of the VMs in the peered VNet.
++

### <a name = "protocol"></a>What routing protocols does Azure Route Server support?

Azure Route Server supports Border Gateway Protocol (BGP) only. Your NVA needs to support multi-hop external BGP because you'll need to deploy Azure Route Server in a dedicated subnet in your virtual network. The [ASN](https://en.wikipedia.org/wiki/Autonomous_system_(Internet)) you choose must be different from the one Azure Route Server uses when you configure the BGP on your NVA.

### Does Azure Route Server route data traffic between my NVA and my VMs?
-No. Azure Route Server only exchanges BGP routes with your NVA. The data traffic goes directly from the NVA to the chosen VM and directly from the VM to the NVA.
+No. Azure Route Server only exchanges BGP routes with your NVA. The data traffic goes directly from the NVA to the destination VM and directly from the VM to the NVA.
### Does Azure Route Server store customer data?

No. Azure Route Server only exchanges BGP routes with your NVA and then propagates them to your virtual network.
-### If Azure Route Server receives the same route from more than one NVA, will it program all copies of the route (but each with a different next hop) to the VMs in the virtual network?
+### If Azure Route Server receives the same route from more than one NVA, how does it handle them?
-Yes, only if the route has the same AS path length. When the VMs send traffic to the destination of this route, the VM hosts will do Equal-Cost Multi-Path (ECMP) routing. However, if one NVA sends the route with a shorter AS path length than other NVAs. Azure Route Server will only program the route that has the next hop set to this NVA to the VMs in the virtual network.
+If the route has the same AS path length, Azure Route Server will program multiple copies of the route, each with a different next hop, to the VMs in the virtual network. When the VMs send traffic to the destination of this route, the VM hosts will do Equal-Cost Multi-Path (ECMP) routing. However, if one NVA sends the route with a shorter AS path length than other NVAs, Azure Route Server will only program the route that has the next hop set to this NVA to the VMs in the virtual network.
-### Does Azure Route Server support VNet Peering?
+### Does Azure Route Server preserve the BGP communities of the route it receives?
-Yes. If you peer a VNet hosting the Azure Route Server to another VNet and you enable Use Remote Gateway on that VNet. Azure Route Server will learn the address spaces of that VNet and send them to all the peered NVAs.
+Yes, Azure Route Server propagates the route with the BGP communities as is.
### What Autonomous System Numbers (ASNs) can I use?
security-center Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/deploy-vulnerability-assessment-vm.md
The vulnerability scanner extension works as follows:
>[!IMPORTANT] > If the deployment fails on one or more machines, ensure the target machines can communicate with Qualys' cloud service by adding the following URLs to your allow lists (via port 443 - the default for HTTPS): >
- > - https://qagpublic.qg3.apps.qualys.com - Qualys' US data center
- > - https://qagpublic.qg2.apps.qualys.eu - Qualys' European data center
+ > - 'https://qagpublic.qg3.apps.qualys.com' - Qualys' US data center
+ > - 'https://qagpublic.qg2.apps.qualys.eu' - Qualys' European data center
> > If your machine is in a European Azure region, its artifacts will be processed in Qualys' European data center. Artifacts for virtual machines located elsewhere are sent to the US data center.
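You can verify this connectivity from the machine itself with the built-in `Test-NetConnection` cmdlet; both checks should report `TcpTestSucceeded : True`:

```powershell
# Probe both Qualys data centers on port 443 (HTTPS).
Test-NetConnection -ComputerName "qagpublic.qg3.apps.qualys.com" -Port 443
Test-NetConnection -ComputerName "qagpublic.qg2.apps.qualys.eu" -Port 443
```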
The Azure Security Center vulnerability assessment extension (powered by Qualys)
During setup, Security Center checks to ensure that the machine can communicate with the following two Qualys data centers (via port 443 - the default for HTTPS):

-- https://qagpublic.qg3.apps.qualys.com - Qualys' US data center
-- https://qagpublic.qg2.apps.qualys.eu - Qualys' European data center
+- 'https://qagpublic.qg3.apps.qualys.com' - Qualys' US data center
+- 'https://qagpublic.qg2.apps.qualys.eu' - Qualys' European data center
The extension doesn't currently accept any proxy configuration details.
security-center Security Center Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-pricing.md
This data is a daily rate averaged across all nodes. So even if some machines se
### What data types are included in the 500-MB data daily allowance?
-Security Center's billing is closely tied to the billing for Log Analytics. Security Center provides a 500 MB/node/day allocation against the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category.md#security):
+Security Center's billing is closely tied to the billing for Log Analytics. Security Center provides a 500 MB/node/day allocation against the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security):
- WindowsEvent
- SecurityAlert
- SecurityBaseline
This article explained Security Center's pricing options. For related material,
- [How to optimize your Azure workload costs](https://azure.microsoft.com/blog/how-to-optimize-your-azure-workload-costs/)
- [Pricing details in your currency of choice, and according to your region](https://azure.microsoft.com/pricing/details/security-center/)
-- You may want to manage your costs and limit the amount of data collected for a solution by limiting it to a particular set of agents. [Solution targeting](../azure-monitor/insights/solution-targeting.md) allows you to apply a scope to the solution and target a subset of computers in the workspace. If you're using solution targeting, Security Center lists the workspace as not having a solution.
+- You may want to manage your costs and limit the amount of data collected for a solution by limiting it to a particular set of agents. [Solution targeting](../azure-monitor/insights/solution-targeting.md) allows you to apply a scope to the solution and target a subset of computers in the workspace. If you're using solution targeting, Security Center lists the workspace as not having a solution.
sentinel Bring Your Own Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/bring-your-own-ml.md
az monitor log-analytics workspace data-export list --resource-group "RG_NAME" -
### Export custom data
-For custom data that is not supported by Log Analytics auto-export, you can use Logic App or other solutions to move your data. You can refer to the [Exporting Log Analytics Data to Blob Store](https://www.borninthecloud.com/exporting-log-analytics-data-to-blob-store/?preview=true) blog and script.
+For custom data that is not supported by Log Analytics auto-export, you can use Logic App or other solutions to move your data. You can refer to the [Exporting Log Analytics Data to Blob Store](https://techcommunity.microsoft.com/t5/azure-monitor/log-analytics-data-export-preview/ba-p/1783530) blog and script.
### Correlate with data outside of Azure Sentinel
Once you've set up the analytics rule based on the ML results, if there are resu
In this document, you learned how to use Azure Sentinel's BYO-ML platform for creating or importing your own machine learning algorithms to analyze data and detect threats.

-- See posts about machine learning and lots of other relevant topics in the [Azure Sentinel Blog](https://aka.ms/azuresentinelblog).
+- See posts about machine learning and lots of other relevant topics in the [Azure Sentinel Blog](https://aka.ms/azuresentinelblog).
sentinel Connect Alcide Kaudit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-alcide-kaudit.md
Alcide kAudit can export logs directly to Azure Sentinel.
1. Select **Alcide kAudit** from the gallery, and then click the **Open connector page** button.
-1. Follow the step-by-step instructions provided in the [Alcide kAudit Installation Guide](https://get.alcide.io/hubfs/Azure%20Sentinel%20Integration%20with%20kAudit.pdf).
+1. Follow the step-by-step instructions provided in the [Alcide kAudit Installation Guide](https://awesomeopensource.com/project/alcideio/kaudit?categoryPage=29#before-installing-alcide-kaudit).
1. When asked for the Workspace ID and the Primary Key, you can copy them from the Alcide kAudit data connector page.
site-recovery Site Recovery Citrix Xenapp And Xendesktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/site-recovery-citrix-xenapp-and-xendesktop.md
Last updated 11/27/2018
-# set up disaster recovery for a multi-tier Citrix XenApp and XenDesktop deployment
+# End of support for disaster recovery of Citrix workloads
--
-Citrix XenDesktop is a desktop virtualization solution that delivers desktops and applications as an ondemand service to any user, anywhere. With FlexCast delivery technology, XenDesktop can quickly and
-securely deliver applications and desktops to users.
-Today, Citrix XenApp does not provide any disaster recovery capabilities.
-
-A good disaster recovery solution, should allow modeling of recovery plans around the above complex application architectures and also have the ability to add customized steps to handle application mappings between various tiers hence providing a single-click sure shot solution in the event of a disaster leading to a lower RTO.
-
-This document provides step-by-step guidance for building a disaster recovery solution for your on-premises Citrix XenApp deployments on Hyper-V and VMware vSphere platforms. This document also describes how to perform a test failover(disaster recovery drill) and unplanned
-failover to Azure using recovery plans, the supported configurations and prerequisites.
--
-## Prerequisites
-
-Before you start, make sure you understand the following:
-
-1. [Replicating a virtual machine to Azure](./vmware-azure-tutorial.md)
-1. How to [design a recovery network](./concepts-on-premises-to-azure-networking.md)
-1. [Doing a test failover to Azure](site-recovery-test-failover-to-azure.md)
-1. [Doing a failover to Azure](site-recovery-failover.md)
-1. How to [replicate a domain controller](site-recovery-active-directory.md)
-1. How to [replicate SQL Server](site-recovery-sql.md)
-
-## Deployment patterns
-
-A Citrix XenApp and XenDesktop farm typically have the following deployment pattern:
-
-**Deployment pattern**
-
-Citrix XenApp and XenDesktop deployment with AD DNS server, SQL database server, Citrix Delivery Controller, StoreFront server, XenApp Master (VDA), Citrix XenApp License Server
-
-![Deployment Pattern 1](./media/site-recovery-citrix-xenapp-and-xendesktop/citrix-deployment.png)
--
-## Site Recovery support
-
-For the purpose of this article, Citrix deployments on VMware virtual machines managed by vSphere 6.0 / System Center VMM 2012 R2 were used to setup DR.
-
-### Source and target
-
-**Scenario** | **To a secondary site** | **To Azure**
- | |
-**Hyper-V** | Not in scope | Yes
-**VMware** | Not in scope | Yes
-**Physical server** | Not in scope | Yes
-
-### Versions
-Customers can deploy XenApp components as Virtual Machines running on Hyper-V or VMware or as
-Physical Servers. Azure Site Recovery can protect both physical and virtual deployments to Azure.
-Since XenApp 7.7 or later is supported in Azure, only deployments with these versions can be failed over to Azure for Disaster Recovery or migration.
-
-### Things to keep in mind
-
-1. Protection and recovery of on-premises deployments using Server OS machines to deliver XenApp published apps and XenApp published desktops is supported.
-
-2. Protection and recovery of on-premises deployments using desktop OS machines to deliver Desktop VDI for client virtual desktops, including Windows 10, is not supported. This is because Site Recovery does not support the recovery of machines with desktop OS'es. Also, some client virtual desktop operating systems (eg. Windows 7) are not yet supported for licensing in Azure. [Learn More](https://azure.microsoft.com/pricing/licensing-faq/) about licensing for client/server desktops in Azure.
-
-3. Azure Site Recovery cannot replicate and protect existing on-premises MCS or PVS clones.
-You need to recreate these clones using Azure RM provisioning from Delivery controller.
-
-4. NetScaler cannot be protected using Azure Site Recovery as NetScaler is based on FreeBSD and Azure Site Recovery does not support protection of FreeBSD OS. You would need to deploy and configure a new NetScaler appliance from Azure Market place after failover to Azure.
--
-## Replicating virtual machines
-
-The following components of the Citrix XenApp deployment need to be protected to enable replication and recovery.
-
-* Protection of AD DNS server
-* Protection of SQL database server
-* Protection of Citrix Delivery Controller
-* Protection of StoreFront server.
-* Protection of XenApp Master (VDA)
-* Protection of Citrix XenApp License Server
--
-**AD DNS server replication**
-
-Please refer to [Protect Active Directory and DNS with Azure Site Recovery](site-recovery-active-directory.md) on guidance for replicating and configuring a domain controller in Azure.
-
-**SQL database Server replication**
-
-Please refer to [Protect SQL Server with SQL Server disaster recovery and Azure Site Recovery](site-recovery-sql.md) for detailed technical guidance on the recommended options for protecting SQL servers.
-
-Follow [this guidance](./vmware-azure-tutorial.md) to start replicating the other component virtual machines to Azure.
-
-![Protection of XenApp Components](./media/site-recovery-citrix-xenapp-and-xendesktop/citrix-enablereplication.png)
-
-**Compute and Network Settings**
-
-After the machines are protected (status shows as "Protected" under Replicated Items), the Compute and Network settings need to be configured.
-In Compute and Network > Compute properties, you can specify the Azure VM name and target size.
-Modify the name to comply with Azure requirements if you need to. You can also view and add information about the target network, subnet, and IP address that will be assigned to the Azure VM.
-
-Note the following:
-
-* You can set the target IP address. If you don't provide an address, the failed over machine will use DHCP. If you set an address that isn't available at failover, the failover won't work. The same target IP address can be used for test failover if the address is available in the test failover network.
-
-* For the AD/DNS server, retaining the on-premises address lets you specify the same address as the DNS server for the Azure Virtual network.
-
-The number of network adapters is dictated by the size you specify for the target virtual machine, as follows:
-
-* If the number of network adapters on the source machine is less than or equal to the number of adapters allowed for the target machine size, then the target will have the same number of adapters as the source.
-* If the number of adapters for the source virtual machine exceeds the number allowed for the target size then the target size maximum will be used.
-* For example, if a source machine has two network adapters and the target machine size supports four, the target machine will have two adapters. If the source machine has two adapters but the supported target size only supports one then the target machine will have only one adapter.
-* If the virtual machine has multiple network adapters they will all connect to the same network.
-* If the virtual machine has multiple network adapters, then the first one shown in the list becomes the Default network adapter in the Azure virtual machine.
--
-## Creating a recovery plan
-
-After replication is enabled for the XenApp component VMs, the next step is to create a recovery plan.
-A recovery plan groups together virtual machines with similar requirements for failover and recovery.
-
-**Steps to create a recovery plan**
-
-1. Add the XenApp component virtual machines in
-the Recovery Plan.
-2. Click Recovery Plans -> + Recovery Plan. Provide an intuitive name for the recovery plan.
-3. For VMware virtual machines: Select source as VMware process server, target as Microsoft Azure,
-and deployment model as Resource Manager and click on Select items.
-4. For Hyper-V virtual machines:
-Select source as VMM server, target as Microsoft Azure, and deployment model as Resource Manager and
-click on Select items and then select the XenApp deployment VMs.
-
-### Adding virtual machines to failover groups
-
-Recovery plans can be customized to add failover groups for specific startup order, scripts or manual actions. The following groups need to be added to the recovery plan.
-
-1. Failover Group1: AD DNS
-2. Failover Group2: SQL Server VMs
-2. Failover Group3: VDA Master Image VM
-3. Failover Group4: Delivery Controller and StoreFront server VMs
--
-### Adding scripts to the recovery plan
-
-Scripts can be run before or after a specific group in a recovery plan. Manual actions can also be included and performed during failover.
-
-The customized recovery plan looks like the below:
-
-1. Failover Group1: AD DNS
-2. Failover Group2: SQL Server VMs
-3. Failover Group3: VDA Master Image VM
-
- >[!NOTE]
 >Steps 4, 6 and 7 containing manual or script actions are applicable to only an on-premises XenApp environment with MCS/PVS catalogs.
-
-4. Group 3 Manual or script action: Shut down master VDA VM.
-The Master VDA VM when failed over to Azure will be in a running state. To create new MCS
-catalogs using Azure hosting, the master VDA VM is required to be in Stopped (de allocated)
-state. Shutdown the VM from Azure portal.
-
-5. Failover Group4: Delivery Controller and StoreFront server VMs
-6. Group3 manual or script action 1:
-
- ***Add Azure RM host connection***
-
- Create Azure host connection in Delivery Controller machine to provision new MCS
- catalogs in Azure. Follow the steps as explained in this [article](https://www.citrix.com/blogs/2016/07/21/connecting-to-azure-resource-manager-in-xenapp-xendesktop/).
-
-7. Group3 manual or script action 2:
-
- ***Re-create MCS Catalogs in Azure***
-
- The existing MCS or PVS clones on the primary site will not be replicated to Azure. You need to recreate these clones using the replicated master VDA and Azure provisioning from Delivery controller. Follow the steps as explained in this [article](https://www.citrix.com/blogs/2016/09/12/using-xenapp-xendesktop-in-azure-resource-manager/) to create MCS catalogs in Azure.
-
-![Recovery plan for XenApp Components](./media/site-recovery-citrix-xenapp-and-xendesktop/citrix-recoveryplan.png)
--
- >[!NOTE]
 >You can use scripts at [location](https://github.com/Azure/azure-quickstart-templates/tree/master/asr-automation-recovery/scripts) to update the DNS with the new IPs of the failed over virtual machines or to attach a load balancer on the failed over virtual machine, if needed.
--
-## Doing a test failover
-
-Follow [this guidance](site-recovery-test-failover-to-azure.md) to do a test failover.
-
-![Recovery Plan](./media/site-recovery-citrix-xenapp-and-xendesktop/citrix-tfo.png)
--
-## Doing a failover
-
-Follow [this guidance](site-recovery-failover.md) when you are doing a failover.
-
-## Next steps
-
-You can [learn more](https://aka.ms/citrix-xenapp-xendesktop-with-asr) about replicating Citrix XenApp and XenDesktop deployments in this white paper. Look at the guidance to [replicate other applications](site-recovery-workload.md) using Site Recovery.
+As of March 2020, Citrix has announced deprecation and end-of-support for public cloud hosted workloads. Therefore, we do not recommend using Site Recovery for protecting Citrix workloads.
site-recovery Site Recovery Workload https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/site-recovery-workload.md
Site Recovery can replicate any app running on a supported machine. We've partne
| Linux (operating system and apps) |Yes (tested by Microsoft) |Yes (tested by Microsoft) |Yes (tested by Microsoft) |Yes (tested by Microsoft) |Yes (tested by Microsoft)|
| Dynamics AX |Yes |Yes |Yes |Yes |Yes|
| Windows File Server |Yes |Yes |Yes |Yes |Yes|
-| Citrix XenApp and XenDesktop |Yes|N/A |Yes |N/A |Yes |
+| Citrix XenApp and XenDesktop |No|N/A |No |N/A |No |
## Replicate Active Directory and DNS
Azure Site Recovery provides disaster recovery by replicating the critical compo
## Protect Citrix XenApp and XenDesktop
-Use Site Recovery to protect your Citrix XenApp and XenDesktop deployments, as follows:
-
-- Enable protection of the Citrix XenApp and XenDesktop deployment. Replicate the different deployment layers to Azure: Active Directory, DNS server, SQL database server, Citrix Delivery Controller, StoreFront server, XenApp Master (VDA), Citrix XenApp License Server.
-- Simplify cloud migration, by using Site Recovery to migrate your Citrix XenApp and XenDesktop deployment to Azure.
-- Simplify Citrix XenApp/XenDesktop testing, by creating a production-like copy on-demand for testing and debugging.
-- This solution only applies to Windows Server virtual desktops and not client virtual desktops. Client virtual desktops aren't yet supported for licensing in Azure. [Learn More](https://azure.microsoft.com/pricing/licensing-faq/) about licensing for client/server desktops in Azure.
-
-[Learn more](site-recovery-citrix-xenapp-and-xendesktop.md) about disaster recovery for Citrix XenApp and XenDesktop deployments. Or, you can refer to the [Citrix whitepaper](https://aka.ms/citrix-xenapp-xendesktop-with-asr).
+As of March 2020, Citrix has announced deprecation and end-of-support for public cloud hosted workloads. Therefore, we do not recommend using Site Recovery for protecting Citrix workloads.
## Next steps
spatial-anchors Coarse Reloc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spatial-anchors/concepts/coarse-reloc.md
This table estimates the expected search space for each sensor type:
| **BLE beacons** | 70 m | Determined by the range of the beacon. Depends on the frequency, transmission strength, physical obstructions, interference, and so on. | <!-- Reference links in article -->
-[1]: https://developers.google.com/beacons/eddystone
+[1]: https://developer.estimote.com/eddystone/
[2]: https://developer.apple.com/ibeacon/ [3]: https://developer.android.com/reference/android/location/LocationManager [4]: https://developer.apple.com/documentation/corelocation/cllocationmanager?language=objc
storage Storage Sync Files Planning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-sync-files-planning.md
When Data Deduplication is enabled on a volume with cloud tiering enabled, Dedup
Note the volume savings only apply to the server; your data in the Azure file share will not be deduped. > [!Note]
-> To support Data Deduplication on volumes with cloud tiering enabled on Windows Server 2019, Windows update [KB4520062](https://support.microsoft.com/help/4520062) must be installed and Azure File Sync agent version 9.0.0.0 or newer is required.
+> To support Data Deduplication on volumes with cloud tiering enabled on Windows Server 2019, Windows update [KB4520062 - October 2019](https://support.microsoft.com/help/4520062) or a later monthly rollup update must be installed and Azure File Sync agent version 12.0.0.0 or newer is required.
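To check these prerequisites on the server, a hedged sketch; the drive letter is a placeholder, and the agent's display name is assumed to be "Storage Sync Agent":

```powershell
# Verify the October 2019 rollup is present (reports nothing if a later
# rollup superseded it, so also check Windows Update history).
Get-HotFix -Id "KB4520062" -ErrorAction SilentlyContinue

# Check the installed Azure File Sync agent version (12.0.0.0 or newer).
Get-Package -Name "Storage Sync Agent" -ErrorAction SilentlyContinue |
    Select-Object Name, Version

# Enable Data Deduplication on the cloud-tiering volume.
Enable-DedupVolume -Volume "D:" -UsageType Default
```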
**Windows Server 2012 R2**

Azure File Sync does not support Data Deduplication and cloud tiering on the same volume on Windows Server 2012 R2. If Data Deduplication is enabled on a volume, cloud tiering must be disabled.
storage Storage Sync Files Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-sync-files-troubleshoot.md
StorageSyncAgent.msi /l*v AFSInstaller.log
Review installer.log to determine the cause of the installation failure.
+<a id="agent-installation-gpo"></a>**Agent installation fails with error: Storage Sync Agent Setup Wizard ended prematurely because of an error**
+
+In the agent installation log, the following error is logged:
+
+```
+CAQuietExec64: + CategoryInfo : SecurityError: (:) , PSSecurityException
+CAQuietExec64: + FullyQualifiedErrorId : UnauthorizedAccess
+CAQuietExec64: Error 0x80070001: Command line returned an error.
+```
+
+This issue occurs if the [PowerShell execution policy](https://docs.microsoft.com/powershell/module/microsoft.powershell.core/about/about_execution_policies#use-group-policy-to-manage-execution-policy) is configured using group policy and the policy setting is "Allow only signed scripts." All scripts included with the Azure File Sync agent are signed. The Azure File Sync agent installation fails because the installer is performing the script execution using the Bypass execution policy setting.
+
+To resolve this issue, temporarily disable the [Turn on Script Execution](https://docs.microsoft.com/powershell/module/microsoft.powershell.core/about/about_execution_policies#use-group-policy-to-manage-execution-policy) group policy setting on the server. Once the agent installation completes, the group policy setting can be re-enabled.
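To confirm where the effective policy comes from before touching the GPO, a minimal sketch:

```powershell
# Show the execution policy at each scope. A value under MachinePolicy
# or UserPolicy indicates it's being enforced through group policy.
Get-ExecutionPolicy -List
```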
+ <a id="agent-installation-on-DC"></a>**Agent installation fails on Active Directory Domain Controller** If you try to install the sync agent on an Active Directory domain controller where the PDC role owner is on a Windows Server 2008 R2 or below OS version, you may hit the issue where the sync agent will fail to install.
The table below contains all of the unicode characters Azure File Sync does not
| Character set | Character count |
|--|--|
+| 0x00000000 - 0x0000001F (control characters) | 32 |
+| <ul><li>0x00000022 (quotation mark)</li><li>0x0000002A (asterisk)</li><li>0x0000002F (forward slash)</li><li>0x0000003A (colon)</li><li>0x0000003C (less than)</li><li>0x0000003E (greater than)</li><li>0x0000003F (question mark)</li><li>0x0000005C (backslash)</li><li>0x0000007C (pipe or bar)</li></ul> | 9 |
+| <ul><li>0x0004FFFE - 0x0004FFFF = 2 (noncharacter)</li><li>0x0008FFFE - 0x0008FFFF = 2 (noncharacter)</li><li>0x000CFFFE - 0x000CFFFF = 2 (noncharacter)</li><li>0x0010FFFE - 0x0010FFFF = 2 (noncharacter)</li></ul> | 8 |
| <ul><li>0x0000009D (osc operating system command)</li><li>0x00000090 (dcs device control string)</li><li>0x0000008F (ss3 single shift three)</li><li>0x00000081 (high octet preset)</li><li>0x0000007F (del delete)</li><li>0x0000008D (ri reverse line feed)</li></ul> | 6 |
-| 0x0000FDD0 - 0x0000FDEF (Arabic presentation forms-a) | 32 |
-| 0x0000FFF0 - 0x0000FFFF (specials) | 16 |
-| <ul><li>0x0001FFFE - 0x0001FFFF = 2 (noncharacter)</li><li>0x0002FFFE - 0x0002FFFF = 2 (noncharacter)</li><li>0x0003FFFE - 0x0003FFFF = 2 (noncharacter)</li><li>0x0004FFFE - 0x0004FFFF = 2 (noncharacter)</li><li>0x0005FFFE - 0x0005FFFF = 2 (noncharacter)</li><li>0x0006FFFE - 0x0006FFFF = 2 (noncharacter)</li><li>0x0007FFFE - 0x0007FFFF = 2 (noncharacter)</li><li>0x0008FFFE - 0x0008FFFF = 2 (noncharacter)</li><li>0x0009FFFE - 0x0009FFFF = 2 (noncharacter)</li><li>0x000AFFFE - 0x000AFFFF = 2 (noncharacter)</li><li>0x000BFFFE - 0x000BFFFF = 2 (noncharacter)</li><li>0x000CFFFE - 0x000CFFFF = 2 (noncharacter)</li><li>0x000DFFFE - 0x000DFFFF = 2 (noncharacter)</li><li>0x000EFFFE - 0x000EFFFF = 2 (undefined)</li><li>0x000FFFFE - 0x000FFFFF = 2 (supplementary private use area)</li></ul> | 30 |
-| 0x0010FFFE, 0x0010FFFF | 2 |
+| 0x0000FFF0, 0x0000FFFD, 0x0000FFFE, 0x0000FFFF (specials) | 4 |
+| Files or directories that end with a period | 1 |
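+
+As a rough, non-exhaustive sketch (the share path is a placeholder, and only control characters, the 0xFFF0/0xFFFD-0xFFFF specials, and trailing periods are checked), you could scan a server endpoint for affected names with PowerShell:
+
+```powershell
+# Placeholder path; extend the regex to cover the full ranges listed above.
+Get-ChildItem -Path 'D:\SyncShare' -Recurse |
+    Where-Object { $_.Name.EndsWith('.') -or $_.Name -match '[\x00-\x1f\uFFF0\uFFFD-\uFFFF]' } |
+    Select-Object -ExpandProperty FullName
+```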
### Common sync errors <a id="-2147023673"></a>**The sync session was canceled.**
stream-analytics Manage Jobs Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/manage-jobs-cluster.md
Previously updated : 09/22/2020 Last updated : 04/16/2021 # Add and Remove jobs in an Azure Stream Analytics cluster
synapse-analytics Performance Tuning Ordered Cci https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/performance-tuning-ordered-cci.md
FROM sys.pdw_nodes_partitions AS pnp
JOIN sys.pdw_nodes_column_store_segments AS cls ON pnp.partition_id = cls.partition_id AND pnp.distribution_id = cls.distribution_id JOIN sys.columns as cols ON o.object_id = cols.object_id AND cls.column_id = cols.column_id WHERE o.name = '<Table Name>' and cols.name = '<Column Name>' and TMap.physical_name not like '%HdTable%'
-ORDER BY o.name, pnp.distribution_id, cls.min_data_id
+ORDER BY o.name, pnp.distribution_id, cls.min_data_id;
```
In this example, table T1 has a clustered columnstore index ordered in the seque
```sql CREATE CLUSTERED COLUMNSTORE INDEX MyOrderedCCI ON T1
-ORDER (Col_C, Col_B, Col_A)
+ORDER (Col_C, Col_B, Col_A);
```
Creating an ordered CCI is an offline operation. For tables with no partitions,
> For a dedicated SQL pool table with an ordered CCI, ALTER INDEX REBUILD will re-sort the data using tempdb. Monitor tempdb during rebuild operations. If you need more tempdb space, scale up the pool. Scale back down once the index rebuild is complete. > > For a dedicated SQL pool table with an ordered CCI, ALTER INDEX REORGANIZE does not re-sort the data. To resort data, use ALTER INDEX REBUILD.
+>
+> For more information on ordered CCI maintenance, see [Optimizing clustered columnstore indexes](sql-data-warehouse-tables-index.md#optimizing-clustered-columnstore-indexes).
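+
+As a minimal sketch reusing the index and table names from the example above, a rebuild that re-sorts an ordered CCI looks like:
+
+```sql
+-- REBUILD re-sorts the data; REORGANIZE would not.
+ALTER INDEX MyOrderedCCI ON T1 REBUILD;
+```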
## Examples
Creating an ordered CCI is an offline operation. For tables with no partitions,
SELECT object_name(c.object_id) table_name, c.name column_name, i.column_store_order_ordinal FROM sys.index_columns i JOIN sys.columns c ON i.object_id = c.object_id AND c.column_id = i.column_id
-WHERE column_store_order_ordinal <>0
+WHERE column_store_order_ordinal <>0;
``` **B. To change column ordinal, add or remove columns from the order list, or to change from CCI to ordered CCI:** ```sql
-CREATE CLUSTERED COLUMNSTORE INDEX InternetSales ON InternetSales
+CREATE CLUSTERED COLUMNSTORE INDEX InternetSales ON dbo.InternetSales
ORDER (ProductKey, SalesAmount)
-WITH (DROP_EXISTING = ON)
+WITH (DROP_EXISTING = ON);
``` ## Next steps
synapse-analytics Sql Data Warehouse Tables Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-index.md
Title: Indexing tables description: Recommendations and examples for indexing tables in dedicated SQL pool. - Previously updated : 03/18/2019 Last updated : 04/16/2021+
To create a table with an index, see the [CREATE TABLE (dedicated SQL pool)](/sq
## Clustered columnstore indexes
-By default, dedicated SQL pool creates a clustered columnstore index when no index options are specified on a table. Clustered columnstore tables offer both the highest level of data compression as well as the best overall query performance. Clustered columnstore tables will generally outperform clustered index or heap tables and are usually the best choice for large tables. For these reasons, clustered columnstore is the best place to start when you are unsure of how to index your table.
+By default, dedicated SQL pool creates a clustered columnstore index when no index options are specified on a table. Clustered columnstore tables offer both the highest level of data compression and the best overall query performance. Clustered columnstore tables will generally outperform clustered index or heap tables and are usually the best choice for large tables. For these reasons, clustered columnstore is the best place to start when you are unsure of how to index your table.
To create a clustered columnstore table, simply specify CLUSTERED COLUMNSTORE INDEX in the WITH clause, or leave the WITH clause off:
WITH ( CLUSTERED COLUMNSTORE INDEX );
There are a few scenarios where clustered columnstore may not be a good option: -- Columnstore tables do not support varchar(max), nvarchar(max) and varbinary(max). Consider heap or clustered index instead.
+- Columnstore tables do not support varchar(max), nvarchar(max), and varbinary(max). Consider heap or clustered index instead.
- Columnstore tables may be less efficient for transient data. Consider heap and perhaps even temporary tables. - Small tables with fewer than 60 million rows. Consider heap tables.
WITH ( HEAP );
## Clustered and nonclustered indexes
-Clustered indexes may outperform clustered columnstore tables when a single row needs to be quickly retrieved. For queries where a single or very few row lookup is required to perform with extreme speed, consider a cluster index or nonclustered secondary index. The disadvantage to using a clustered index is that only queries that benefit are the ones that use a highly selective filter on the clustered index column. To improve filter on other columns a nonclustered index can be added to other columns. However, each index which is added to a table adds both space and processing time to loads.
+Clustered indexes may outperform clustered columnstore tables when a single row needs to be quickly retrieved. For queries where a single or very few row lookup is required to perform with extreme speed, consider a clustered index or nonclustered secondary index. The disadvantage to using a clustered index is that the only queries that benefit are the ones that use a highly selective filter on the clustered index column. To improve filtering on other columns, a nonclustered index can be added to those columns. However, each index that is added to a table adds both space and processing time to loads.
To create a clustered index table, simply specify CLUSTERED INDEX in the WITH clause:
CREATE INDEX zipCodeIndex ON myTable (zipCode);
Clustered columnstore tables organize data into segments. Having high segment quality is critical to achieving optimal query performance on a columnstore table. Segment quality can be measured by the number of rows in a compressed row group. Segment quality is most optimal where there are at least 100K rows per compressed row group, and performance improves as the number of rows per row group approaches 1,048,576 rows, which is the most rows a row group can contain.
-The below view can be created and used on your system to compute the average rows per row group and identify any sub-optimal cluster columnstore indexes. The last column on this view generates a SQL statement which can be used to rebuild your indexes.
+The below view can be created and used on your system to compute the average rows per row group and identify any sub-optimal clustered columnstore indexes. The last column on this view generates a SQL statement that can be used to rebuild your indexes.
```sql CREATE VIEW dbo.vColumnstoreDensity
WHERE COMPRESSED_rowgroup_rows_AVG < 100000
OR INVISIBLE_rowgroup_rows_AVG < 100000 ```
-Once you have run the query you can begin to look at the data and analyze your results. This table explains what to look for in your row group analysis.
+Once you have run the query, you can begin to look at the data and analyze your results. This table explains what to look for in your row group analysis.
| Column | How to use this data | | | |
Once you have run the query you can begin to look at the data and analyze your r
| [row_count_per_distribution_MAX] |If all rows are evenly distributed, this value would be the target number of rows per distribution. Compare this value with the compressed_rowgroup_count. | | [COMPRESSED_rowgroup_rows] |Total number of rows in columnstore format for the table. | | [COMPRESSED_rowgroup_rows_AVG] |If the average number of rows is significantly less than the maximum number of rows for a row group, then consider using CTAS or ALTER INDEX REBUILD to recompress the data |
-| [COMPRESSED_rowgroup_count] |Number of row groups in columnstore format. If this number is very high in relation to the table it is an indicator that the columnstore density is low. |
+| [COMPRESSED_rowgroup_count] |Number of row groups in columnstore format. If this number is very high in relation to the table, it is an indicator that the columnstore density is low. |
| [COMPRESSED_rowgroup_rows_DELETED] |Rows are logically deleted in columnstore format. If the number is high relative to table size, consider recreating the partition or rebuilding the index as this removes them physically. | | [COMPRESSED_rowgroup_rows_MIN] |Use this in conjunction with the AVG and MAX columns to understand the range of values for the row groups in your columnstore. A low number over the load threshold (102,400 per partition aligned distribution) suggests that optimizations are available in the data load | | [COMPRESSED_rowgroup_rows_MAX] |As above |
Once you have run the query you can begin to look at the data and analyze your r
| [CLOSED_rowgroup_rows_AVG] |As above | | [Rebuild_Index_SQL] |SQL to rebuild columnstore index for a table |
+## Impact of index maintenance
+
+The column `Rebuild_Index_SQL` in the `vColumnstoreDensity` view contains an `ALTER INDEX REBUILD` statement that can be used to rebuild your indexes. When rebuilding your indexes, be sure that you allocate enough memory to the session that rebuilds your index. To do this, increase the [resource class](resource-classes-for-workload-management.md) of a user that has permissions to rebuild the index on this table to the recommended minimum. For an example, see [Rebuilding indexes to improve segment quality](#rebuilding-indexes-to-improve-segment-quality) later in this article.
+
+For a table with an ordered clustered columnstore index, `ALTER INDEX REBUILD` will re-sort the data using tempdb. Monitor tempdb during rebuild operations. If you need more tempdb space, scale up the dedicated SQL pool. Scale back down once the index rebuild is complete.
+
+For a table with an ordered clustered columnstore index, `ALTER INDEX REORGANIZE` does not re-sort the data. To re-sort data, use `ALTER INDEX REBUILD`.
+
+For more information on ordered clustered columnstore indexes, see [Performance tuning with ordered clustered columnstore index](performance-tuning-ordered-cci.md).
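+
+As a hypothetical illustration (the pool name and service objective are placeholders), scaling a dedicated SQL pool up before a large rebuild and back down afterward can be done with T-SQL:
+
+```sql
+-- Placeholders: run against the master database of the logical server.
+ALTER DATABASE [myDedicatedPool] MODIFY (SERVICE_OBJECTIVE = 'DW1000c');
+```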
+ ## Causes of poor columnstore index quality If you have identified tables with poor segment quality, you want to identify the root cause. Below are some other common causes of poor segment quality:
These factors can cause a columnstore index to have significantly less than the
### Memory pressure when index was built
-The number of rows per compressed row group are directly related to the width of the row and the amount of memory available to process the row group. When rows are written to columnstore tables under memory pressure, columnstore segment quality may suffer. Therefore, the best practice is to give the session which is writing to your columnstore index tables access to as much memory as possible. Since there is a trade-off between memory and concurrency, the guidance on the right memory allocation depends on the data in each row of your table, the data warehouse units allocated to your system, and the number of concurrency slots you can give to the session which is writing data to your table.
+The number of rows per compressed row group is directly related to the width of the row and the amount of memory available to process the row group. When rows are written to columnstore tables under memory pressure, columnstore segment quality may suffer. Therefore, the best practice is to give the session that is writing to your columnstore index tables access to as much memory as possible. Since there is a trade-off between memory and concurrency, the guidance on the right memory allocation depends on the data in each row of your table, the data warehouse units allocated to your system, and the number of concurrency slots you can give to the session that is writing data to your table.
### High volume of DML operations
-A high volume of DML operations that update and delete rows can introduce inefficiency into the columnstore. This is especially true when the majority of the rows in a row group are modified.
+A high volume of DML operations that update and delete rows can introduce inefficiency into the columnstore. This is especially true when most of the rows in a row group are modified.
- Deleting a row from a compressed row group only logically marks the row as deleted. The row remains in the compressed row group until the partition or table is rebuilt. - Inserting a row adds the row to an internal rowstore table called a delta row group. The inserted row is not converted to columnstore until the delta row group is full and is marked as closed. Row groups are closed once they reach the maximum capacity of 1,048,576 rows.
Once your tables have been loaded with some data, follow the below steps to iden
### Step 1: Identify or create user which uses the right resource class
-One quick way to immediately improve segment quality is to rebuild the index. The SQL returned by the above view returns an ALTER INDEX REBUILD statement which can be used to rebuild your indexes. When rebuilding your indexes, be sure that you allocate enough memory to the session that rebuilds your index. To do this, increase the resource class of a user which has permissions to rebuild the index on this table to the recommended minimum.
+One quick way to immediately improve segment quality is to rebuild the index. The `Rebuild_Index_SQL` column of the view above contains an ALTER INDEX REBUILD statement that can be used to rebuild your indexes. When rebuilding your indexes, be sure that you allocate enough memory to the session that rebuilds your index. To do this, increase the resource class of a user that has permissions to rebuild the index on this table to the recommended minimum.
Below is an example of how to allocate more memory to a user by increasing their resource class. To work with resource classes, see [Resource classes for workload management](resource-classes-for-workload-management.md). ```sql
-EXEC sp_addrolemember 'xlargerc', 'LoadUser'
+EXEC sp_addrolemember 'xlargerc', 'LoadUser';
``` ### Step 2: Rebuild clustered columnstore indexes with higher resource class user
-Sign in as the user from step 1 (e.g. LoadUser), which is now using a higher resource class, and execute the ALTER INDEX statements. Be sure that this user has ALTER permission to the tables where the index is being rebuilt. These examples show how to rebuild the entire columnstore index or how to rebuild a single partition. On large tables, it is more practical to rebuild indexes a single partition at a time.
+Sign in as the user from step 1 (LoadUser), which is now using a higher resource class, and execute the ALTER INDEX statements. Be sure that this user has ALTER permission to the tables where the index is being rebuilt. These examples show how to rebuild the entire columnstore index or how to rebuild a single partition. On large tables, it is more practical to rebuild indexes a single partition at a time.
Alternatively, instead of rebuilding the index, you could copy the table to a new table [using CTAS](sql-data-warehouse-develop-ctas.md). Which way is best? For large volumes of data, CTAS is usually faster than [ALTER INDEX](/sql/t-sql/statements/alter-index-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true). For smaller volumes of data, ALTER INDEX is easier to use and won't require you to swap out the table. ```sql -- Rebuild the entire clustered index
-ALTER INDEX ALL ON [dbo].[DimProduct] REBUILD
+ALTER INDEX ALL ON [dbo].[DimProduct] REBUILD;
``` ```sql -- Rebuild a single partition
-ALTER INDEX ALL ON [dbo].[FactInternetSales] REBUILD Partition = 5
+ALTER INDEX ALL ON [dbo].[FactInternetSales] REBUILD Partition = 5;
``` ```sql -- Rebuild a single partition with archival compression
-ALTER INDEX ALL ON [dbo].[FactInternetSales] REBUILD Partition = 5 WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE)
+ALTER INDEX ALL ON [dbo].[FactInternetSales] REBUILD Partition = 5 WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);
``` ```sql -- Rebuild a single partition with columnstore compression
-ALTER INDEX ALL ON [dbo].[FactInternetSales] REBUILD Partition = 5 WITH (DATA_COMPRESSION = COLUMNSTORE)
+ALTER INDEX ALL ON [dbo].[FactInternetSales] REBUILD Partition = 5 WITH (DATA_COMPRESSION = COLUMNSTORE);
``` Rebuilding an index in dedicated SQL pool is an offline operation. For more information about rebuilding indexes, see the ALTER INDEX REBUILD section in [Columnstore Indexes Defragmentation](/sql/relational-databases/indexes/columnstore-indexes-defragmentation?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true), and [ALTER INDEX](/sql/t-sql/statements/alter-index-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true).
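A minimal sketch of the CTAS alternative mentioned above (the distribution column and new table name are assumptions):

```sql
-- Copy the data into a new table with a freshly built columnstore index,
-- then swap the names. The HASH distribution column is a placeholder.
CREATE TABLE [dbo].[DimProduct_rebuilt]
WITH (DISTRIBUTION = HASH(ProductKey), CLUSTERED COLUMNSTORE INDEX)
AS SELECT * FROM [dbo].[DimProduct];

RENAME OBJECT [dbo].[DimProduct] TO DimProduct_old;
RENAME OBJECT [dbo].[DimProduct_rebuilt] TO DimProduct;
```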
AND [OrderDateKey] < 20010101
ALTER TABLE [dbo].[FactInternetSales_20000101_20010101] SWITCH PARTITION 2 TO [dbo].[FactInternetSales] PARTITION 2 WITH (TRUNCATE_TARGET = ON); ```
-For more details about re-creating partitions using CTAS, see [Using partitions in dedicated SQL pool](sql-data-warehouse-tables-partition.md).
+For more information about re-creating partitions using CTAS, see [Using partitions in dedicated SQL pool](sql-data-warehouse-tables-partition.md).
## Next steps
virtual-machines Sizes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/sizes.md
This article describes the available sizes and options for the Azure virtual mac
| [Compute optimized](sizes-compute.md) | F, Fs, Fsv2 | High CPU-to-memory ratio. Good for medium traffic web servers, network appliances, batch processes, and application servers. | | [Memory optimized](sizes-memory.md) | Esv3, Ev3, Easv4, Eav4, Ev4, Esv4, Edv4, Edsv4, Mv2, M, DSv2, Dv2 | High memory-to-CPU ratio. Great for relational database servers, medium to large caches, and in-memory analytics. | | [Storage optimized](sizes-storage.md) | Lsv2 | High disk throughput and IO ideal for Big Data, SQL, NoSQL databases, data warehousing and large transactional databases. |
-| [GPU](sizes-gpu.md) | NC, NCv2, NCv3, NCasT4_v3 (Preview), ND, NDv2 (Preview), NV, NVv3, NVv4 | Specialized virtual machines targeted for heavy graphic rendering and video editing, as well as model training and inferencing (ND) with deep learning. Available with single or multiple GPUs. |
+| [GPU](sizes-gpu.md) | NC, NCv2, NCv3, NCasT4_v3, ND, NDv2, NV, NVv3, NVv4 | Specialized virtual machines targeted for heavy graphic rendering and video editing, as well as model training and inferencing (ND) with deep learning. Available with single or multiple GPUs. |
| [High performance compute](sizes-hpc.md) | HB, HBv2, HC, H | Our fastest and most powerful CPU virtual machines with optional high-throughput network interfaces (RDMA). | - For information about pricing of the various sizes, see the pricing pages for [Linux](https://azure.microsoft.com/pricing/details/virtual-machines/#Linux) or [Windows](https://azure.microsoft.com/pricing/details/virtual-machines/Windows/#Windows).
virtual-machines Compiling Scaling Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/hpc/compiling-scaling-applications.md
Previously updated : 03/25/2021 Last updated : 04/16/2021
gcc $(OPTIMIZATIONS) $(OMP) $(STACK) $(STREAM_PARAMETERS) stream.c -o stream.gcc
## Next steps
-Learn more about [HPC](/azure/architecture/topics/high-performance-computing/) on Azure.
+- Test your knowledge with a [learning module on optimizing HPC applications on Azure](https://docs.microsoft.com/learn/modules/optimize-tightly-coupled-hpc-apps/).
+- Review the [HBv3-series overview](hbv3-series-overview.md) and [HC-series overview](hc-series-overview.md).
+- Read about the latest announcements, HPC workload examples, and performance results at the [Azure Compute Tech Community Blogs](https://techcommunity.microsoft.com/t5/azure-compute/bg-p/AzureCompute).
+- Learn more about [HPC](/azure/architecture/topics/high-performance-computing/) on Azure.
virtual-machines Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/hpc/configure.md
Optionally, the WALinuxAgent may be disabled as a pre-job step and enabled back
## Next steps - Learn more about [enabling InfiniBand](enable-infiniband.md) on the InfiniBand-enabled [H-series](../../sizes-hpc.md) and [N-series](../../sizes-gpu.md) VMs.-- Learn more about installing various [supported MPI libraries](setup-mpi.md) and their optimal configuration on the VMs.
+- Learn more about installing and running various [supported MPI libraries](setup-mpi.md) on the VMs.
- Review the [HBv3-series overview](hbv3-series-overview.md) and [HC-series overview](hc-series-overview.md). - Read about the latest announcements, HPC workload examples, and performance results at the [Azure Compute Tech Community Blogs](https://techcommunity.microsoft.com/t5/azure-compute/bg-p/AzureCompute). - For a higher level architectural view of running HPC workloads, see [High Performance Computing (HPC) on Azure](/azure/architecture/topics/high-performance-computing/).
virtual-machines Enable Infiniband https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/hpc/enable-infiniband.md
sudo systemctl restart waagent
## Next steps -- Learn more about installing various [supported MPI libraries](setup-mpi.md) and their optimal configuration on the VMs.
+- Learn more about installing and running various [supported MPI libraries](setup-mpi.md) on the VMs.
- Review the [HBv3-series overview](hbv3-series-overview.md) and [HC-series overview](hc-series-overview.md). - Read about the latest announcements, HPC workload examples, and performance results at the [Azure Compute Tech Community Blogs](https://techcommunity.microsoft.com/t5/azure-compute/bg-p/AzureCompute). - For a higher level architectural view of running HPC workloads, see [High Performance Computing (HPC) on Azure](/azure/architecture/topics/high-performance-computing/).
virtual-machines Hb Hc Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/hpc/hb-hc-known-issues.md
Previously updated : 03/25/2021 Last updated : 04/16/2021
This article attempts to list recent common issues and their solutions when using the [H-series](../../sizes-hpc.md) and [N-series](../../sizes-gpu.md) HPC and GPU VMs.
+## qp0 Access Restriction
+
+To prevent low-level hardware access that can result in security vulnerabilities, Queue Pair 0 is not accessible to guest VMs. This should only affect actions typically associated with administration of the ConnectX InfiniBand NIC, and running some InfiniBand diagnostics like ibdiagnet, but not end-user applications.
+ ## MOFED installation on Ubuntu
-On Ubuntu-18.04, the Mellanox OFED showed incompatibility with kernels version `5.4.0-1039-azure #42` and newer which causes an increase in VM boot time to about 30 minutes.
-This has been reported for both Mellanox OFED versions 5.2-1.0.4.0 and 5.2-2.2.0.0.
-The temporary solution is to use the **Canonical:UbuntuServer:18_04-lts-gen2:18.04.202101290** marketplace image or older and not to update the kernel.
-This issue is expected to be resolved with a newer MOFED (TBD).
+On Ubuntu-18.04 based marketplace VM images with kernel versions `5.4.0-1039-azure #42` and newer, some older Mellanox OFED versions are incompatible, causing an increase in VM boot time of up to 30 minutes in some cases. This has been reported for both Mellanox OFED versions 5.2-1.0.4.0 and 5.2-2.2.0.0. The issue is resolved with Mellanox OFED 5.3-1.0.0.1.
+If it is necessary to use an incompatible OFED version, a workaround is to use the **Canonical:UbuntuServer:18_04-lts-gen2:18.04.202101290** marketplace VM image or older and not update the kernel.
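+
+If pinning the older image is necessary, a hypothetical Azure CLI invocation (resource group, VM name, and size are placeholders) might look like:
+
+```bash
+# Placeholders: myResourceGroup, myHBv2VM, and the VM size are assumptions.
+az vm create \
+  --resource-group myResourceGroup \
+  --name myHBv2VM \
+  --image Canonical:UbuntuServer:18_04-lts-gen2:18.04.202101290 \
+  --size Standard_HB120rs_v2 \
+  --generate-ssh-keys
+```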
## MPI QP creation errors If InfiniBand QP creation errors such as those shown below are thrown in the midst of running MPI workloads, we suggest rebooting the VM and retrying the workload. This issue will be fixed in the future.
This 'duplicate MAC with cloud-init on Ubuntu' is a known issue. This will be re
EOF ```
-## qp0 Access Restriction
-
-To prevent low-level hardware access that can result in security vulnerabilities, Queue Pair 0 is not accessible to guest VMs. This should only affect actions typically associated with administration of the ConnectX-5 NIC, and running some InfiniBand diagnostics like ibdiagnet, but not end-user applications themselves.
- ## DRAM on HB-series VMs HB-series VMs can only expose 228 GB of RAM to guest VMs at this time. Similarly, 458 GB on HBv2 and 448 GB on HBv3 VMs. This is due to a known limitation of the Azure hypervisor to prevent pages from being assigned to the local DRAM of AMD CCX's (NUMA domains) reserved for the guest VM.
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/hpc/overview.md
Previously updated : 03/18/2021 Last updated : 04/09/2021
Fourth, for performance and scalability, optimally configure the workloads by fo
- Learn about [configuring and optimizing](configure.md) the InfiniBand enabled [H-series](../../sizes-hpc.md) and [N-series](../../sizes-gpu.md) VMs. - Review the [HBv3-series overview](hb-series-overview.md) and [HC-series overview](hc-series-overview.md) to learn about optimally configuring workloads for performance and scalability. - Read about the latest announcements, HPC workload examples, and performance results at the [Azure Compute Tech Community Blogs](https://techcommunity.microsoft.com/t5/azure-compute/bg-p/AzureCompute).
+- Test your knowledge with a [learning module on optimizing HPC applications on Azure](https://docs.microsoft.com/learn/modules/optimize-tightly-coupled-hpc-apps/).
- For a higher level architectural view of running HPC workloads, see [High Performance Computing (HPC) on Azure](/azure/architecture/topics/high-performance-computing/).
virtual-machines Setup Mpi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/hpc/setup-mpi.md
Previously updated : 03/18/2021 Last updated : 04/16/2021
Though the examples here are for RHEL/CentOS, the steps are general and can
> [!NOTE] > The code snippets below are examples. We recommend using the latest stable versions of the packages, or referring to the [azhpc-images repo](https://github.com/Azure/azhpc-images/blob/master/ubuntu/ubuntu-18.x/ubuntu-18.04-hpc/install_mpis.sh).
+## Choosing an MPI library
+If an HPC application recommends a particular MPI library, try that version first. If you have flexibility regarding which MPI you can choose, and you want the best performance, try HPC-X. Overall, the HPC-X MPI performs the best by using the UCX framework for the InfiniBand interface, and takes advantage of all the Mellanox InfiniBand hardware and software capabilities. Additionally, HPC-X and OpenMPI are ABI compatible, so you can dynamically run an HPC application with HPC-X that was built with OpenMPI. Similarly, Intel MPI, MVAPICH, and MPICH are ABI compatible.
+
+The following figure illustrates the architecture for the popular MPI libraries.
+
+![Architecture for popular MPI libraries](./media/mpi-architecture.png)
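+
+As a sketch of the ABI compatibility noted above (the process count and executable name are placeholders), an OpenMPI-built binary can be launched under the HPC-X runtime:
+
+```bash
+# hpcx-init.sh ships in the root of the HPC-X tarball; HPCX_PATH is set as
+# shown later in this article. The executable name is a placeholder.
+source ${HPCX_PATH}/hpcx-init.sh
+hpcx_load
+mpirun -np 16 --map-by ppr:1:numa ./app_built_with_openmpi
+hpcx_unload
+```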
+ ## UCX [Unified Communication X (UCX)](https://github.com/openucx/ucx) is a framework of communication APIs for HPC. It is optimized for MPI communication over InfiniBand and works with many MPI implementations such as OpenMPI and MPICH.
mv hpcx-${HPCX_VERSION}-gcc-MLNX_OFED_LINUX-5.0-1.0.0.0-redhat7.7-x86_64 ${INSTA
HPCX_PATH=${INSTALL_PREFIX}/hpcx-${HPCX_VERSION}-gcc-MLNX_OFED_LINUX-5.0-1.0.0.0-redhat7.7-x86_64 ```
-Run HPC-X
-
+The following command illustrates some recommended mpirun arguments for HPC-X and OpenMPI.
+```bash
+mpirun -n $NPROCS --hostfile $HOSTFILE --map-by ppr:$NUMBER_PROCESSES_PER_NUMA:numa:pe=$NUMBER_THREADS_PER_PROCESS -report-bindings $MPI_EXECUTABLE
+```
+where:
+
+|Parameter|Description |
+|||
+|`$NPROCS` |Specifies the number of MPI processes. For example: `-n 16`.|
+|`$HOSTFILE`|Specifies a file containing the hostnames or IP addresses of the nodes where the MPI processes will run. For example: `--hostfile hosts`.|
+|`$NUMBER_PROCESSES_PER_NUMA` |Specifies the number of MPI processes that will run in each NUMA domain. For example, to specify four MPI processes per NUMA, you use `--map-by ppr:4:numa:pe=1`.|
+|`$NUMBER_THREADS_PER_PROCESS` |Specifies the number of threads per MPI process. For example, to specify one MPI process and four threads per NUMA, you use `--map-by ppr:1:numa:pe=4`.|
+|`-report-bindings` |Prints MPI processes mapping to cores, which is useful to verify that your MPI process pinning is correct.|
+|`$MPI_EXECUTABLE` |Specifies the MPI executable with the MPI libraries linked in. MPI compiler wrappers such as `mpicc` and `mpif90` do this automatically.|
+
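+For example, a hypothetical invocation with those placeholders filled in (two VMs with four NUMA domains each listed in `hosts`, four processes per NUMA domain, one thread per process, and a placeholder executable `./my_app`) would be:
+
+```bash
+mpirun -n 32 --hostfile hosts --map-by ppr:4:numa:pe=1 -report-bindings ./my_app
+```
+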
+An example of running the OSU latency microbenchmark is as follows:
```bash ${HPCX_PATH}mpirun -np 2 --map-by ppr:2:node -x UCX_TLS=rc ${HPCX_PATH}/ompi/tests/osu-micro-benchmarks-5.3.2/osu_latency ```
${HPCX_PATH}mpirun -np 2 --map-by ppr:2:node -x UCX_TLS=rc ${HPCX_PATH}/ompi/tes
MPI Collective communication primitives offer a flexible, portable way to implement group communication operations. They are widely used across various scientific parallel applications and have a significant impact on the overall application performance. Refer to the [TechCommunity article](https://techcommunity.microsoft.com/t5/azure-compute/optimizing-mpi-collective-communication-using-hpc-x-on-azurehpc/ba-p/1356740) for details on configuration parameters to optimize collective communication performance using HPC-X and HCOLL library for collective communication.
+As an example, if you suspect your tightly coupled MPI application is doing an excessive amount of collective communication, you can try enabling hierarchical collectives (HCOLL). To enable those features, use the following parameters.
+```bash
+-mca coll_hcoll_enable 1 -x HCOLL_MAIN_IB=<MLX device>:<Port>
+```
+ > [!NOTE] > With HPC-X 2.7.4+, it may be necessary to explicitly pass LD_LIBRARY_PATH if the UCX version on MOFED vs. that in HPC-X is different.
cd openmpi-${OMPI_VERSION}
./configure --prefix=${INSTALL_PREFIX}/openmpi-${OMPI_VERSION} --with-ucx=${UCX_PATH} --with-hcoll=${HCOLL_PATH} --enable-mpirun-prefix-by-default --with-platform=contrib/platform/mellanox/optimized && make -j$(nproc) && make install ```
-For optimal performance, run OpenMPI with `ucx` and `hcoll`.
+For optimal performance, run OpenMPI with `ucx` and `hcoll`. Also see the example with [HPC-X](#hpc-x).
```bash ${INSTALL_PREFIX}/bin/mpirun -np 2 --map-by node --hostfile ~/hostfile -mca pml ucx --mca btl ^vader,tcp,openib -x UCX_NET_DEVICES=mlx5_0:1 -x UCX_IB_PKEY=0x0003 ./osu_latency
Check your partition key as mentioned above.
## Intel MPI
-Download your choice of version of [Intel MPI](https://software.intel.com/mpi-library/choose-download). Change the I_MPI_FABRICS environment variable depending on the version.
+Download your choice of version of [Intel MPI](https://software.intel.com/mpi-library/choose-download). The Intel MPI 2019 release switched from the Open Fabrics Alliance (OFA) framework to the Open Fabrics Interfaces (OFI) framework, and currently supports libfabric. There are two providers for InfiniBand support: mlx and verbs.
+Change the I_MPI_FABRICS environment variable depending on the version.
- Intel MPI 2019 and 2021: use `I_MPI_FABRICS=shm:ofi`, `I_MPI_OFI_PROVIDER=mlx`. The `mlx` provider uses UCX. Usage of verbs has been found to be unstable and less performant. See the [TechCommunity article](https://techcommunity.microsoft.com/t5/azure-compute/intelmpi-2019-on-azure-hpc-clusters/ba-p/1403149) for more details. - Intel MPI 2018: use `I_MPI_FABRICS=shm:ofa` - Intel MPI 2016: use `I_MPI_DAPL_PROVIDER=ofa-v2-ib0`
+Here are some suggested mpirun arguments for Intel MPI 2019 update 5+.
+```bash
+export FI_PROVIDER=mlx
+export I_MPI_DEBUG=5
+export I_MPI_PIN_DOMAIN=numa
+
+mpirun -n $NPROCS -f $HOSTFILE $MPI_EXECUTABLE
+```
+where:
+
+|Parameter|Description |
+|||
+|`FI_PROVIDER` |Specifies which libfabric provider to use, which will affect the API, protocol, and network used. verbs is another option, but generally mlx gives you better performance.|
+|`I_MPI_DEBUG`|Specifies the level of extra debug output, which can provide details about where processes are pinned, and which protocol and network are used.|
+|`I_MPI_PIN_DOMAIN` |Specifies how you want to pin your processes. For example, you can pin to cores, sockets, or NUMA domains. In this example, you set this environment variable to numa, which means processes will be pinned to NUMA node domains.|
+
+### Optimizing MPI collectives
+
+There are some other options that you can try, especially if collective operations are consuming a significant amount of time. Intel MPI 2019 update 5+ supports the provider mlx and uses the UCX framework to communicate with InfiniBand. It also supports HCOLL.
+```bash
+export FI_PROVIDER=mlx
+export I_MPI_COLL_EXTERNAL=1
+```
+ ### Non SR-IOV VMs+ For non SR-IOV VMs, an example of downloading the 5.x runtime [free evaluation version](https://registrationcenter.intel.com/en/forms/?productid=1740) is as follows: ```bash wget http://registrationcenter-download.intel.com/akdlm/irc_nas/tec/9278/l_mpi_p_5.1.3.223.tgz
For SUSE Linux Enterprise Server VM image versions - SLES 12 SP3 for HPC, SLES 1
sudo rpm -v -i --nodeps /opt/intelMPI/intel_mpi_packages/*.rpm ```
-## MVAPICH2
-
-Build MVAPICH2.
+## MVAPICH
+The following is an example of building MVAPICH2. Note that newer versions than the one used below may be available.
```bash wget http://mvapich.cse.ohio-state.edu/download/mvapich/mv2/mvapich2-2.3.tar.gz tar -xvf mvapich2-2.3.tar.gz
cd mvapich2-2.3
make -j 8 && make install ```
-Running MVAPICH2.
-
+An example of running the OSU latency microbenchmark is as follows:
```bash ${INSTALL_PREFIX}/bin/mpirun_rsh -np 2 -hostfile ~/hostfile MV2_CPU_MAPPING=48 ./osu_latency ```
+The following are several recommended runtime settings and an example `mpirun` command.
+```bash
+export MV2_CPU_BINDING_POLICY=scatter
+export MV2_CPU_BINDING_LEVEL=numanode
+export MV2_SHOW_CPU_BINDING=1
+export MV2_SHOW_HCA_BINDING=1
+
+mpirun -n $NPROCS -f $HOSTFILE $MPI_EXECUTABLE
+```
+where:
+
+|Parameter|Description |
+|||
+|`MV2_CPU_BINDING_POLICY` |Specifies which binding policy to use, which will affect how processes are pinned to core IDs. In this case, you specify scatter, so processes will be evenly scattered among the NUMA domains.|
+|`MV2_CPU_BINDING_LEVEL`|Specifies where to pin processes. In this case, you set it to numanode, which means processes are pinned to units of NUMA domains.|
+|`MV2_SHOW_CPU_BINDING` |Specifies if you want to get debug information about where the processes are pinned.|
+|`MV2_SHOW_HCA_BINDING` |Specifies if you want to get debug information about which host channel adapter each process is using.|
+ ## Platform MPI Install required packages for Platform MPI Community Edition.
virtual-machines Partner Workloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/mainframe-rehosting/partner-workloads.md
For more help with mainframe emulation and services, refer to the [Azure Mainfra
## Code conversion -- [Asysco](https://www.asysco.com/azure-cloud/) system conversion technology covering source code, data, batch, scheduling, TP monitors, interfaces, security, management, and more.
+- [Asysco](https://asysco.com/) system conversion technology covering source code, data, batch, scheduling, TP monitors, interfaces, security, management, and more.
- [Asysco AMT Services](https://www.asysco.com/migration-services/) end-to-end services for migration projects, including inventory and analysis, design training, dress rehearsals, go-live, and post-migration support. - [Blu Age](https://www.bluage.com/) tools for digitizing legacy business applications and databases. - [Heirloom Computing](https://www.heirloomcomputing.com/tag/convert-cobol-to-java/) services to convert mainframe COBOL, CICS, and VSAM to Java.
virtual-network Virtual Network Public Ip Address Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/virtual-network-public-ip-address-upgrade.md
A new resource group in Azure Resource Manager is created using the name of the
## Limitations * To upgrade a Basic public IP, it must not be associated with any Azure resource. Please review [this page](./virtual-network-public-ip-address.md#view-modify-settings-for-or-delete-a-public-ip-address) for more information on how to disassociate public IPs. Similarly, to migrate a reserved IP, it must not be associated with any Cloud Service. Please review [this page](./remove-public-ip-address-vm.md) for more information on how to disassociate reserved IPs.
-* Public IPs upgraded from Basic to Standard SKU will continue to have no [availability zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones) and therefore cannot be associated with an Azure resource that is either zone-redundant or zonal. Note this only applies to regions that offer availability zones.
+* Public IPs upgraded from Basic to Standard SKU continue to have no guaranteed [availability zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones). Keep this in mind when choosing which resources to associate the IP address with.
* You cannot downgrade from Standard to Basic. ## Next Steps