Updates from: 06/14/2022 01:11:26
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Hr Attribute Retrieval Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-attribute-retrieval-issues.md
# Troubleshoot HR attribute retrieval issues
-## Provisioning app is not fetching all Workday attributes
-**Applies to:**
-* Workday to on-premises Active Directory user provisioning
-* Workday to Azure Active Directory user provisioning
-
-| Troubleshooting | Details |
-|-- | -- |
-| **Issue** | You have just setup the Workday inbound provisioning app and successfully connected to the Workday tenant URL. You ran a test sync and you observed that the provisioning app is not retrieving all attributes from Workday. Only some attributes are read and provisioned to the target. |
-| **Cause** | By default, the Workday provisioning app ships with attribute mapping and XPATH definitions that work with Workday Web Services (WWS) v21.1. When configuring connectivity to Workday in the provisioning app, if you explicitly specified the WWS API version (example: `https://wd3-impl-services1.workday.com/ccx/service/contoso4/Human_Resources/v34.0`), then you may run into this issue, because of the mismatch between WWS API version and the XPATH definitions. |
-| **Resolution** | * *Option 1*: Remove the WWS API version information from the URL and use the default WWS API version v21.1 <br> * *Option 2*: Manually update the XPATH API expressions so it is compatible with your preferred WWS API version. Update the **XPATH API expressions** under **Attribute Mapping -> Advanced Options -> Edit attribute list for Workday** referring to the section [Workday attribute reference](../app-provisioning/workday-attribute-reference.md#xpath-values-for-workday-web-services-wws-api-v30) |
-
-## Provisioning app is not fetching Workday integration system attributes / calculated fields
-**Applies to:**
-* Workday to on-premises Active Directory user provisioning
-* Workday to Azure Active Directory user provisioning
-
-| Troubleshooting | Details |
-|-- | -- |
-| **Issue** | You have just setup the Workday inbound provisioning app and successfully connected to the Workday tenant URL. You have an integration system configured in Workday and you have configured XPATHs that point to attributes in the Workday Integration System. However, the Azure AD provisioning app is not fetching values associated with these integration system attributes or calculated fields. |
-| **Cause** | This is a known limitation. The Workday provisioning app currently does not support fetching calculated fields/integration system attributes. |
-| **Resolution** | There is no workaround for this limitation. |
+## Issue fetching Workday attributes
+
+| **Applies to** |
+|--|
+| * Workday to on-premises Active Directory user provisioning <br> * Workday to Azure Active Directory user provisioning |
+| **Issue Description** |
+| You have just configured the Workday inbound provisioning app and successfully connected to the Workday tenant URL. You ran a test sync and observed that the provisioning app isn't retrieving certain attributes from Workday. Only some attributes are read and provisioned to the target. |
+| **Probable Cause** |
+| By default, the Workday provisioning app ships with attribute mapping and XPATH definitions that work with Workday Web Services (WWS) v21.1. If you explicitly specified the WWS API version when configuring connectivity to Workday in the provisioning app (for example, `https://wd3-impl-services1.workday.com/ccx/service/contoso4/Human_Resources/v34.0`), you may run into this issue because of the mismatch between the WWS API version and the XPATH definitions. |
+| **Resolution Options** |
+| * *Option 1*: Remove the WWS API version information from the URL and use the default WWS API version v21.1 (see the example after this table). <br> * *Option 2*: Manually update the XPATH API expressions so they're compatible with your preferred WWS API version. Update the **XPATH API expressions** under **Attribute Mapping -> Advanced Options -> Edit attribute list for Workday** by referring to the section [Workday attribute reference](../app-provisioning/workday-attribute-reference.md#xpath-values-for-workday-web-services-wws-api-v30). |
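
To make *Option 1* concrete, here's an illustrative before/after view of the Workday tenant URL from the cause above. The tenant name `contoso4` is only the placeholder already used in this article; substitute your own tenant and web service path.

```
# URL pinned to an explicit WWS API version (can conflict with the default v21.1 XPATH definitions)
https://wd3-impl-services1.workday.com/ccx/service/contoso4/Human_Resources/v34.0

# URL with the version suffix removed (the provisioning app falls back to the default WWS v21.1)
https://wd3-impl-services1.workday.com/ccx/service/contoso4/Human_Resources
```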
+
+## Issue fetching Workday calculated fields
+
+| **Applies to** |
+|--|
+| * Workday to on-premises Active Directory user provisioning <br> * Workday to Azure Active Directory user provisioning |
+| **Issue Description** |
+| You have just configured the Workday inbound provisioning app and successfully connected to the Workday tenant URL. You have an integration system configured in Workday, with XPATHs that point to attributes in the Workday Integration System. However, the Azure AD provisioning app isn't fetching values associated with these integration system attributes or calculated fields. |
+| **Cause** |
+| This is a known limitation. The Workday provisioning app currently doesn't support fetching calculated fields/integration system attributes using the *Field_And_Parameter_Criteria_Data* Get_Workers request filter. |
+| **Resolution Options** |
+| As a workaround, consider using either Workday Provisioning Groups or a Workday Custom ID field. See the details below. |
+
+**Suggested workarounds**
+* **Option 1: Using Workday Provisioning Groups**: Check whether the calculated field value can be represented as a provisioning group in Workday. Using the same logic that is used for the calculated field, your Workday admin may be able to assign a Provisioning Group to the user. For reference, see the Workday documentation (Workday login required): [Set Up Account Provisioning Groups](https://doc.workday.com/reader/3DMnG~27o049IYFWETFtTQ/keT9jI30zCzj4Nu9pJfGeQ). Once configured, this Provisioning Group assignment can be [retrieved in the provisioning job](../app-provisioning/workday-integration-reference.md#example-3-retrieving-provisioning-group-assignments) and used in attribute mappings and scoping filters.
+* **Option 2: Using Workday Custom IDs**: Check if the calculated field value can be represented as a Custom ID on the Worker Profile. Use the `Maintain Custom ID Type` task in Workday to define a new type and populate values in this custom ID. Make sure the [Workday ISU account used for the integration](../saas-apps/workday-inbound-tutorial.md#configuring-domain-security-policy-permissions) has domain security permission for `Person Data: ID Information`. For example, you can define "External_Payroll_ID" as a custom ID in Workday and retrieve it using the XPATH: `wd:Worker/wd:Worker_Data/wd:Personal_Data/wd:Identification_Data/wd:Custom_ID/wd:Custom_ID_Data[wd:ID_Type_Reference/wd:ID[@wd:type=\"Custom_ID_Type_ID\"]=\"External_Payroll_ID\"]/wd:ID/text()`. A sample response fragment is shown after this list.
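
To show how that XPATH lines up with the data it reads, here's a minimal, hypothetical fragment of a `Get_Workers` response containing a Custom ID of type `External_Payroll_ID`. The element names come from the XPATH above; the exact nesting shown and the sample value `PAY-00123` are illustrative assumptions, and a real response contains additional elements.

```xml
<!-- Hypothetical Get_Workers response fragment (wd = Workday namespace) -->
<wd:Worker>
  <wd:Worker_Data>
    <wd:Personal_Data>
      <wd:Identification_Data>
        <wd:Custom_ID>
          <wd:Custom_ID_Data>
            <!-- The XPATH selects this text() value -->
            <wd:ID>PAY-00123</wd:ID>
            <wd:ID_Type_Reference>
              <wd:ID wd:type="Custom_ID_Type_ID">External_Payroll_ID</wd:ID>
            </wd:ID_Type_Reference>
          </wd:Custom_ID_Data>
        </wd:Custom_ID>
      </wd:Identification_Data>
    </wd:Personal_Data>
  </wd:Worker_Data>
</wd:Worker>
```
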
## Next steps
active-directory Application Proxy Add On Premises Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-add-on-premises-application.md
# Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory
+Azure Active Directory (Azure AD) has an Application Proxy service that enables users to access on-premises applications by signing in with their Azure AD account. To learn more about Application Proxy, see [What is App Proxy?](what-is-application-proxy.md). This tutorial prepares your environment for use with Application Proxy. Once your environment is ready, you'll use the Azure portal to add an on-premises application to your Azure AD tenant.
:::image type="content" source="./media/application-proxy-add-on-premises-application/app-proxy-diagram.png" alt-text="Application Proxy Overview Diagram" lightbox="./media/application-proxy-add-on-premises-application/app-proxy-diagram.png":::
For high availability in your production environment, we recommend having more t
>
> ```
> Windows Registry Editor Version 5.00
>
> [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp]
> "EnableDefaultHTTP2"=dword:00000000
> ```
To install the connector:
1. Read the Terms of Service. When you're ready, select **Accept terms & Download**.
1. At the bottom of the window, select **Run** to install the connector. An install wizard opens.
1. Follow the instructions in the wizard to install the service. When you're prompted to register the connector with the Application Proxy for your Azure AD tenant, provide your application administrator credentials.
-
+ - For Internet Explorer (IE), if **IE Enhanced Security Configuration** is set to **On**, you may not see the registration screen. To get access, follow the instructions in the error message. Make sure that **Internet Explorer Enhanced Security Configuration** is set to **Off**.

### General remarks
If you choose to have more than one Windows server for your on-premises applicat
If you have installed connectors in different regions, you can optimize traffic by selecting the closest Application Proxy cloud service region to use with each connector group. For more information, see [Optimize traffic flow with Azure Active Directory Application Proxy](application-proxy-network-topology.md).
+If your organization uses proxy servers to connect to the internet, you need to configure them for Application Proxy. For more information, see [Work with existing on-premises proxy servers](./application-proxy-configure-connectors-with-proxy-servers.md).
For information about connectors, capacity planning, and how they stay up-to-date, see [Understand Azure AD Application Proxy connectors](application-proxy-connectors.md).
To confirm the connector installed and registered correctly:
## Add an on-premises app to Azure AD
+Now that you've prepared your environment and installed a connector, you're ready to add on-premises applications to Azure AD.
1. Sign in as an administrator in the [Azure portal](https://portal.azure.com/).
2. In the left navigation panel, select **Azure Active Directory**.
3. Select **Enterprise applications**, and then select **New application**.
-4. Select **Add an on-premises application** button which appears about halfway down the page in the **On-premises applications** section. Alternatively, you can select **Create your own application** at the top of the page and then select **Configure Application Proxy for secure remote access to an on-premise application**.
+4. Select the **Add an on-premises application** button, which appears about halfway down the page in the **On-premises applications** section. Alternatively, you can select **Create your own application** at the top of the page and then select **Configure Application Proxy for secure remote access to an on-premises application**.
5. In the **Add your own on-premises application** section, provide the following information about your application:

| Field | Description |
| :-- | :-- |
Now that you've prepared your environment and installed a connector, you're read
| **Pre Authentication** | How Application Proxy verifies users before giving them access to your application.<br><br>**Azure Active Directory** - Application Proxy redirects users to sign in with Azure AD, which authenticates their permissions for the directory and application. We recommend keeping this option as the default so that you can take advantage of Azure AD security features like Conditional Access and Multi-Factor Authentication. **Azure Active Directory** is required for monitoring the application with Microsoft Cloud Application Security.<br><br>**Passthrough** - Users don't have to authenticate against Azure AD to access the application. You can still set up authentication requirements on the backend. |
| **Connector Group** | Connectors process the remote access to your application, and connector groups help you organize connectors and apps by region, network, or purpose. If you don't have any connector groups created yet, your app is assigned to **Default**.<br><br>If your application uses WebSockets to connect, all connectors in the group must be version 1.5.612.0 or later. |
+6. If necessary, configure **Additional settings**. For most applications, you should keep these settings in their default states.
| Field | Description |
| :-- | :-- |
active-directory Application Proxy Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-custom-domain.md
To publish your app through Application Proxy with a custom domain:
![Add CNAME DNS entry](./media/application-proxy-configure-custom-domain/dns-info.png)
-10. Follow the instructions at [Manage DNS records and record sets by using the Azure portal](../../dns/dns-operations-recordsets-portal.md) to add a DNS record that redirects the new external URL to the *msappproxy.net* domain.
+10. Follow the instructions at [Manage DNS records and record sets by using the Azure portal](../../dns/dns-operations-recordsets-portal.md) to add a DNS record in Azure DNS that redirects the new external URL to the *msappproxy.net* domain. If you use a different DNS provider, contact the vendor for instructions. A sample CNAME record is shown after the note below.
> [!IMPORTANT]
> Ensure that you are properly using a CNAME record that points to the *msappproxy.net* domain. Do not point records to IP addresses or server DNS names since these are not static and may impact the resiliency of the service.
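
As an illustration of the record the note describes, here's a hypothetical CNAME entry in standard DNS zone-file notation. Both the custom domain `www.contoso.com` and the Application Proxy hostname `www-contoso.msappproxy.net` are made-up placeholders; use the external URL values shown for your application in the portal.

```
; Hypothetical CNAME record pointing the custom external URL at the msappproxy.net endpoint
www.contoso.com.    3600    IN    CNAME    www-contoso.msappproxy.net.
```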
active-directory Howto Authentication Passwordless Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-faqs.md
On a Windows Server 2016 or 2019 domain controller, check that the following pat
### Can I deploy the FIDO2 credential provider on an on-premises only device?
-No, this feature isn't supported for on-premise only device. The FIDO2 credential provider wouldn't show up.
+No, this feature isn't supported for on-premises only devices. The FIDO2 credential provider won't show up.
### FIDO2 security key sign-in isn't working for my Domain Admin or other high privilege accounts. Why?
active-directory Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/faqs.md
Yes, Permissions Management has various types of system report available that ca
For information about permissions usage reports, see [Generate and download the Permissions analytics report](product-permissions-analytics-reports.md).
-## Does Permissions Management integrate with third-party ITSM (Information Technology Security Management) tools?
+
+## Does Permissions Management integrate with third-party ITSM (Information Technology Service Management) tools?
Permissions Management integrates with ServiceNow.
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-aadsts-error-codes.md
Previously updated : 10/11/2021 Last updated : 06/13/2022
Here is a sample error response:
```json
{
  "error": "invalid_scope",
- "error_description": "AADSTS70011: The provided value for the input parameter 'scope' is not valid. The scope https://example.contoso.com/activity.read is not valid.\r\nTrace ID: 255d1aef-8c98-452f-ac51-23d051240864\r\nCorrelation ID: fb3d2015-bc17-4bb9-bb85-30c5cf1aaaa7\r\nTimestamp: 2016-01-09 02:02:12Z",
+ "error_description": "AADSTS70011: The provided value for the input parameter 'scope' isn't valid. The scope https://example.contoso.com/activity.read isn't valid.\r\nTrace ID: 255d1aef-8c98-452f-ac51-23d051240864\r\nCorrelation ID: fb3d2015-bc17-4bb9-bb85-30c5cf1aaaa7\r\nTimestamp: 2016-01-09 02:02:12Z",
"error_codes": [ 70011 ],
The `error` field has several possible values - review the protocol documentatio
| `invalid_grant` | Some of the authentication material (auth code, refresh token, access token, PKCE challenge) was invalid, unparseable, missing, or otherwise unusable | Try a new request to the `/authorize` endpoint to get a new authorization code. Consider reviewing and validating that app's use of the protocols. |
| `unauthorized_client` | The authenticated client isn't authorized to use this authorization grant type. | This usually occurs when the client application isn't registered in Azure AD or isn't added to the user's Azure AD tenant. The application can prompt the user with instruction for installing the application and adding it to Azure AD. |
| `invalid_client` | Client authentication failed. | The client credentials aren't valid. To fix, the application administrator updates the credentials. |
-| `unsupported_grant_type` | The authorization server does not support the authorization grant type. | Change the grant type in the request. This type of error should occur only during development and be detected during initial testing. |
-| `invalid_resource` | The target resource is invalid because it does not exist, Azure AD can't find it, or it's not correctly configured. | This indicates the resource, if it exists, has not been configured in the tenant. The application can prompt the user with instruction for installing the application and adding it to Azure AD. During development, this usually indicates an incorrectly setup test tenant or a typo in the name of the scope being requested. |
+| `unsupported_grant_type` | The authorization server doesn't support the authorization grant type. | Change the grant type in the request. This type of error should occur only during development and be detected during initial testing. |
+| `invalid_resource` | The target resource is invalid because it doesn't exist, Azure AD can't find it, or it's not correctly configured. | This indicates the resource, if it exists, has not been configured in the tenant. The application can prompt the user with instructions for installing the application and adding it to Azure AD. During development, this usually indicates an incorrectly set up test tenant or a typo in the name of the scope being requested. |
| `interaction_required` | The request requires user interaction. For example, an additional authentication step is required. | Retry the request with the same resource, interactively, so that the user can complete any challenges required. |
| `temporarily_unavailable` | The server is temporarily too busy to handle the request. | Retry the request. The client application might explain to the user that its response is delayed because of a temporary condition. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS20001 | WsFedSignInResponseError - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
| AADSTS20012 | WsFedMessageInvalid - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
| AADSTS20033 | FedMetadataInvalidTenantName - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
-| AADSTS28002 | Provided value for the input parameter scope '{scope}' is not valid when requesting an access token. Please specify a valid scope. |
-| AADSTS28003 | Provided value for the input parameter scope cannot be empty when requesting an access token using the provided authorization code. Please specify a valid scope.|
+| AADSTS28002 | Provided value for the input parameter scope '{scope}' isn't valid when requesting an access token. Please specify a valid scope. |
+| AADSTS28003 | Provided value for the input parameter scope can't be empty when requesting an access token using the provided authorization code. Please specify a valid scope.|
| AADSTS40008 | OAuth2IdPUnretryableServerError - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
| AADSTS40009 | OAuth2IdPRefreshTokenRedemptionUserError - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
| AADSTS40010 | OAuth2IdPRetryableServerError - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
| AADSTS40015 | OAuth2IdPAuthCodeRedemptionUserError - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
| AADSTS50000 | TokenIssuanceError - There's an issue with the sign-in service. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) to resolve this issue. |
-| AADSTS50001 | InvalidResource - The resource is disabled or does not exist. Check your app's code to ensure that you have specified the exact resource URL for the resource you are trying to access. |
+| AADSTS50001 | InvalidResource - The resource is disabled or doesn't exist. Check your app's code to ensure that you have specified the exact resource URL for the resource you're trying to access. |
| AADSTS50002 | NotAllowedTenant - Sign-in failed because of a restricted proxy access on the tenant. If it's your own tenant policy, you can change your restricted tenant settings to fix this issue. |
-| AADSTS500021 | Access to '{tenant}' tenant is denied. AADSTS500021 indicates that the tenant restriction feature is configured and that the user is trying to access a tenant that is not in the list of allowed tenants specified in the header `Restrict-Access-To-Tenant`. For more information, see [Use tenant restrictions to manage access to SaaS cloud applications](../manage-apps/tenant-restrictions.md).|
+| AADSTS500021 | Access to '{tenant}' tenant is denied. AADSTS500021 indicates that the tenant restriction feature is configured and that the user is trying to access a tenant that isn't in the list of allowed tenants specified in the header `Restrict-Access-To-Tenant`. For more information, see [Use tenant restrictions to manage access to SaaS cloud applications](../manage-apps/tenant-restrictions.md).|
| AADSTS50003 | MissingSigningKey - Sign-in failed because of a missing signing key or certificate. This might be because there was no signing key configured in the app. To learn more, see the troubleshooting article for error [AADSTS50003](/troubleshoot/azure/active-directory/error-code-aadsts50003-cert-or-key-not-configured). If you still see issues, contact the app owner or an app admin. |
| AADSTS50005 | DevicePolicyError - User tried to log in to a device from a platform that's currently not supported through Conditional Access policy. |
| AADSTS50006 | InvalidSignature - Signature verification failed because of an invalid signature. |
| AADSTS50007 | PartnerEncryptionCertificateMissing - The partner encryption certificate was not found for this app. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) with Microsoft to get this fixed. |
| AADSTS50008 | InvalidSamlToken - SAML assertion is missing or misconfigured in the token. Contact your federation provider. |
| AADSTS50010 | AudienceUriValidationFailed - Audience URI validation for the app failed since no token audiences were configured. |
-| AADSTS50011 | InvalidReplyTo - The reply address is missing, misconfigured, or does not match reply addresses configured for the app. As a resolution ensure to add this missing reply address to the Azure Active Directory application or have someone with the permissions to manage your application in Active Directory do this for you. To learn more, see the troubleshooting article for error [AADSTS50011](/troubleshoot/azure/active-directory/error-code-aadsts50011-reply-url-mismatch).|
-| AADSTS50012 | AuthenticationFailed - Authentication failed for one of the following reasons:<ul><li>The subject name of the signing certificate is not authorized</li><li>A matching trusted authority policy was not found for the authorized subject name</li><li>The certificate chain is not valid</li><li>The signing certificate is not valid</li><li>Policy is not configured on the tenant</li><li>Thumbprint of the signing certificate is not authorized</li><li>Client assertion contains an invalid signature</li></ul> |
-| AADSTS50013 | InvalidAssertion - Assertion is invalid because of various reasons - The token issuer doesn't match the api version within its valid time range -expired -malformed - Refresh token in the assertion is not a primary refresh token. |
-| AADSTS50014 | GuestUserInPendingState - The user's redemption is in a pending state. The guest user account is not fully created yet. |
+| AADSTS50011 | InvalidReplyTo - The reply address is missing, misconfigured, or doesn't match reply addresses configured for the app. As a resolution ensure to add this missing reply address to the Azure Active Directory application or have someone with the permissions to manage your application in Active Directory do this for you. To learn more, see the troubleshooting article for error [AADSTS50011](/troubleshoot/azure/active-directory/error-code-aadsts50011-reply-url-mismatch).|
+| AADSTS50012 | AuthenticationFailed - Authentication failed for one of the following reasons:<ul><li>The subject name of the signing certificate isn't authorized</li><li>A matching trusted authority policy was not found for the authorized subject name</li><li>The certificate chain isn't valid</li><li>The signing certificate isn't valid</li><li>Policy isn't configured on the tenant</li><li>Thumbprint of the signing certificate isn't authorized</li><li>Client assertion contains an invalid signature</li></ul> |
+| AADSTS50013 | InvalidAssertion - Assertion is invalid because of various reasons - The token issuer doesn't match the api version within its valid time range -expired -malformed - Refresh token in the assertion isn't a primary refresh token. |
+| AADSTS50014 | GuestUserInPendingState - The user's redemption is in a pending state. The guest user account isn't fully created yet. |
| AADSTS50015 | ViralUserLegalAgeConsentRequiredState - The user requires legal age group consent. |
| AADSTS50017 | CertificateValidationFailed - Certification validation failed for the following reasons:<ul><li>Cannot find issuing certificate in trusted certificates list</li><li>Unable to find expected CrlSegment</li><li>Cannot find issuing certificate in trusted certificates list</li><li>Delta CRL distribution point is configured without a corresponding CRL distribution point</li><li>Unable to retrieve valid CRL segments because of a timeout issue</li><li>Unable to download CRL</li></ul>Contact the tenant admin. |
| AADSTS50020 | UserUnauthorized - Users are unauthorized to call this endpoint. |
-| AADSTS500212 | NotAllowedByOutboundPolicyTenant - The user's administrator has set an outbound access policy that does not allow access to the resource tenant. |
-| AADSTS500213 | NotAllowedByInboundPolicyTenant - The resource tenant's cross-tenant access policy does not allow this user to access this tenant. |
-| AADSTS50027 | InvalidJwtToken - Invalid JWT token because of the following reasons:<ul><li>doesn't contain nonce claim, sub claim</li><li>subject identifier mismatch</li><li>duplicate claim in idToken claims</li><li>unexpected issuer</li><li>unexpected audience</li><li>not within its valid time range </li><li>token format is not proper</li><li>External ID token from issuer failed signature verification.</li></ul> |
+| AADSTS500212 | NotAllowedByOutboundPolicyTenant - The user's administrator has set an outbound access policy that doesn't allow access to the resource tenant. |
+| AADSTS500213 | NotAllowedByInboundPolicyTenant - The resource tenant's cross-tenant access policy doesn't allow this user to access this tenant. |
+| AADSTS50027 | InvalidJwtToken - Invalid JWT token because of the following reasons:<ul><li>doesn't contain nonce claim, sub claim</li><li>subject identifier mismatch</li><li>duplicate claim in idToken claims</li><li>unexpected issuer</li><li>unexpected audience</li><li>not within its valid time range </li><li>token format isn't proper</li><li>External ID token from issuer failed signature verification.</li></ul> |
| AADSTS50029 | Invalid URI - domain name contains invalid characters. Contact the tenant admin. |
| AADSTS50032 | WeakRsaKey - Indicates the erroneous user attempt to use a weak RSA key. |
| AADSTS50033 | RetryableError - Indicates a transient error not related to the database operations. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS50050 | MalformedDiscoveryRequest - The request is malformed. |
| AADSTS50053 | This error can result from two different reasons: <br><ul><li>IdsLocked - The account is locked because the user tried to sign in too many times with an incorrect user ID or password. The user is blocked due to repeated sign-in attempts. See [Remediate risks and unblock users](../identity-protection/howto-identity-protection-remediate-unblock.md).</li><li>Or, sign-in was blocked because it came from an IP address with malicious activity.</li></ul> <br>To determine which failure reason caused this error, sign in to the [Azure portal](https://portal.azure.com). Navigate to your Azure AD tenant and then **Monitoring** -> **Sign-ins**. Find the failed user sign-in with **Sign-in error code** 50053 and check the **Failure reason**. |
| AADSTS50055 | InvalidPasswordExpiredPassword - The password is expired. The user's password is expired, and therefore their login or session was ended. They will be offered the opportunity to reset it, or may ask an admin to reset it via [Reset a user's password using Azure Active Directory](../fundamentals/active-directory-users-reset-password-azure-portal.md). |
-| AADSTS50056 | Invalid or null password: password does not exist in the directory for this user. The user should be asked to enter their password again. |
+| AADSTS50056 | Invalid or null password: password doesn't exist in the directory for this user. The user should be asked to enter their password again. |
| AADSTS50057 | UserDisabled - The user account is disabled. The user object in Active Directory backing this account has been disabled. An admin can re-enable this account [through PowerShell](/powershell/module/activedirectory/enable-adaccount) |
-| AADSTS50058 | UserInformationNotProvided - Session information is not sufficient for single-sign-on. This means that a user is not signed in. This is a common error that's expected when a user is unauthenticated and has not yet signed in.</br>If this error is encountered in an SSO context where the user has previously signed in, this means that the SSO session was either not found or invalid.</br>This error may be returned to the application if prompt=none is specified. |
+| AADSTS50058 | UserInformationNotProvided - Session information isn't sufficient for single-sign-on. This means that a user isn't signed in. This is a common error that's expected when a user is unauthenticated and has not yet signed in.</br>If this error is encountered in an SSO context where the user has previously signed in, this means that the SSO session was either not found or invalid.</br>This error may be returned to the application if prompt=none is specified. |
| AADSTS50059 | MissingTenantRealmAndNoUserInformationProvided - Tenant-identifying information was not found in either the request or implied by any provided credentials. The user can contact the tenant admin to help resolve the issue. | | AADSTS50061 | SignoutInvalidRequest - Unable to complete signout. The request was invalid. | | AADSTS50064 | CredentialAuthenticationError - Credential validation on username or password has failed. |
-| AADSTS50068 | SignoutInitiatorNotParticipant - Sign out has failed. The app that initiated sign out is not a participant in the current session. |
+| AADSTS50068 | SignoutInitiatorNotParticipant - Sign out has failed. The app that initiated sign out isn't a participant in the current session. |
| AADSTS50070 | SignoutUnknownSessionIdentifier - Sign out has failed. The sign out request specified a name identifier that didn't match the existing session(s). |
| AADSTS50071 | SignoutMessageExpired - The logout request has expired. |
| AADSTS50072 | UserStrongAuthEnrollmentRequiredInterrupt - User needs to enroll for second factor authentication (interactive). |
The `error` field has several possible values - review the protocol documentatio
| AADSTS50089 | Authentication failed due to flow token expired. Expected - auth codes, refresh tokens, and sessions expire over time or are revoked by the user or an admin. The app will request a new login from the user. |
| AADSTS50097 | DeviceAuthenticationRequired - Device authentication is required. |
| AADSTS50099 | PKeyAuthInvalidJwtUnauthorized - The JWT signature is invalid. |
-| AADSTS50105 | EntitlementGrantsNotFound - The signed in user is not assigned to a role for the signed in app. Assign the user to the app. To learn more, see the troubleshooting article for error [AADSTS50105](/troubleshoot/azure/active-directory/error-code-aadsts50105-user-not-assigned-role). |
-| AADSTS50107 | InvalidRealmUri - The requested federation realm object does not exist. Contact the tenant admin. |
+| AADSTS50105 | EntitlementGrantsNotFound - The signed in user isn't assigned to a role for the signed in app. Assign the user to the app. To learn more, see the troubleshooting article for error [AADSTS50105](/troubleshoot/azure/active-directory/error-code-aadsts50105-user-not-assigned-role). |
+| AADSTS50107 | InvalidRealmUri - The requested federation realm object doesn't exist. Contact the tenant admin. |
| AADSTS50120 | ThresholdJwtInvalidJwtFormat - Issue with JWT header. Contact the tenant admin. |
| AADSTS50124 | ClaimsTransformationInvalidInputParameter - Claims Transformation contains invalid input parameter. Contact the tenant admin to update the policy. |
| AADSTS501241 | Mandatory Input '{paramName}' missing from transformation id '{transformId}'. This error is returned while Azure AD is trying to build a SAML response to the application. NameID claim or NameIdentifier is mandatory in SAML response and if Azure AD failed to get source attribute for NameID claim, it will return this error. As a resolution, ensure you add claim rules in Azure Portal > Azure Active Directory > Enterprise Applications > Select your application > Single Sign-On > User Attributes & Claims > Unique User Identifier (Name ID). |
The `error` field has several possible values - review the protocol documentatio
| AADSTS50128 | Invalid domain name - No tenant-identifying information found in either the request or implied by any provided credentials. |
| AADSTS50129 | DeviceIsNotWorkplaceJoined - Workplace join is required to register the device. |
| AADSTS50131 | ConditionalAccessFailed - Indicates various Conditional Access errors such as bad Windows device state, request blocked due to suspicious activity, access policy, or security policy decisions. |
-| AADSTS50132 | SsoArtifactInvalidOrExpired - The session is not valid due to password expiration or recent password change. |
-| AADSTS50133 | SsoArtifactRevoked - The session is not valid due to password expiration or recent password change. |
+| AADSTS50132 | SsoArtifactInvalidOrExpired - The session isn't valid due to password expiration or recent password change. |
+| AADSTS50133 | SsoArtifactRevoked - The session isn't valid due to password expiration or recent password change. |
| AADSTS50134 | DeviceFlowAuthorizeWrongDatacenter - Wrong data center. To authorize a request that was initiated by an app in the OAuth 2.0 device flow, the authorizing party must be in the same data center where the original request resides. |
| AADSTS50135 | PasswordChangeCompromisedPassword - Password change is required due to account risk. |
| AADSTS50136 | RedirectMsaSessionToApp - Single MSA session detected. |
| AADSTS50139 | SessionMissingMsaOAuth2RefreshToken - The session is invalid due to a missing external refresh token. |
| AADSTS50140 | KmsiInterrupt - This error occurred due to "Keep me signed in" interrupt when the user was signing-in. This is an expected part of the login flow, where a user is asked if they want to remain signed into their current browser to make further logins easier. For more information, see [The new Azure AD sign-in and "Keep me signed in" experiences rolling out now!](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/the-new-azure-ad-sign-in-and-keep-me-signed-in-experiences/m-p/128267). You can [open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) with Correlation ID, Request ID, and Error code to get more details. |
-| AADSTS50143 | Session mismatch - Session is invalid because user tenant does not match the domain hint due to different resource. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) with Correlation ID, Request ID, and Error code to get more details. |
+| AADSTS50143 | Session mismatch - Session is invalid because user tenant doesn't match the domain hint due to different resource. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) with Correlation ID, Request ID, and Error code to get more details. |
| AADSTS50144 | InvalidPasswordExpiredOnPremPassword - User's Active Directory password has expired. Generate a new password for the user or have the user use the self-service reset tool to reset their password. |
-| AADSTS50146 | MissingCustomSigningKey - This app is required to be configured with an app-specific signing key. It is either not configured with one, or the key has expired or is not yet valid. |
-| AADSTS50147 | MissingCodeChallenge - The size of the code challenge parameter is not valid. |
-| AADSTS501481 | The Code_Verifier does not match the code_challenge supplied in the authorization request.|
+| AADSTS50146 | MissingCustomSigningKey - This app is required to be configured with an app-specific signing key. It is either not configured with one, or the key has expired or isn't yet valid. |
+| AADSTS50147 | MissingCodeChallenge - The size of the code challenge parameter isn't valid. |
+| AADSTS501481 | The Code_Verifier doesn't match the code_challenge supplied in the authorization request.|
| AADSTS50155 | DeviceAuthenticationFailed - Device authentication failed for this user. |
| AADSTS50158 | ExternalSecurityChallenge - External security challenge was not satisfied. |
-| AADSTS50161 | InvalidExternalSecurityChallengeConfiguration - Claims sent by external provider is not enough or Missing claim requested to external provider. |
+| AADSTS50161 | InvalidExternalSecurityChallengeConfiguration - Claims sent by the external provider aren't sufficient, or a claim requested from the external provider is missing. |
| AADSTS50166 | ExternalClaimsProviderThrottled - Failed to send the request to the claims provider. |
| AADSTS50168 | ChromeBrowserSsoInterruptRequired - The client is capable of obtaining an SSO token through the Windows 10 Accounts extension, but the token was not found in the request or the supplied token was expired. |
-| AADSTS50169 | InvalidRequestBadRealm - The realm is not a configured realm of the current service namespace. |
+| AADSTS50169 | InvalidRequestBadRealm - The realm isn't a configured realm of the current service namespace. |
| AADSTS50170 | MissingExternalClaimsProviderMapping - The external controls mapping is missing. |
| AADSTS50173 | FreshTokenNeeded - The provided grant has expired due to it being revoked, and a fresh auth token is needed. Either an admin or a user revoked the tokens for this user, causing subsequent token refreshes to fail and require reauthentication. Have the user sign in again. |
-| AADSTS50177 | ExternalChallengeNotSupportedForPassthroughUsers - External challenge is not supported for passthrough users. |
-| AADSTS50178 | SessionControlNotSupportedForPassthroughUsers - Session control is not supported for passthrough users. |
+| AADSTS50177 | ExternalChallengeNotSupportedForPassthroughUsers - External challenge isn't supported for passthrough users. |
+| AADSTS50178 | SessionControlNotSupportedForPassthroughUsers - Session control isn't supported for passthrough users. |
| AADSTS50180 | WindowsIntegratedAuthMissing - Integrated Windows authentication is needed. Enable the tenant for Seamless SSO. |
| AADSTS50187 | DeviceInformationNotProvided - The service failed to perform device authentication. |
-| AADSTS50194 | Application '{appId}'({appName}) is not configured as a multi-tenant application. Usage of the /common endpoint is not supported for such applications created after '{time}'. Use a tenant-specific endpoint or configure the application to be multi-tenant. |
+| AADSTS50194 | Application '{appId}'({appName}) isn't configured as a multi-tenant application. Usage of the /common endpoint isn't supported for such applications created after '{time}'. Use a tenant-specific endpoint or configure the application to be multi-tenant. |
| AADSTS50196 | LoopDetected - A client loop has been detected. Check the app's logic to ensure that token caching is implemented, and that error conditions are handled correctly. The app has made too many of the same request in too short a period, indicating that it is in a faulty state or is abusively requesting tokens. |
| AADSTS50197 | ConflictingIdentities - The user could not be found. Try signing in again. |
| AADSTS50199 | CmsiInterrupt - For security reasons, user confirmation is required for this request. Because this is an "interaction_required" error, the client should do interactive auth. This occurs because a system webview has been used to request a token for a native application - the user must be prompted to ask if this was actually the app they meant to sign into. To avoid this prompt, the redirect URI should be part of the following safe list: <br />http://<br />https://<br />msauth://(iOS only)<br />msauthv2://(iOS only)<br />chrome-extension:// (desktop Chrome browser only) |
| AADSTS51000 | RequiredFeatureNotEnabled - The feature is disabled. |
| AADSTS51001 | DomainHintMustbePresent - Domain hint must be present with on-premises security identifier or on-premises UPN. |
+| AADSTS1000104| XCB2BResourceCloudNotAllowedOnIdentityTenant - Resource cloud {resourceCloud} isn't allowed on identity tenant {identityTenant}. {resourceCloud} - cloud instance which owns the resource. {identityTenant} - is the tenant where signing-in identity is originated from. |
| AADSTS51004 | UserAccountNotInDirectory - The user account doesn't exist in the directory. |
| AADSTS51005 | TemporaryRedirect - Equivalent to HTTP status 307, which indicates that the requested information is located at the URI specified in the location header. When you receive this status, follow the location header associated with the response. When the original request method was POST, the redirected request will also use the POST method. |
| AADSTS51006 | ForceReauthDueToInsufficientAuth - Integrated Windows authentication is needed. User logged in using a session token that is missing the integrated Windows authentication claim. Request the user to log in again. |
| AADSTS52004 | DelegationDoesNotExistForLinkedIn - The user has not provided consent for access to LinkedIn resources. |
-| AADSTS53000 | DeviceNotCompliant - Conditional Access policy requires a compliant device, and the device is not compliant. The user must enroll their device with an approved MDM provider like Intune. |
-| AADSTS53001 | DeviceNotDomainJoined - Conditional Access policy requires a domain joined device, and the device is not domain joined. Have the user use a domain joined device. |
-| AADSTS53002 | ApplicationUsedIsNotAnApprovedApp - The app used is not an approved app for Conditional Access. User needs to use one of the apps from the list of approved apps to use in order to get access. |
+| AADSTS53000 | DeviceNotCompliant - Conditional Access policy requires a compliant device, and the device isn't compliant. The user must enroll their device with an approved MDM provider like Intune. |
+| AADSTS53001 | DeviceNotDomainJoined - Conditional Access policy requires a domain joined device, and the device isn't domain joined. Have the user use a domain joined device. |
+| AADSTS53002 | ApplicationUsedIsNotAnApprovedApp - The app used isn't an approved app for Conditional Access. User needs to use one of the apps from the list of approved apps to use in order to get access. |
| AADSTS53003 | BlockedByConditionalAccess - Access has been blocked by Conditional Access policies. The access policy does not allow token issuance. |
| AADSTS53004 | ProofUpBlockedDueToRisk - User needs to complete the multi-factor authentication registration process before accessing this content. User should register for multi-factor authentication. |
+| AADSTS53010 | ProofUpBlockedDueToSecurityInfoAcr - Cannot configure multi-factor authentication methods because the organization requires this information to be set from specific locations or devices. |
| AADSTS53011 | User blocked due to risk on home tenant. |
| AADSTS54000 | MinorUserBlockedLegalAgeGroupRule |
| AADSTS54005 | OAuth2 Authorization code was already redeemed, please retry with a new valid code or use an existing refresh token. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS65004 | UserDeclinedConsent - User declined to consent to access the app. Have the user retry the sign-in and consent to the app |
| AADSTS65005 | MisconfiguredApplication - The app required resource access list does not contain apps discoverable by the resource or The client app has requested access to resource, which was not specified in its required resource access list or Graph service returned bad request or resource not found. If the app supports SAML, you may have configured the app with the wrong Identifier (Entity). To learn more, see the troubleshooting article for error [AADSTS650056](/troubleshoot/azure/active-directory/error-code-aadsts650056-misconfigured-app). |
| AADSTS650052 | The app needs access to a service `(\"{name}\")` that your organization `\"{organization}\"` has not subscribed to or enabled. Contact your IT Admin to review the configuration of your service subscriptions. |
-| AADSTS650054 | The application asked for permissions to access a resource that has been removed or is no longer available. Make sure that all resources the app is calling are present in the tenant you are operating in. |
+| AADSTS650054 | The application asked for permissions to access a resource that has been removed or is no longer available. Make sure that all resources the app is calling are present in the tenant you're operating in. |
| AADSTS650056 | Misconfigured application. This could be due to one of the following: the client has not listed any permissions for '{name}' in the requested permissions in the client's application registration. Or, the admin has not consented in the tenant. Or, check the application identifier in the request to ensure it matches the configured client application identifier. Or, check the certificate in the request to ensure it's valid. Please contact your admin to fix the configuration or consent on behalf of the tenant. Client app ID: {id}. Please contact your admin to fix the configuration or consent on behalf of the tenant.|
-| AADSTS650057 | Invalid resource. The client has requested access to a resource which is not listed in the requested permissions in the client's application registration. Client app ID: {appId}({appName}). Resource value from request: {resource}. Resource app ID: {resourceAppId}. List of valid resources from app registration: {regList}. |
+| AADSTS650057 | Invalid resource. The client has requested access to a resource which isn't listed in the requested permissions in the client's application registration. Client app ID: {appId}({appName}). Resource value from request: {resource}. Resource app ID: {resourceAppId}. List of valid resources from app registration: {regList}. |
| AADSTS67003 | ActorNotValidServiceIdentity |
-| AADSTS70000 | InvalidGrant - Authentication failed. The refresh token is not valid. Error may be due to the following reasons:<ul><li>Token binding header is empty</li><li>Token binding hash does not match</li></ul> |
+| AADSTS70000 | InvalidGrant - Authentication failed. The refresh token isn't valid. Error may be due to the following reasons:<ul><li>Token binding header is empty</li><li>Token binding hash does not match</li></ul> |
| AADSTS70001 | UnauthorizedClient - The application is disabled. To learn more, see the troubleshooting article for error [AADSTS70001](/troubleshoot/azure/active-directory/error-code-aadsts70001-app-not-found-in-directory). |
| AADSTS70002 | InvalidClient - Error validating the credentials. The specified client_secret does not match the expected value for this client. Correct the client_secret and try again. For more info, see [Use the authorization code to request an access token](v2-oauth2-auth-code-flow.md#redeem-a-code-for-an-access-token). |
| AADSTS70003 | UnsupportedGrantType - The app returned an unsupported grant type. |
-| AADSTS700030 | Invalid certificate - subject name in certificate is not authorized. SubjectNames/SubjectAlternativeNames (up to 10) in token certificate are: {certificateSubjects}. |
+| AADSTS700030 | Invalid certificate - subject name in certificate isn't authorized. SubjectNames/SubjectAlternativeNames (up to 10) in token certificate are: {certificateSubjects}. |
| AADSTS70004 | InvalidRedirectUri - The app returned an invalid redirect URI. The redirect address specified by the client does not match any configured addresses or any addresses on the OIDC approve list. |
-| AADSTS70005 | UnsupportedResponseType - The app returned an unsupported response type due to the following reasons:<ul><li>response type 'token' is not enabled for the app</li><li>response type 'id_token' requires the 'OpenID' scope -contains an unsupported OAuth parameter value in the encoded wctx</li></ul> |
-| AADSTS700054 | Response_type 'id_token' is not enabled for the application. The application requested an ID token from the authorization endpoint, but did not have ID token implicit grant enabled. Go to Azure Portal > Azure Active Directory > App registrations > Select your application > Authentication > Under 'Implicit grant and hybrid flows', make sure 'ID tokens' is selected.|
+| AADSTS70005 | UnsupportedResponseType - The app returned an unsupported response type due to the following reasons:<ul><li>response type 'token' isn't enabled for the app</li><li>response type 'id_token' requires the 'OpenID' scope -contains an unsupported OAuth parameter value in the encoded wctx</li></ul> |
+| AADSTS700054 | Response_type 'id_token' isn't enabled for the application. The application requested an ID token from the authorization endpoint, but did not have ID token implicit grant enabled. Go to Azure Portal > Azure Active Directory > App registrations > Select your application > Authentication > Under 'Implicit grant and hybrid flows', make sure 'ID tokens' is selected.|
| AADSTS70007 | UnsupportedResponseMode - The app returned an unsupported value of `response_mode` when requesting a token. |
| AADSTS70008 | ExpiredOrRevokedGrant - The refresh token has expired due to inactivity. The token was issued on XXX and was inactive for a certain amount of time. |
-| AADSTS700084 | The refresh token was issued to a single page app (SPA), and therefore has a fixed, limited lifetime of {time}, which cannot be extended. It is now expired and a new sign in request must be sent by the SPA to the sign in page. The token was issued on {issueDate}.|
+| AADSTS700084 | The refresh token was issued to a single page app (SPA), and therefore has a fixed, limited lifetime of {time}, which can't be extended. It is now expired and a new sign in request must be sent by the SPA to the sign in page. The token was issued on {issueDate}.|
| AADSTS70011 | InvalidScope - The scope requested by the app is invalid. |
| AADSTS70012 | MsaServerError - A server error occurred while authenticating an MSA (consumer) user. Try again. If it continues to fail, [open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) |
| AADSTS70016 | AuthorizationPending - OAuth 2.0 device flow error. Authorization is pending. The device will retry polling the request. |
-| AADSTS70018 | BadVerificationCode - Invalid verification code due to User typing in wrong user code for device code flow. Authorization is not approved. |
+| AADSTS70018 | BadVerificationCode - Invalid verification code due to User typing in wrong user code for device code flow. Authorization isn't approved. |
| AADSTS70019 | CodeExpired - Verification code expired. Have the user retry the sign-in. |
| AADSTS70043 | The refresh token has expired or is invalid due to sign-in frequency checks by conditional access. The token was issued on {issueDate} and the maximum allowed lifetime for this request is {time}. |
| AADSTS75001 | BindingSerializationError - An error occurred during SAML message binding. |
-| AADSTS75003 | UnsupportedBindingError - The app returned an error related to unsupported binding (SAML protocol response cannot be sent via bindings other than HTTP POST). |
+| AADSTS75003 | UnsupportedBindingError - The app returned an error related to unsupported binding (SAML protocol response can't be sent via bindings other than HTTP POST). |
| AADSTS75005 | Saml2MessageInvalid - Azure AD doesn't support the SAML request sent by the app for SSO. To learn more, see the troubleshooting article for error [AADSTS75005](/troubleshoot/azure/active-directory/error-code-aadsts75005-not-a-valid-saml-request). |
| AADSTS7500514 | A supported type of SAML response was not found. The supported response types are 'Response' (in XML namespace 'urn:oasis:names:tc:SAML:2.0:protocol') or 'Assertion' (in XML namespace 'urn:oasis:names:tc:SAML:2.0:assertion'). Application error - the developer will handle this error. |
| AADSTS750054 | SAMLRequest or SAMLResponse must be present as query string parameters in HTTP request for SAML Redirect binding. To learn more, see the troubleshooting article for error [AADSTS750054](/troubleshoot/azure/active-directory/error-code-aadsts750054-saml-request-not-present). |
The `error` field has several possible values - review the protocol documentatio
| AADSTS80012 | OnPremisePasswordValidationAccountLogonInvalidHours - The users attempted to log on outside of the allowed hours (this is specified in AD). |
| AADSTS80013 | OnPremisePasswordValidationTimeSkew - The authentication attempt could not be completed due to time skew between the machine running the authentication agent and AD. Fix time sync issues. |
| AADSTS81004 | DesktopSsoIdentityInTicketIsNotAuthenticated - Kerberos authentication attempt failed. |
-| AADSTS81005 | DesktopSsoAuthenticationPackageNotSupported - The authentication package is not supported. |
+| AADSTS81005 | DesktopSsoAuthenticationPackageNotSupported - The authentication package isn't supported. |
| AADSTS81006 | DesktopSsoNoAuthorizationHeader - No authorization header was found. |
-| AADSTS81007 | DesktopSsoTenantIsNotOptIn - The tenant is not enabled for Seamless SSO. |
+| AADSTS81007 | DesktopSsoTenantIsNotOptIn - The tenant isn't enabled for Seamless SSO. |
| AADSTS81009 | DesktopSsoAuthorizationHeaderValueWithBadFormat - Unable to validate user's Kerberos ticket. |
| AADSTS81010 | DesktopSsoAuthTokenInvalid - Seamless SSO failed because the user's Kerberos ticket has expired or is invalid. |
| AADSTS81011 | DesktopSsoLookupUserBySidFailed - Unable to find user object based on information in the user's Kerberos ticket. |
| AADSTS81012 | DesktopSsoMismatchBetweenTokenUpnAndChosenUpn - The user trying to sign in to Azure AD is different from the user signed into the device. |
| AADSTS90002 | InvalidTenantName - The tenant name wasn't found in the data store. Check to make sure you have the correct tenant ID. |
-| AADSTS90004 | InvalidRequestFormat - The request is not properly formatted. |
-| AADSTS90005 | InvalidRequestWithMultipleRequirements - Unable to complete the request. The request is not valid because the identifier and login hint can't be used together. |
+| AADSTS90004 | InvalidRequestFormat - The request isn't properly formatted. |
+| AADSTS90005 | InvalidRequestWithMultipleRequirements - Unable to complete the request. The request isn't valid because the identifier and login hint can't be used together. |
| AADSTS90006 | ExternalServerRetryableError - The service is temporarily unavailable. |
| AADSTS90007 | InvalidSessionId - Bad request. The passed session ID can't be parsed. |
| AADSTS90008 | TokenForItselfRequiresGraphPermission - The user or administrator hasn't consented to use the application. At the minimum, the application requires access to Azure AD by specifying the sign-in and read user profile permission. |
| AADSTS90009 | TokenForItselfMissingIdenticalAppIdentifier - The application is requesting a token for itself. This scenario is supported only if the resource that's specified is using the GUID-based application ID. |
| AADSTS90010 | NotSupported - Unable to create the algorithm. |
-| AADSTS9001023 |The grant type is not supported over the /common or /consumers endpoints. Please use the /organizations or tenant-specific endpoint.|
+| AADSTS9001023 |The grant type isn't supported over the /common or /consumers endpoints. Please use the /organizations or tenant-specific endpoint.|
| AADSTS90012 | RequestTimeout - The request has timed out. |
-| AADSTS90013 | InvalidUserInput - The input from the user is not valid. |
-| AADSTS90014 | MissingRequiredField - This error code may appear in various cases when an expected field is not present in the credential. |
+| AADSTS90013 | InvalidUserInput - The input from the user isn't valid. |
+| AADSTS90014 | MissingRequiredField - This error code may appear in various cases when an expected field isn't present in the credential. |
| AADSTS900144 | The request body must contain the following parameter: '{name}'. Developer error - the app is attempting to sign in without the necessary or correct authentication parameters. |
| AADSTS90015 | QueryStringTooLong - The query string is too long. |
| AADSTS90016 | MissingRequiredClaim - The access token isn't valid. The required claim is missing. |
| AADSTS90019 | MissingTenantRealm - Azure AD was unable to determine the tenant identifier from the request. |
| AADSTS90020 | The SAML 1.1 Assertion is missing ImmutableID of the user. Developer error - the app is attempting to sign in without the necessary or correct authentication parameters. |
-| AADSTS90022 | AuthenticatedInvalidPrincipalNameFormat - The principal name format is not valid, or does not meet the expected `name[/host][@realm]` format. The principal name is required, host and realm are optional and may be set to null. |
-| AADSTS90023 | InvalidRequest - The authentication service request is not valid. |
+| AADSTS90022 | AuthenticatedInvalidPrincipalNameFormat - The principal name format isn't valid, or doesn't meet the expected `name[/host][@realm]` format. The principal name is required, host and realm are optional and may be set to null. |
+| AADSTS90023 | InvalidRequest - The authentication service request isn't valid. |
| AADSTS9002313 | InvalidRequest - Request is malformed or invalid. Something was wrong with the request to a certain endpoint. To troubleshoot, capture a Fiddler trace of the failing request and check whether it's properly formatted. |
| AADSTS9002332 | Application '{principalId}'({principalName}) is configured for use by Azure Active Directory users only. Please do not use the /consumers endpoint to serve this request. |
| AADSTS90024 | RequestBudgetExceededError - A transient error has occurred. Try again. |
| AADSTS90027 | We are unable to issue tokens from this API version on the MSA tenant. Please contact the application vendor as they need to use version 2.0 of the protocol to support this. |
-| AADSTS90033 | MsodsServiceUnavailable - The Microsoft Online Directory Service (MSODS) is not available. |
+| AADSTS90033 | MsodsServiceUnavailable - The Microsoft Online Directory Service (MSODS) isn't available. |
| AADSTS90036 | MsodsServiceUnretryableFailure - An unexpected, non-retryable error from the WCF service hosted by MSODS has occurred. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) to get more details on the error. |
| AADSTS90038 | NationalCloudTenantRedirection - The specified tenant 'Y' belongs to the National Cloud 'X'. Current cloud instance 'Z' does not federate with X. A cloud redirect error is returned. |
| AADSTS90043 | NationalCloudAuthCodeRedirection - The feature is disabled. |
-| AADSTS900432 | Confidential Client is not supported in Cross Cloud request.|
+| AADSTS900432 | Confidential Client isn't supported in Cross Cloud request.|
| AADSTS90051 | InvalidNationalCloudId - The national cloud identifier contains an invalid cloud identifier. |
| AADSTS90055 | TenantThrottlingError - There are too many incoming requests. This exception is thrown for blocked tenants. |
| AADSTS90056 | BadResourceRequest - To redeem the code for an access token, the app should send a POST request to the `/token` endpoint. Also, prior to this, you should provide an authorization code and send it in the POST request to the `/token` endpoint. Refer to this article for an overview of OAuth 2.0 authorization code flow: [../azuread-dev/v1-protocols-oauth-code.md](../azuread-dev/v1-protocols-oauth-code.md). Direct the user to the `/authorize` endpoint, which will return an authorization_code. By posting a request to the `/token` endpoint, the user gets the access token. Log in to the Azure portal, and check **App registrations > Endpoints** to confirm that the two endpoints were configured correctly. |
| AADSTS90072 | PassThroughUserMfaError - The external account that the user signs in with doesn't exist on the tenant that they signed into; so the user can't satisfy the MFA requirements for the tenant. This error also might occur if the users are synced, but there is a mismatch in the ImmutableID (sourceAnchor) attribute between Active Directory and Azure AD. The account must be added as an external user in the tenant first. Sign out and sign in with a different Azure AD user account. |
-| AADSTS90081 | OrgIdWsFederationMessageInvalid - An error occurred when the service tried to process a WS-Federation message. The message is not valid. |
+| AADSTS90081 | OrgIdWsFederationMessageInvalid - An error occurred when the service tried to process a WS-Federation message. The message isn't valid. |
| AADSTS90082 | OrgIdWsFederationNotSupported - The selected authentication policy for the request isn't currently supported. |
| AADSTS90084 | OrgIdWsFederationGuestNotAllowed - Guest accounts aren't allowed for this site. |
| AADSTS90085 | OrgIdWsFederationSltRedemptionFailed - The service is unable to issue a token because the company object hasn't been provisioned yet. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS90092 | GraphNonRetryableError |
| AADSTS90093 | GraphUserUnauthorized - Graph returned with a forbidden error code for the request. |
| AADSTS90094 | AdminConsentRequired - Administrator consent is required. |
-| AADSTS900382 | Confidential Client is not supported in Cross Cloud request. |
+| AADSTS900382 | Confidential Client isn't supported in Cross Cloud request. |
| AADSTS90095 | AdminConsentRequiredRequestAccess - In the Admin Consent Workflow experience, an interrupt that appears when the user is told they need to ask the admin for consent. |
| AADSTS90099 | The application '{appId}' ({appName}) has not been authorized in the tenant '{tenant}'. Applications must be authorized to access the customer tenant before partner delegated administrators can use them. Provide pre-consent or execute the appropriate Partner Center API to authorize the application. |
| AADSTS900971 | No reply address provided. |
| AADSTS90100 | InvalidRequestParameter - The parameter is empty or not valid. |
-| AADSTS901002 | AADSTS901002: The 'resource' request parameter is not supported. |
+| AADSTS901002 | AADSTS901002: The 'resource' request parameter isn't supported. |
| AADSTS90101 | InvalidEmailAddress - The supplied data isn't a valid email address. The email address must be in the format `someone@example.com`. |
| AADSTS90102 | InvalidUriParameter - The value must be a valid absolute URI. |
-| AADSTS90107 | InvalidXml - The request is not valid. Make sure your data doesn't have invalid characters.|
+| AADSTS90107 | InvalidXml - The request isn't valid. Make sure your data doesn't have invalid characters.|
| AADSTS90114 | InvalidExpiryDate - The bulk token expiration timestamp will cause an expired token to be issued. |
| AADSTS90117 | InvalidRequestInput |
| AADSTS90119 | InvalidUserCode - The user code is null or empty. |
| AADSTS90120 | InvalidDeviceFlowRequest - The request was already authorized or declined. |
| AADSTS90121 | InvalidEmptyRequest - Invalid empty request. |
| AADSTS90123 | IdentityProviderAccessDenied - The token can't be issued because the identity or claim issuance provider denied the request. |
-| AADSTS90124 | V1ResourceV2GlobalEndpointNotSupported - The resource is not supported over the `/common` or `/consumers` endpoints. Use the `/organizations` or tenant-specific endpoint instead. |
+| AADSTS90124 | V1ResourceV2GlobalEndpointNotSupported - The resource isn't supported over the `/common` or `/consumers` endpoints. Use the `/organizations` or tenant-specific endpoint instead. |
| AADSTS90125 | DebugModeEnrollTenantNotFound - The user isn't in the system. Make sure you entered the user name correctly. |
-| AADSTS90126 | DebugModeEnrollTenantNotInferred - The user type is not supported on this endpoint. The system can't infer the user's tenant from the user name. |
-| AADSTS90130 | NonConvergedAppV2GlobalEndpointNotSupported - The application is not supported over the `/common` or `/consumers` endpoints. Use the `/organizations` or tenant-specific endpoint instead. |
+| AADSTS90126 | DebugModeEnrollTenantNotInferred - The user type isn't supported on this endpoint. The system can't infer the user's tenant from the user name. |
+| AADSTS90130 | NonConvergedAppV2GlobalEndpointNotSupported - The application isn't supported over the `/common` or `/consumers` endpoints. Use the `/organizations` or tenant-specific endpoint instead. |
| AADSTS120000 | PasswordChangeIncorrectCurrentPassword |
| AADSTS120002 | PasswordChangeInvalidNewPasswordWeak |
| AADSTS120003 | PasswordChangeInvalidNewPasswordContainsMemberName |
The `error` field has several possible values - review the protocol documentatio
| AADSTS130008 | NgcDeviceIsNotFound - The device referenced by the NGC key wasn't found. |
| AADSTS135010 | KeyNotFound |
| AADSTS135011 | Device used during the authentication is disabled. |
-| AADSTS140000 | InvalidRequestNonce - Request nonce is not provided. |
-| AADSTS140001 | InvalidSessionKey - The session key is not valid.|
+| AADSTS140000 | InvalidRequestNonce - Request nonce isn't provided. |
+| AADSTS140001 | InvalidSessionKey - The session key isn't valid.|
| AADSTS165004 | Actual message content is runtime specific. Please see returned exception message for details. |
| AADSTS165900 | InvalidApiRequest - Invalid request. |
-| AADSTS220450 | UnsupportedAndroidWebViewVersion - The Chrome WebView version is not supported. |
+| AADSTS220450 | UnsupportedAndroidWebViewVersion - The Chrome WebView version isn't supported. |
| AADSTS220501 | InvalidCrlDownload |
-| AADSTS221000 | DeviceOnlyTokensNotSupportedByResource - The resource is not configured to accept device-only tokens. |
+| AADSTS221000 | DeviceOnlyTokensNotSupportedByResource - The resource isn't configured to accept device-only tokens. |
| AADSTS240001 | BulkAADJTokenUnauthorized - The user isn't authorized to register devices in Azure AD. |
| AADSTS240002 | RequiredClaimIsMissing - The id_token can't be used as `urn:ietf:params:oauth:grant-type:jwt-bearer` grant. |
| AADSTS530032 | BlockedByConditionalAccessOnSecurityPolicy - The tenant admin has configured a security policy that blocks this request. Check the security policies that are defined on the tenant level to determine if your request meets the policy requirements. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS1000000 | UserNotBoundError - The Bind API requires the Azure AD user to also authenticate with an external IDP, which hasn't happened yet. |
| AADSTS1000002 | BindCompleteInterruptError - The bind completed successfully, but the user must be informed. |
| AADSTS100007 | AAD Regional ONLY supports auth either for MSIs OR for requests from MSAL using SN+I for 1P apps or 3P apps in Microsoft infrastructure tenants. |
-| AADSTS1000031 | Application {appDisplayName} cannot be accessed at this time. Contact your administrator. |
+| AADSTS1000031 | Application {appDisplayName} can't be accessed at this time. Contact your administrator. |
| AADSTS7000112 | UnauthorizedClientApplicationDisabled - The application is disabled. |
-| AADSTS7000114| Application 'appIdentifier' is not allowed to make application on-behalf-of calls.|
-| AADSTS7500529 | The value ΓÇÿSAMLId-GuidΓÇÖ is not a valid SAML ID - Azure AD uses this attribute to populate the InResponseTo attribute of the returned response. ID must not begin with a number, so a common strategy is to prepend a string like "id" to the string representation of a GUID. For example, id6c1c178c166d486687be4aaf5e482730 is a valid ID. |
+| AADSTS7000114| Application 'appIdentifier' isn't allowed to make application on-behalf-of calls.|
+| AADSTS7500529 | The value 'SAMLId-Guid' isn't a valid SAML ID - Azure AD uses this attribute to populate the InResponseTo attribute of the returned response. ID must not begin with a number, so a common strategy is to prepend a string like "id" to the string representation of a GUID. For example, id6c1c178c166d486687be4aaf5e482730 is a valid ID. |
## Next steps
active-directory Refresh Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/refresh-tokens.md
Previously updated : 05/25/2021 Last updated : 06/10/2022
When a client acquires an access token to access a protected resource, the clien
Before reading through this article, it's recommended that you go through the following articles:
-* [ID tokens](id-tokens.md) in the Microsoft identity platform.
-* [Access tokens](access-tokens.md) in the Microsoft identity platform.
+- [ID tokens](id-tokens.md) in the Microsoft identity platform.
+- [Access tokens](access-tokens.md) in the Microsoft identity platform.
## Refresh token lifetime
-Refresh tokens have a longer lifetime than access tokens. The default lifetime for the refresh tokens is 24 hours for [single page apps](reference-third-party-cookies-spas.md) and 90 days for all other scenarios. Refresh tokens replace themselves with a fresh token upon every use. The Microsoft identity platform doesn't revoke old refresh tokens when used to fetch new access tokens. Securely delete the old refresh token after acquiring a new one. Refresh tokens need to be stored safely like access tokens or application credentials.
+Refresh tokens have a longer lifetime than access tokens. The default lifetime for the refresh tokens is 24 hours for [single page apps](reference-third-party-cookies-spas.md) and 90 days for all other scenarios. Refresh tokens replace themselves with a fresh token upon every use. The Microsoft identity platform doesn't revoke old refresh tokens when used to fetch new access tokens. Securely delete the old refresh token after acquiring a new one. Refresh tokens need to be stored safely like access tokens or application credentials.
->[!IMPORTANT]
-> Refresh tokens sent to a redirect URI registered as `spa` expire after 24 hours. Additional refresh tokens acquired using the initial refresh token carry over that expiration time, so apps must be prepared to rerun the authorization code flow using an interactive authentication to get a new refresh token every 24 hours. Users do not have to enter their credentials and usually don't even see any related user experience, just a reload of your application. The browser must visit the log-in page in a top-level frame to show the login session. This is due to [privacy features in browsers that block third party cookies](reference-third-party-cookies-spas.md).
+> [!IMPORTANT]
+> Refresh tokens sent to a redirect URI registered as `spa` expire after 24 hours. Additional refresh tokens acquired using the initial refresh token carry over that expiration time, so apps must be prepared to rerun the authorization code flow using an interactive authentication to get a new refresh token every 24 hours. Users don't have to enter their credentials and usually don't even see any related user experience, just a reload of your application. The browser must visit the log-in page in a top-level frame to show the login session. This is due to [privacy features in browsers that block third party cookies](reference-third-party-cookies-spas.md).
## Refresh token expiration
-Refresh tokens can be revoked at any time, because of timeouts and revocations. Your app must handle rejections by the sign-in service gracefully when this occurs. This is done by sending the user to an interactive sign-in prompt to sign in again.
+Refresh tokens can be revoked at any time, because of timeouts and revocations. Your app must handle rejections by the sign-in service gracefully when this occurs. This is done by sending the user to an interactive sign-in prompt to sign in again.
### Token timeouts

You can't configure the lifetime of a refresh token. You can't reduce or lengthen their lifetime. Configure sign-in frequency in Conditional Access to define the time periods before a user is required to sign in again. Learn more about [Configuring authentication session management with Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
-Not all refresh tokens follow the rules set in the token lifetime policy. Specifically, refresh tokens used in [single page apps](reference-third-party-cookies-spas.md) are always fixed to 24 hours of activity, as if they have a `MaxAgeSessionSingleFactor` policy of 24 hours applied to them.
+Not all refresh tokens follow the rules set in the token lifetime policy. Specifically, refresh tokens used in [single page apps](reference-third-party-cookies-spas.md) are always fixed to 24 hours of activity, as if they have a `MaxAgeSessionSingleFactor` policy of 24 hours applied to them.
### Revocation
-Refresh tokens can be revoked by the server because of a change in credentials, user action, or admin action. Refresh tokens fall into two classes: tokens issued to confidential clients (the rightmost column) and tokens issued to public clients (all other columns).
+Refresh tokens can be revoked by the server because of a change in credentials, user action, or admin action. Refresh tokens fall into two classes: tokens issued to confidential clients (the rightmost column) and tokens issued to public clients (all other columns).
-| Change | Password-based cookie | Password-based token | Non-password-based cookie | Non-password-based token | Confidential client token |
-||--|-||--||
-| Password expires | Stays alive | Stays alive | Stays alive | Stays alive | Stays alive |
-| Password changed by user | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
-| User does SSPR | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
-| Admin resets password | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
-| User revokes their refresh tokens [via PowerShell](/powershell/module/azuread/revoke-azureadsignedinuserallrefreshtoken) | Revoked | Revoked | Revoked | Revoked | Revoked |
-| Admin revokes all refresh tokens for a user [via PowerShell](/powershell/module/azuread/revoke-azureaduserallrefreshtoken) | Revoked | Revoked |Revoked | Revoked | Revoked |
-| Single sign-out [on web](v2-protocols-oidc.md#single-sign-out) | Revoked | Stays alive | Revoked | Stays alive | Stays alive |
+| Change | Password-based cookie | Password-based token | Non-password-based cookie | Non-password-based token | Confidential client token |
+| -- | -- | -- | -- | -- | -- |
+| Password expires | Stays alive | Stays alive | Stays alive | Stays alive | Stays alive |
+| Password changed by user | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
+| User does SSPR | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
+| Admin resets password | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
+| User revokes their refresh tokens [via PowerShell](/powershell/module/azuread/revoke-azureadsignedinuserallrefreshtoken) | Revoked | Revoked | Revoked | Revoked | Revoked |
+| Admin revokes all refresh tokens for a user [via PowerShell](/powershell/module/azuread/revoke-azureaduserallrefreshtoken) | Revoked | Revoked | Revoked | Revoked | Revoked |
+| Single sign-out [on web](v2-protocols-oidc.md#single-sign-out) | Revoked | Stays alive | Revoked | Stays alive | Stays alive |
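The PowerShell cmdlets linked in the revocation table can also be scripted. The following is a minimal sketch, assuming the AzureAD PowerShell module is installed and the signed-in account has the required permissions; the user identifier is a placeholder:

```powershell
# Requires the AzureAD module: Install-Module AzureAD
Connect-AzureAD

# A signed-in user revokes all of their own refresh tokens.
Revoke-AzureADSignedInUserAllRefreshToken

# An admin revokes all refresh tokens for a specific user.
# "user@contoso.com" is a placeholder UPN - resolve it to the user object first.
$user = Get-AzureADUser -ObjectId "user@contoso.com"
Revoke-AzureADUserAllRefreshToken -ObjectId $user.ObjectId
```

After revocation, the user's existing refresh tokens stop working and the next silent renewal fails, which is the rejection scenario described under the expiration section above.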
## Next steps
-* Learn about [configurable token lifetimes](active-directory-configurable-token-lifetimes.md)
-* Check out [Primary Refresh Tokens](../devices/concept-primary-refresh-token.md) for more details on primary refresh tokens.
+- Learn about [configurable token lifetimes](active-directory-configurable-token-lifetimes.md)
+- Check out [Primary Refresh Tokens](../devices/concept-primary-refresh-token.md) for more details on primary refresh tokens.
active-directory Hybrid Azuread Join Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-azuread-join-plan.md
As a first planning step, you should review your environment and determine wheth
- Hybrid Azure AD join isn't supported for Windows Server running the Domain Controller (DC) role. - Hybrid Azure AD join isn't supported on Windows down-level devices when using credential roaming or user profile roaming or mandatory profile. - Server Core OS doesn't support any type of device registration.-- User State Migration Tool (USMT) doesn't work with device registration.
+- User State Migration Tool (USMT) doesn't work with device registration.
### OS imaging considerations
As a first planning step, you should review your environment and determine wheth
If your Windows 10 or newer domain joined devices are [Azure AD registered](concept-azure-ad-register.md) to your tenant, it could lead to a dual state of hybrid Azure AD joined and Azure AD registered device. We recommend upgrading to Windows 10 1803 (with KB4489894 applied) or newer to automatically address this scenario. In pre-1803 releases, you'll need to remove the Azure AD registered state manually before enabling hybrid Azure AD join. In 1803 and above releases, the following changes have been made to avoid this dual state: - Any existing Azure AD registered state for a user would be automatically removed <i>after the device is hybrid Azure AD joined and the same user logs in</i>. For example, if User A had an Azure AD registered state on the device, the dual state for User A is cleaned up only when User A logs in to the device. If there are multiple users on the same device, the dual state is cleaned up individually when those users log in. After removing the Azure AD registered state, Windows 10 will unenroll the device from Intune or other MDM, if the enrollment happened as part of the Azure AD registration via auto-enrollment.-- Azure AD registered state on any local accounts on the device isnΓÇÖt impacted by this change. Only applicable to domain accounts. Azure AD registered state on local accounts isn't removed automatically even after user logon, since the user isn't a domain user.
+- Azure AD registered state on any local accounts on the device isn't impacted by this change. Only applicable to domain accounts. Azure AD registered state on local accounts isn't removed automatically even after user logon, since the user isn't a domain user.
- You can prevent your domain joined device from being Azure AD registered by adding the following registry value to HKLM\SOFTWARE\Policies\Microsoft\Windows\WorkplaceJoin: "BlockAADWorkplaceJoin"=dword:00000001. - In Windows 10 1803, if you have Windows Hello for Business configured, the user needs to reconfigure Windows Hello for Business after the dual state cleanup. This issue has been addressed with KB4512509.
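As a minimal sketch (run PowerShell elevated on the domain-joined device), the registry value mentioned above can be set, and the resulting registration state inspected with the built-in `dsregcmd` tool:

```powershell
# Prevent the domain-joined device from also becoming Azure AD registered (dual state).
$path = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WorkplaceJoin"
New-Item -Path $path -Force | Out-Null
New-ItemProperty -Path $path -Name "BlockAADWorkplaceJoin" -Value 1 -PropertyType DWord -Force | Out-Null

# Inspect the device registration state. AzureAdJoined plus DomainJoined set to YES
# indicates hybrid Azure AD join; WorkplaceJoined set to YES indicates an
# Azure AD registered (dual) state for the signed-in user.
dsregcmd /status
```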
If your Windows 10 or newer domain joined devices are [Azure AD registered](conc
To register devices as hybrid Azure AD join to respective tenants, organizations need to ensure that the SCP configuration is done on the devices and not in AD. More details on how to accomplish this task can be found in the article [Hybrid Azure AD join targeted deployment](hybrid-azuread-join-control.md). It's important for organizations to understand that certain Azure AD capabilities won't work in a single forest, multiple Azure AD tenants configurations. -- [Device writeback](../hybrid/how-to-connect-device-writeback.md) won't work. This configuration affects [Device based Conditional Access for on-premise apps that are federated using ADFS](/windows-server/identity/ad-fs/operations/configure-device-based-conditional-access-on-premises). This configuration also affects [Windows Hello for Business deployment when using the Hybrid Cert Trust model](/windows/security/identity-protection/hello-for-business/hello-hybrid-cert-trust).
+- [Device writeback](../hybrid/how-to-connect-device-writeback.md) won't work. This configuration affects [Device based Conditional Access for on-premises apps that are federated using ADFS](/windows-server/identity/ad-fs/operations/configure-device-based-conditional-access-on-premises). This configuration also affects [Windows Hello for Business deployment when using the Hybrid Cert Trust model](/windows/security/identity-protection/hello-for-business/hello-hybrid-cert-trust).
- [Groups writeback](../hybrid/how-to-connect-group-writeback.md) won't work. This configuration affects writeback of Office 365 Groups to a forest with Exchange installed. - [Seamless SSO](../hybrid/how-to-connect-sso.md) won't work. This configuration affects SSO scenarios that organizations may be using on cross OS or browser platforms, for example iOS or Linux with Firefox, Safari, or Chrome without the Windows 10 extension. - [Hybrid Azure AD join for Windows down-level devices in managed environment](./hybrid-azuread-join-managed-domains.md#enable-windows-down-level-devices) won't work. For example, hybrid Azure AD join on Windows Server 2012 R2 in a managed environment requires Seamless SSO and since Seamless SSO won't work, hybrid Azure AD join for such a setup won't work.
To register devices as hybrid Azure AD join to respective tenants, organizations
- If your environment uses virtual desktop infrastructure (VDI), see [Device identity and desktop virtualization](./howto-device-identity-virtual-desktop-infrastructure.md). -- Hybrid Azure AD join is supported for FIPS-compliant TPM 2.0 and not supported for TPM 1.2. If your devices have FIPS-compliant TPM 1.2, you must disable them before proceeding with hybrid Azure AD join. Microsoft doesn't provide any tools for disabling FIPS mode for TPMs as it is dependent on the TPM manufacturer. Contact your hardware OEM for support.
+- Hybrid Azure AD join is supported for FIPS-compliant TPM 2.0 and not supported for TPM 1.2. If your devices have FIPS-compliant TPM 1.2, you must disable them before proceeding with hybrid Azure AD join. Microsoft doesn't provide any tools for disabling FIPS mode for TPMs as it is dependent on the TPM manufacturer. Contact your hardware OEM for support.
- Starting from Windows 10 1903 release, TPMs 1.2 aren't used with hybrid Azure AD join and devices with those TPMs will be considered as if they don't have a TPM.
Organizations may want to do a targeted rollout of hybrid Azure AD join before e
## Select your scenario based on your identity infrastructure
-Hybrid Azure AD join works with both, managed and federated environments depending on whether the UPN is routable or non-routable. See bottom of the page for table on supported scenarios.
+Hybrid Azure AD join works with both managed and federated environments, depending on whether the UPN is routable or non-routable. See the table of supported scenarios at the bottom of the page.
### Managed environment
These scenarios don't require you to configure a federation server for authentic
A federated environment should have an identity provider that supports the following requirements. If you have a federated environment using Active Directory Federation Services (AD FS), then the below requirements are already supported. - **WIAORMULTIAUTHN claim:** This claim is required to do hybrid Azure AD join for Windows down-level devices.-- **WS-Trust protocol:** This protocol is required to authenticate Windows current hybrid Azure AD joined devices with Azure AD.
-When you're using AD FS, you need to enable the following WS-Trust endpoints:
- `/adfs/services/trust/2005/windowstransport`
- `/adfs/services/trust/13/windowstransport`
- `/adfs/services/trust/2005/usernamemixed`
+- **WS-Trust protocol:** This protocol is required to authenticate Windows current hybrid Azure AD joined devices with Azure AD.
+When you're using AD FS, you need to enable the following WS-Trust endpoints:
+ `/adfs/services/trust/2005/windowstransport`
+ `/adfs/services/trust/13/windowstransport`
+ `/adfs/services/trust/2005/usernamemixed`
`/adfs/services/trust/13/usernamemixed`
- `/adfs/services/trust/2005/certificatemixed`
- `/adfs/services/trust/13/certificatemixed`
+ `/adfs/services/trust/2005/certificatemixed`
+ `/adfs/services/trust/13/certificatemixed`
-> [!WARNING]
+> [!WARNING]
> Both **adfs/services/trust/2005/windowstransport** or **adfs/services/trust/13/windowstransport** should be enabled as intranet facing endpoints only and must NOT be exposed as extranet facing endpoints through the Web Application Proxy. To learn more on how to disable WS-Trust Windows endpoints, see [Disable WS-Trust Windows endpoints on the proxy](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#disable-ws-trust-windows-endpoints-on-the-proxy-ie-from-extranet). You can see what endpoints are enabled through the AD FS management console under **Service** > **Endpoints**.
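The same endpoints can be reviewed from PowerShell on the primary AD FS server. This is a sketch, assuming the ADFS module that ships with the AD FS role; after changing endpoint settings, the AD FS service typically needs a restart:

```powershell
# List the WS-Trust endpoints and whether each is enabled and published to the proxy.
Get-AdfsEndpoint | Where-Object { $_.AddressPath -like "/adfs/services/trust/*" } |
    Select-Object AddressPath, Enabled, Proxy

# Stop publishing the Windows transport endpoints through the Web Application Proxy,
# in line with the warning above (they remain available on the intranet).
Set-AdfsEndpoint -TargetAddressPath "/adfs/services/trust/2005/windowstransport" -Proxy $false
Set-AdfsEndpoint -TargetAddressPath "/adfs/services/trust/13/windowstransport" -Proxy $false
Restart-Service adfssrv
```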
-Beginning with version 1.1.819.0, Azure AD Connect provides you with a wizard to configure hybrid Azure AD join. The wizard enables you to significantly simplify the configuration process. If installing the required version of Azure AD Connect isn't an option for you, see [how to manually configure device registration](hybrid-azuread-join-manual.md).
+Beginning with version 1.1.819.0, Azure AD Connect provides you with a wizard to configure hybrid Azure AD join. The wizard enables you to significantly simplify the configuration process. If installing the required version of Azure AD Connect isn't an option for you, see [how to manually configure device registration](hybrid-azuread-join-manual.md).
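If you do configure device registration manually instead of through the wizard, the Service Connection Point (SCP) is typically created with the `AdSyncPrep` helper module that ships with Azure AD Connect. The following is only a sketch under those assumptions; the module path and connector account are example values, so verify them against the manual configuration article linked above:

```powershell
# Module installed with Azure AD Connect (the path may differ in your environment).
Import-Module "C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep\AdSyncPrep.psm1"

# Credentials of a global administrator in the Azure AD tenant.
$aadCreds = Get-Credential

# Create the SCP in the on-premises forest. The connector account name is an example.
Initialize-ADSyncDomainJoinedComputerSync -AdConnectorAccount "CONTOSO\aadconnectsvc" -AzureADCredentials $aadCreds
```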
## Review on-premises AD users UPN support for hybrid Azure AD join
active-directory Access Reviews External Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/access-reviews-external-users.md
In addition to the option of removing unwanted external identities from resource
![upon completion settings](media/access-reviews-external-users/upon-completion-settings.png)
-When creating a new Access Review, in the ΓÇ£Upon completion settingsΓÇ¥ section, for **Action to apply on denied users** you can define **Block users from signing-in for 30 days, then remove user from the tenant**.
+When creating a new Access Review, choose the **Select Teams + groups** option and limit the scope to **Guest users only**. In the "Upon completion settings" section, for **Action to apply on denied users** you can define **Block users from signing-in for 30 days, then remove user from the tenant**.
This setting allows you to identify, block, and delete external identities from your Azure AD tenant. External identities who are reviewed and denied continued access by the reviewer will be blocked and deleted, irrespective of the resource access or group membership they have. This setting is best used as a last step after you have validated that the external users in review no longer carry resource access and can safely be removed from your tenant, or if you want to make sure they are removed irrespective of their standing access. The "Disable and delete" feature blocks the external user first, taking away their ability to sign in to your tenant and access resources. Resource access is not revoked in this stage, and if you want to reinstate the external user, their ability to log on can be reconfigured. Upon no further action, a blocked external identity will be deleted from the directory after 30 days, removing the account as well as their access.
active-directory Entitlement Management Logic Apps Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logic-apps-integration.md
These triggers to Logic Apps are controlled in a new tab within access package p
1. In the left menu, select **Catalogs**.
-1. In the left menu, select **Custom Extensions (Preview)**.
+1. Select the catalog for which you want to add a custom extension and then in the left menu, select **Custom Extensions (Preview)**.
1. In the header navigation bar, select **Add a Custom Extension**.
active-directory How To Connect Emergency Ad Fs Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-emergency-ad-fs-certificate-rotation.md
In order to revoke the old Token Signing Certificate which AD FS is currently us
1. Connect to the Microsoft Online Service `PS C:\>Connect-MsolService`
- 2. Document both your on-premise and cloud Token Signing Certificate thumbprint and expiration dates.
-`PS C:\>Get-MsolFederationProperty -DomainName <domain>`
+ 2. Document both your on-premises and cloud Token Signing Certificate thumbprint and expiration dates.
+`PS C:\>Get-MsolFederationProperty -DomainName <domain>`
3. Copy down the thumbprint. It will be used later to remove the existing certificates.
-You can also get the thumbprint by using AD FS Management, navigating to Service/Certificates, right-clicking on the certificate, select View certificate and then selecting Details.
+You can also get the thumbprint by using AD FS Management: navigate to **Service** > **Certificates**, right-click the certificate, select **View certificate**, and then select **Details**.
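If you prefer PowerShell to the AD FS console for this step, the on-premises thumbprints can be read directly on the primary AD FS server; this is a small sketch using the built-in ADFS module:

```powershell
# List the current token signing certificates with their thumbprints and expiration dates.
Get-AdfsCertificate -CertificateType Token-Signing |
    Select-Object IsPrimary, Thumbprint, @{ Name = 'NotAfter'; Expression = { $_.Certificate.NotAfter } }
```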
## Determine whether AD FS renews the certificates automatically

By default, AD FS is configured to generate token signing and token decryption certificates automatically, both at the initial configuration time and when the certificates are approaching their expiration date.
The AutoCertificateRollover property describes whether AD FS is configured to re
## Generating new self-signed certificate if AutoCertificateRollover is set to TRUE
-In this section, you will be creating **two** token-signing certificates. The first will use the **-urgent** flag, which will replace the current primary certificate immediately. The second will be used for the secondary certificate.
+In this section, you will be creating **two** token-signing certificates. The first will use the **-urgent** flag, which will replace the current primary certificate immediately. The second will be used for the secondary certificate.
>[!IMPORTANT]
>The reason we are creating two certificates is because Azure holds on to information regarding the previous certificate. By creating a second one, we are forcing Azure to release information about the old certificate and replace it with information about the second certificate.
In this section, you will be creating **two** token-signing certificates. The f
You can use the following steps to generate the new token-signing certificates. 1. Ensure that you are logged on to the primary AD FS server.
- 2. Open Windows PowerShell as an administrator.
+ 2. Open Windows PowerShell as an administrator.
 3. Check to make sure that your AutoCertificateRollover is set to True. `PS C:\>Get-AdfsProperties | FL AutoCert*, Certificate*`
 4. To generate a new token signing certificate: `Update-ADFSCertificate -CertificateType token-signing -Urgent`.
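Putting steps 3 and 4 together with the second certificate that this section says you need, the whole sequence looks roughly like the sketch below (the call without `-Urgent` generates the new secondary certificate):

```powershell
# 1. Confirm that automatic certificate rollover is enabled.
Get-AdfsProperties | Format-List AutoCert*, Certificate*

# 2. Immediately replace the primary token signing certificate.
Update-AdfsCertificate -CertificateType token-signing -Urgent

# 3. Generate a second certificate; it becomes the new secondary token signing certificate.
Update-AdfsCertificate -CertificateType token-signing
```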
Now that the new certificate has been imported and configured in AD FS, you need
2. Expand **Service** and then select **Certificates**. 3. Click the secondary token signing certificate. 4. In the **Actions** pane, click **Set As Primary**. Click Yes at the confirmation prompt.
-5. Once you promoted the new certificate as the primary certificate, you should remove the old certificate because it can still be used. See the [Remove your old certificates](#remove-your-old-certificates) section below.
+5. Once you promoted the new certificate as the primary certificate, you should remove the old certificate because it can still be used. See the [Remove your old certificates](#remove-your-old-certificates) section below.
### To configure the second certificate as a secondary certificate

Now that you have added the first certificate, made it primary, and removed the old one, import the second certificate. Then you must configure the certificate as the secondary AD FS token signing certificate.
To update the certificate information in Azure AD, run the following command: `U
> If you see an error when running this command, run the following command: `Update-MsolFederatedDomain -SupportMultipleDomain`, and then enter the domain name when prompted.
## Replace SSL certificates
-In the event that you need to replace your token-signing certificate because of a compromise, you should also revoke and replace the SSL certificates for AD FS and your WAP servers.
+In the event that you need to replace your token-signing certificate because of a compromise, you should also revoke and replace the SSL certificates for AD FS and your WAP servers.
Revoking your SSL certificates must be done at the certificate authority (CA) that issued the certificate. These certificates are often issued by 3rd party providers such as GoDaddy. For an example, see (Revoke a certificate | SSL Certificates - GoDaddy Help US). For more information, see [How Certificate Revocation Works](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ee619754(v=ws.10)).
Once the old SSL certificate has been revoked and a new one issued, you can repl
Once you have replaced your old certificates, you should remove the old certificate because it can still be used. To do this, follow the steps below: 1. Ensure that you are logged on to the primary AD FS server.
-2. Open Windows PowerShell as an administrator.
+2. Open Windows PowerShell as an administrator.
4. To remove the old token signing certificate: `Remove-ADFSCertificate -CertificateType token-signing -thumbprint <thumbprint>`.
## Updating federation partners who can consume Federation Metadata
active-directory Secure Hybrid Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/secure-hybrid-access.md
Title: Secure hybrid access
-description: This article describes partner solutions for integrating your legacy on-premises, public cloud, or private cloud applications with Azure AD.
+description: This article describes partner solutions for integrating your legacy on-premises, public cloud, or private cloud applications with Azure AD.
You can now protect your on-premises and cloud legacy authentication application
- [Azure AD Application Proxy](#secure-hybrid-access-through-azure-ad-application-proxy) -- [Secure hybrid access partners](#secure-hybrid-access-through-azure-ad-partner-integrations)
+- [Secure hybrid access: Secure legacy apps with Azure Active Directory](#secure-hybrid-access-secure-legacy-apps-with-azure-active-directory)
+ - [Secure hybrid access through Azure AD Application Proxy](#secure-hybrid-access-through-azure-ad-application-proxy)
+ - [Secure hybrid access through Azure AD partner integrations](#secure-hybrid-access-through-azure-ad-partner-integrations)
You can bridge the gap and strengthen your security posture across all applications with Azure AD capabilities like [Azure AD Conditional Access](../conditional-access/overview.md) and [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md). By having Azure AD as an Identity provider (IDP), you can use modern authentication and authorization methods like [single sign-on (SSO)](what-is-single-sign-on.md) and [multifactor authentication (MFA)](../authentication/concept-mfa-howitworks.md) to secure your on-premises legacy applications. ## Secure hybrid access through Azure AD Application Proxy
-
-Using [Application Proxy](../app-proxy/what-is-application-proxy.md) you can provide [secure remote access](../app-proxy/application-proxy-add-on-premises-application.md) to your on-premises web applications. Your users donΓÇÖt need to use a VPN. Users benefit by easily connecting to their applications from any device after a [SSO](../app-proxy/application-proxy-config-sso-how-to.md#how-to-configure-single-sign-on). Application Proxy provides remote access as a service and allows you to [easily publish your on-premise applications](../app-proxy/application-proxy-add-on-premises-application.md) to users outside the corporate network. It helps you scale your cloud access management without requiring you to modify your on-premises applications. [Plan an Azure AD Application Proxy](../app-proxy/application-proxy-deployment-plan.md) deployment as a next step.
-## Secure hybrid access through Azure AD partner integrations
+Using [Application Proxy](../app-proxy/what-is-application-proxy.md) you can provide [secure remote access](../app-proxy/application-proxy-add-on-premises-application.md) to your on-premises web applications. Your users don't need to use a VPN. Users benefit by easily connecting to their applications from any device after [SSO](../app-proxy/application-proxy-config-sso-how-to.md#how-to-configure-single-sign-on). Application Proxy provides remote access as a service and allows you to [easily publish your applications](../app-proxy/application-proxy-add-on-premises-application.md) to users outside the corporate network. It helps you scale your cloud access management without requiring you to modify your on-premises applications. [Plan an Azure AD Application Proxy](../app-proxy/application-proxy-deployment-plan.md) deployment as a next step.
+
+## Secure hybrid access through Azure AD partner integrations
In addition to [Azure AD Application Proxy](../app-proxy/what-is-application-proxy.md), Microsoft partners with third-party providers to enable secure access to your on-premises applications and applications that use legacy authentication. ![Illustration of Secure Hybrid Access partner integrations and Application Proxy providing access to legacy and on-premises applications after authentication with Azure AD.](./media/secure-hybrid-access/secure-hybrid-access.png)
-The following partners offer pre-built solutions to support **conditional access policies per application** and provide detailed guidance for integrating with Azure AD.
+The following partners offer pre-built solutions to support **conditional access policies per application** and provide detailed guidance for integrating with Azure AD.
- [Akamai Enterprise Application Access](../saas-apps/akamai-tutorial.md) -- [Citrix Application Delivery Controller (ADC)](../saas-apps/citrix-netscaler-tutorial.md)
+- [Citrix Application Delivery Controller (ADC)](../saas-apps/citrix-netscaler-tutorial.md)
- [Datawiza Access Broker](../manage-apps/datawiza-with-azure-ad.md)
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview.md
A common challenge for developers is the management of secrets, credentials, certificates, and keys used to secure communication between services. Managed identities eliminate the need for developers to manage these credentials.
-While developers can securely store the secrets in [Azure Key Vault](../../key-vault/general/overview.md), services need a way to access Azure Key Vault. Managed identities provide an automatically managed identity in Azure Active Directory for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. Applications can use managed identities to obtain Azure AD tokens without having manage any credentials.
+While developers can securely store the secrets in [Azure Key Vault](../../key-vault/general/overview.md), services need a way to access Azure Key Vault. Managed identities provide an automatically managed identity in Azure Active Directory for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. Applications can use managed identities to obtain Azure AD tokens without having to manage any credentials.
The following video shows how you can use managed identities:<br/>
To use managed identities, you should do the following:
1. Create a managed identity in Azure. You can choose between a system-assigned managed identity and a user-assigned managed identity.
2. In the case of a user-assigned managed identity, assign the managed identity to the "source" Azure resource, such as an Azure Logic App or an Azure Web App.
3. Authorize the managed identity to have access to the "target" service.
-4. Use the managed identity to perform access. For this, you can use the Azure SDK with the Azure.Identity library. Some "source" resources offer connectors that know how to use Managed identities for the connections. In that case you simply use the ideantity as a feature of that "source" resource.
+4. Use the managed identity to perform access. For this, you can use the Azure SDK with the Azure.Identity library. Some "source" resources offer connectors that know how to use Managed identities for the connections. In that case you simply use the identity as a feature of that "source" resource.
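As an illustration of step 4, the sketch below assumes a resource (for example, a VM) with a system-assigned managed identity that has been granted read access to a key vault secret; the vault and secret names are placeholders, and the Az PowerShell modules are assumed to be installed:

```powershell
# Sign in as the resource's managed identity - no stored credentials are involved.
Connect-AzAccount -Identity

# Read a secret from Key Vault; authorization comes from the managed identity's
# access policy or RBAC role assignment on the vault.
Get-AzKeyVaultSecret -VaultName "contoso-vault" -Name "sql-connection-string" -AsPlainText
```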
## What Azure services support the feature?<a name="which-azure-services-support-managed-identity"></a>
active-directory Cisco Umbrella User Management Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-umbrella-user-management-provisioning-tutorial.md
# Tutorial: Configure Cisco Umbrella User Management for automatic user provisioning
-This tutorial describes the steps you need to perform in both Cisco Umbrella User Management and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Cisco Umbrella User Management](https://umbrella.cisco.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both Cisco Umbrella User Management and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Cisco Umbrella User Management](https://umbrella.cisco.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
This tutorial describes the steps you need to perform in both Cisco Umbrella Use
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
-* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (e.g. Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (e.g. Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
* A [Cisco Umbrella subscription](https://signup.umbrella.com).
* A user account in Cisco Umbrella with full admin permissions.
## Step 1. Plan your provisioning deployment
1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-1. Determine what data to [map between Azure AD and Cisco Umbrella User Management](../app-provisioning/customize-application-attributes.md).
+1. Determine what data to [map between Azure AD and Cisco Umbrella User Management](../app-provisioning/customize-application-attributes.md).
## Step 2. Import ObjectGUID attribute via Azure AD Connect (Optional)

If you have previously provisioned user identities from on-premises AD to Cisco Umbrella and would now like to provision the same users from Azure AD, you will need to synchronize the ObjectGUID attribute so that previously provisioned identities persist in the Umbrella reporting. You will need to reconfigure any Umbrella policy on groups after importing groups from Azure AD.
> [!NOTE]
> The on-premises Umbrella AD Connector should be turned off before importing the ObjectGUID attribute.
-
-When using Microsoft Azure AD Connect, the ObjectGUID attribute of users is not synchronized from on-premise AD to Azure AD by default. To synchronize this attribute, enable the optional **Directory Extension attribute sync** and select the objectGUID attributes for users.
+
+When using Microsoft Azure AD Connect, the ObjectGUID attribute of users is not synchronized from on-premises AD to Azure AD by default. To synchronize this attribute, enable the optional **Directory Extension attribute sync** and select the objectGUID attributes for users.
![Azure Active Directory Connect wizard Optional features page](./media/cisco-umbrella-user-management-provisioning-tutorial/active-directory-connect-directory-extension-attribute-sync.png)
When using Microsoft Azure AD Connect, the ObjectGUID attribute of users is not
1. Log in to [Cisco Umbrella dashboard](https://login.umbrella.com ). Navigate to **Deployments** > **Core Identities** > **Users and Groups**.
-
+ 1. Expand the Azure Active Directory card and click on the **API Keys page**. ![Api](./media/cisco-umbrella-user-management-provisioning-tutorial/keys.png)
When using Microsoft Azure AD Connect, the ObjectGUID attribute of users is not
![Generate](./media/cisco-umbrella-user-management-provisioning-tutorial/token.png)
-1. The generated token will be displayed only once. Copy and save the URL and the token. These values will be entered in the **Tenant URL** and **Secret Token** fields respectively in the Provisioning tab of your Cisco Umbrella User Management application in the Azure portal.
+1. The generated token will be displayed only once. Copy and save the URL and the token. These values will be entered in the **Tenant URL** and **Secret Token** fields respectively in the Provisioning tab of your Cisco Umbrella User Management application in the Azure portal.
## Step 4. Add Cisco Umbrella User Management from the Azure AD application gallery
-Add Cisco Umbrella User Management from the Azure AD application gallery to start managing provisioning to Cisco Umbrella User Management. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+Add Cisco Umbrella User Management from the Azure AD application gallery to start managing provisioning to Cisco Umbrella User Management. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
-## Step 5. Define who will be in scope for provisioning
+## Step 5. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). * If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
-## Step 6. Configure automatic user provisioning to Cisco Umbrella User Management
+## Step 6. Configure automatic user provisioning to Cisco Umbrella User Management
This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Cisco Umbrella User Management based on user and/or group assignments in Azure AD.
This section guides you through the steps to configure the Azure AD provisioning
![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
-This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
## Step 7. Monitor your deployment

Once you've configured provisioning, use the following resources to monitor your deployment:
* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
-* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Connector Limitations
-* Cisco Umbrella User Management supports provisioning a maximum of 200 groups. Any groups beyond this number that are in scope may not be provisioned to Cisco Umbrella.
+* Cisco Umbrella User Management supports provisioning a maximum of 200 groups. Any groups beyond this number that are in scope may not be provisioned to Cisco Umbrella.
## Additional resources
active-directory Fortigate Ssl Vpn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortigate-ssl-vpn-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal:
`https://<FortiGate IP or FQDN address>:<Custom SSL VPN port>/remote/saml/login`. c. In the **Sign on URL** box, enter a URL in the pattern
- `https://<FortiGate IP or FQDN address>:<Custom SSL VPN port>/remote/login`.
+ `https://<FortiGate IP or FQDN address>:<Custom SSL VPN port>/remote/saml/login`.
d. In the **Logout URL** box, enter a URL in the pattern `https://<FortiGate IP or FQDN address>:<Custom SSL VPN port>/remote/saml/logout`.
To complete these steps, you'll need the values you recorded earlier:
set single-sign-on-url <Reply URL> set single-logout-url <Logout URL> set idp-entity-id <Azure AD Identifier>
+ set idp-single-sign-on-url <Azure Login URL>
set idp-single-logout-url <Azure Logout URL> set idp-cert <Base64 SAML Certificate Name> set user-name username
active-directory Qliksense Enterprise Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/qliksense-enterprise-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Qlik Sense Enterprise | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Qlik Sense Enterprise.
+ Title: 'Tutorial: Azure AD SSO integration with Qlik Sense Enterprise Client-Managed'
+description: Learn how to configure single sign-on between Azure Active Directory and Qlik Sense Enterprise Client-Managed.
Previously updated : 12/28/2020 Last updated : 06/13/2022
-# Tutorial: Integrate Qlik Sense Enterprise with Azure Active Directory
+# Tutorial: Azure AD SSO integration with Qlik Sense Enterprise Client-Managed
-In this tutorial, you'll learn how to integrate Qlik Sense Enterprise with Azure Active Directory (Azure AD). When you integrate Qlik Sense Enterprise with Azure AD, you can:
+In this tutorial, you'll learn how to integrate Qlik Sense Enterprise Client-Managed with Azure Active Directory (Azure AD). When you integrate Qlik Sense Enterprise Client-Managed with Azure AD, you can:
* Control in Azure AD who has access to Qlik Sense Enterprise. * Enable your users to be automatically signed-in to Qlik Sense Enterprise with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
+Note that there are two versions of Qlik Sense Enterprise. While this tutorial covers integration with the client-managed releases, a different process is required for Qlik Sense Enterprise SaaS (Qlik Cloud version).
## Prerequisites To get started, you need the following items:
-* An Azure AD subscription. If you don't have a subscription, you can get one-month free trial [here](https://azure.microsoft.com/pricing/free-trial/).
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* Qlik Sense Enterprise single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
* Qlik Sense Enterprise supports **SP** initiated SSO. * Qlik Sense Enterprise supports **just-in-time provisioning**
-## Adding Qlik Sense Enterprise from the gallery
+## Add Qlik Sense Enterprise from the gallery
To configure the integration of Qlik Sense Enterprise into Azure AD, you need to add Qlik Sense Enterprise from the gallery to your list of managed SaaS apps.
To configure and test Azure AD SSO with Qlik Sense Enterprise, perform the follo
1. **[Create Qlik Sense Enterprise test user](#create-qlik-sense-enterprise-test-user)** - to have a counterpart of Britta Simon in Qlik Sense Enterprise that is linked to the Azure AD representation of user. 1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-### Configure Azure AD SSO
+## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a Single sign-on method** page, select **SAML**. 1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** page, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Sign-on URL** textbox, type a URL using the following pattern: `https://<Fully Qualified Domain Name>:443{/virtualproxyprefix}/hub`
-
- b. In the **Identifier** textbox, type a URL using one of the following pattern:
+ a. In the **Identifier** textbox, type a URL using one of the following patterns:
| Identifier |
|-|
| `https://<Fully Qualified Domain Name>.qlikpoc.com` |
| `https://<Fully Qualified Domain Name>.qliksense.com` |
- |
-
- c. In the **Reply URL** textbox, type a URL using the following pattern:
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
`https://<Fully Qualified Domain Name>:443{/virtualproxyprefix}/samlauthn/`
+ c. In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://<Fully Qualified Domain Name>:443{/virtualproxyprefix}/hub`
+ > [!NOTE]
- > These values are not real. Update these values with the actual Sign-On URL, Identifier, and Reply URL, Which are explained later in this tutorial or contact [Qlik Sense Enterprise Client support team](https://www.qlik.com/us/services/support) to get these values. The default port for the URLs is 443 but you can customize it per your Organization need.
+ > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign on URL, which are explained later in this tutorial, or contact the [Qlik Sense Enterprise Client support team](https://www.qlik.com/us/services/support) to get these values. The default port for the URLs is 443, but you can customize it per your organization's needs.
1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
### Create an Azure AD test user
In this section, you'll enable Britta Simon to use Azure single sign-on by grant
Qlik Sense Enterprise supports **just-in-time provisioning**. Users are automatically added to the 'USERS' repository of Qlik Sense Enterprise as they use the SSO feature. In addition, clients can use the QMC to create a UDC (User Directory Connector) for pre-populating users in Qlik Sense Enterprise from their LDAP of choice, such as Active Directory, among others.
-### Test SSO
+## Test SSO
In this section, you test your Azure AD single sign-on configuration with following options.
active-directory Slack Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/slack-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Slack | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Slack'
description: Learn how to configure single sign-on between Azure Active Directory and Slack.
Previously updated : 12/28/2020 Last updated : 06/06/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Slack
+# Tutorial: Azure AD SSO integration with Slack
In this tutorial, you'll learn how to integrate Slack with Azure Active Directory (Azure AD). When you integrate Slack with Azure AD, you can:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Slack supports **SP** initiated SSO
-* Slack supports **Just In Time** user provisioning
-* Slack supports [**Automated** user provisioning](./slack-provisioning-tutorial.md)
+* Slack supports **SP** initiated SSO.
+* Slack supports **Just In Time** user provisioning.
+* Slack supports [**Automated** user provisioning](./slack-provisioning-tutorial.md).
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
3. If you want to setup Slack manually, in a different web browser window, sign in to your Slack company site as an administrator.
-2. Navigate to **Microsoft Azure AD** then go to **Team Settings**.
+2. Click your workspace name in the top left, then go to **Settings & administration** -> **Workspace settings**.
- ![Configure single sign-on On Microsoft Azure AD](./media/slack-tutorial/tutorial-slack-team-settings.png)
+ ![Screenshot of Configure single sign-on On Microsoft Azure AD.](./media/slack-tutorial/tutorial-slack-team-settings.png)
-3. In the **Team Settings** section, click the **Authentication** tab, and then click **Change Settings**.
+3. In the **Settings & permissions** section, click the **Authentication** tab, and then click the **Configure** button next to the SAML authentication method.
- ![Configure single sign-on On Team Settings](./media/slack-tutorial/tutorial-slack-authentication.png)
+ ![Screenshot of Configure single sign-on On Team Settings.](./media/slack-tutorial/tutorial-slack-authentication.png)
-4. On the **SAML Authentication Settings** dialog, perform the following steps:
+4. On the **Configure SAML authentication for Azure** dialog, perform the following steps:
- ![Configure single sign-on On SAML Authentication Settings](./media/slack-tutorial/tutorial-slack-save-authentication.png)
+ ![Screenshot of Configure single sign-on On SAML Authentication Settings.](./media/slack-tutorial/tutorial-slack-save-authentication.png)
- a. In the **SAML 2.0 Endpoint (HTTP)** textbox, paste the value of **Login URL**, which you have copied from Azure portal.
+ a. In the top right, toggle **Test** mode on.
+
+ b. In the **SAML SSO URL** textbox, paste the value of **Login URL**, which you have copied from Azure portal.
+
+ c. In the **Identity provider issuer** textbox, paste the value of **Azure AD Identifier**, which you have copied from the Azure portal.
+
+ d. Open your downloaded certificate file in Notepad, copy the content of it into your clipboard, and then paste it to the **Public Certificate** textbox.
+
+1. Expand **Advanced options** and perform the following steps:
+
+ ![Screenshot of Configure Advanced options single sign-on On App Side.](./media/slack-tutorial/advanced-settings.png)
- b. In the **Identity Provider Issuer** textbox, paste the value of **Azure Ad Identifier**, which you have copied from Azure portal.
+ a. If you need an end-to-end encryption key, tick the box **Sign AuthnRequest** to show the certificate.
- c. Open your downloaded certificate file in Notepad, copy the content of it into your clipboard, and then paste it to the **Public Certificate** textbox.
+ b. Enter `https://slack.com` in the **Service provider issuer** textbox.
- d. Configure the above three settings as appropriate for your Slack team. For more information about the settings, please find the **Slack's SSO configuration guide** here. `https://get.slack.help/hc/articles/220403548-Guide-to-single-sign-on-with-Slack%60`
+ c. Choose one of the two options for how the SAML response from your IdP is signed.
- ![Configure single sign-on On App Side](./media/slack-tutorial/tutorial-slack-expand.png)
+1. Under **Settings**, decide if members can edit their profile information (like their email or display name) after SSO is enabled. You can also choose whether SSO is required, partially required or optional.
- e. Click on **expand** and enter `https://slack.com` in the **Service provider issuer** textbox.
+ ![Screenshot of Configure Save configuration single sign-on On App Side.](./media/slack-tutorial/save-configuration-button.png)
- f. Click **Save Configuration**.
+1. Click **Save Configuration**.
> [!NOTE] > If you have more than one Slack instance that you need to integrate with Azure AD, set **Service provider issuer** to `https://<DOMAIN NAME>.slack.com` so that it can pair with the Azure application **Identifier** setting.
advisor Advisor Reference Reliability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md
Learn more about [Application gateway - AppGwLog4JCVEGenericNotification (Additi
### Enable Active-Active gateways for redundancy
-In active-active configuration, both instances of the VPN gateway will establish S2S VPN tunnels to your on-premise VPN device. When a planned maintenance or unplanned event happens to one gateway instance, traffic will be switched over to the other active IPsec tunnel automatically.
+In active-active configuration, both instances of the VPN gateway will establish S2S VPN tunnels to your on-premises VPN device. When a planned maintenance or unplanned event happens to one gateway instance, traffic will be switched over to the other active IPsec tunnel automatically.
Learn more about [Virtual network gateway - VNetGatewayActiveActive (Enable Active-Active gateways for redundancy)](../vpn-gateway/vpn-gateway-highlyavailable.md).
aks Internal Lb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md
For more information on configuring your load balancer in a different subnet, se
You must have the following resource installed: * The Azure CLI
-* The `aks-preview` extension version 0.5.50 or later
* Kubernetes version 1.22.x or above
-#### Install the aks-preview CLI extension
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
- ### Create a Private Link service connection To attach an Azure Private Link service to an internal load balancer, create a service manifest named `internal-lb-pls.yaml` with the service type *LoadBalancer* and the *azure-load-balancer-internal* and *azure-pls-create* annotations, as shown in the example below. For more options, refer to the [Azure Private Link Service Integration](https://kubernetes-sigs.github.io/cloud-provider-azure/development/design-docs/pls-integration/) design document.
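The manifest itself is unchanged context and isn't reproduced in this digest. As a rough sketch only: the two annotations named above are the full `service.beta.kubernetes.io/...` annotation keys, while the service name, port, and selector below are illustrative assumptions.

```bash
# Illustrative sketch of internal-lb-pls.yaml: an internal load balancer Service that also creates a Private Link service
cat <<'EOF' > internal-lb-pls.yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-pls-create: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: internal-app
EOF

# Apply the manifest to the cluster
kubectl apply -f internal-lb-pls.yaml
```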
Learn more about Kubernetes services at the [Kubernetes services documentation][
[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md [install-azure-cli]: /cli/azure/install-azure-cli [aks-sp]: kubernetes-service-principal.md#delegate-access-to-other-azure-resources
-[different-subnet]: #specify-a-different-subnet
+[different-subnet]: #specify-a-different-subnet
aks Keda About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-about.md
Learn more about how KEDA works in the [official KEDA documentation][keda-archit
## Installation and version
-KEDA can be added to your Azure Kubernetes Service (AKS) cluster by enabling the KEDA add-on using an [ARM template][keda-arm].
+KEDA can be added to your Azure Kubernetes Service (AKS) cluster by enabling the KEDA add-on using an [ARM template][keda-arm] or [Azure CLI][keda-cli].
The KEDA add-on provides a fully supported installation of KEDA that is integrated with AKS.
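As a quick, hedged illustration of the CLI route (the preview prerequisites and exact steps are in the linked article), enabling the add-on on an existing cluster is generally a single update call; the resource group and cluster names below are placeholders.

```azurecli-interactive
# Enable the KEDA add-on on an existing AKS cluster (assumes the AKS-KedaPreview feature flag and aks-preview extension are in place)
az aks update --resource-group myResourceGroup --name myAKSCluster --enable-keda
```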
For general KEDA questions, we recommend [visiting the FAQ overview][keda-faq].
## Next steps * [Enable the KEDA add-on with an ARM template][keda-arm]
+* [Enable the KEDA add-on with the Azure CLI][keda-cli]
+* [Troubleshoot KEDA add-on problems][keda-troubleshoot]
* [Autoscale a .NET Core worker processing Azure Service Bus Queue messages][keda-sample] <!-- LINKS - internal --> [keda-azure-cli]: keda-deploy-addon-az-cli.md
+[keda-cli]: keda-deploy-add-on-cli.md
[keda-arm]: keda-deploy-add-on-arm.md
+[keda-troubleshoot]: keda-troubleshoot.md
<!-- LINKS - external --> [keda]: https://keda.sh/
aks Keda Deploy Add On Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-arm.md
This article shows you how to deploy the Kubernetes Event-driven Autoscaling (KE
- An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). - [Azure CLI installed](/cli/azure/install-azure-cli).
+- Firewall rules are configured to allow access to the Kubernetes API server. ([learn more][aks-firewall-requirements])
### Register the `AKS-KedaPreview` feature flag
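The registration commands themselves are unchanged context and not shown in this digest; the usual `az feature` pattern looks like the following sketch (propagation can take several minutes).

```azurecli-interactive
# Register the preview feature flag for the KEDA add-on
az feature register --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview"

# Check the registration state until it shows "Registered"
az feature show --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview" --query properties.state

# Refresh the resource provider registration so the new flag takes effect
az provider register --namespace Microsoft.ContainerService
```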
To remove the resource group, and all related resources, use the [Az PowerShell
az group delete --name MyResourceGroup ```
-### Enabling add-on on clusters with self-managed open-source KEDA installations
-
-While Kubernetes only allows one metric server to be installed, you can in theory install KEDA multiple times. However, it isn't recommended given only one installation will work.
-
-When the KEDA add-on is installed in an AKS cluster, the previous installation of open-source KEDA will be overridden and the add-on will take over.
-
-This means that the customization and configuration of the self-installed KEDA deployment will get lost and no longer be applied.
-
-While there's a possibility that the existing autoscaling will keep on working, there's a risk given it will be configured differently and won't support features such as managed identity.
-
-It's recommended to uninstall existing KEDA installations before enabling the KEDA add-on given the installation will succeed without any error.
-
-Following error will be thrown in the operator logs but the installation of KEDA add-on will be completed.
-
-Error logged in now-suppressed non-participating KEDA operator pod:
-the error logged inside the already installed KEDA operator logs.
-E0520 11:51:24.868081 1 leaderelection.go:330] error retrieving resource lock default/operator.keda.sh: config maps "operator.keda.sh" is forbidden: User "system:serviceaccount:default:keda-operator" can't get resource "config maps" in API group "" in the namespace "default"
- ## Next steps
-This article showed you how to install the KEDA add-on on an AKS cluster, and then verify that it's installed and running. With the KEDA add-on installed on your cluster, you can [deploy a sample application][keda-sample] to start scaling apps
+This article showed you how to install the KEDA add-on on an AKS cluster, and then verify that it's installed and running. With the KEDA add-on installed on your cluster, you can [deploy a sample application][keda-sample] to start scaling apps.
+
+You can troubleshoot KEDA add-on problems in [this article][keda-troubleshoot].
<!-- LINKS - internal --> [az-aks-create]: /cli/azure/aks#az-aks-create
This article showed you how to install the KEDA add-on on an AKS cluster, and th
[az aks get-credentials]: /cli/azure/aks#az-aks-get-credentials [az aks update]: /cli/azure/aks#az-aks-update [az-group-delete]: /cli/azure/group#az-group-delete
+[keda-troubleshoot]: keda-troubleshoot.md
+[aks-firewall-requirements]: limit-egress-traffic.md#azure-global-required-network-rules
<!-- LINKS - external --> [kubectl]: https://kubernetes.io/docs/user-guide/kubectl
aks Keda Deploy Add On Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-cli.md
This article shows you how to install the Kubernetes Event-driven Autoscaling (K
- An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). - [Azure CLI installed](/cli/azure/install-azure-cli).
+- Firewall rules are configured to allow access to the Kubernetes API server. ([learn more][aks-firewall-requirements])
### Install the extension `aks-preview` Install the `aks-preview` extension in the AKS cluster to make sure you have the latest version of AKS extension before installing KEDA add-on. ```azurecli-- az extension add --upgrade --name aks-preview
+az extension add --upgrade --name aks-preview
``` ### Register the `AKS-KedaPreview` feature flag
az aks update \
--disable-keda ```
-### Enabling add-on on clusters with self-managed open-source KEDA installations
-
-While Kubernetes only allows one metric server to be installed, you can in theory install KEDA multiple times. However, it isn't recommended given only one installation will work.
-
-When the KEDA add-on is installed in an AKS cluster, the previous installation of open-source KEDA will be overridden and the add-on will take over.
-
-This means that the customization and configuration of the self-installed KEDA deployment will get lost and no longer be applied.
-
-While there's a possibility that the existing autoscaling will keep on working, there's a risk given it will be configured differently and won't support features such as managed identity.
-
-It's recommended to uninstall existing KEDA installations before enabling the KEDA add-on given the installation will succeed without any error.
-
-Following error will be thrown in the operator logs but the installation of KEDA add-on will be completed.
-
-Error logged in now-suppressed non-participating KEDA operator pod:
-the error logged inside the already installed KEDA operator logs.
-E0520 11:51:24.868081 1 leaderelection.go:330] error retrieving resource lock default/operator.keda.sh: config maps "operator.keda.sh" is forbidden: User "system:serviceaccount:default:keda-operator" can't get resource "config maps" in API group "" in the namespace "default"
- ## Next steps This article showed you how to install the KEDA add-on on an AKS cluster using Azure CLI. The steps to verify that KEDA add-on is installed and running are included. With the KEDA add-on installed on your cluster, you can [deploy a sample application][keda-sample] to start scaling apps.
+You can troubleshoot KEDA add-on problems in [this article][keda-troubleshoot].
+ [az-aks-create]: /cli/azure/aks#az-aks-create [az aks install-cli]: /cli/azure/aks#az-aks-install-cli [az aks get-credentials]: /cli/azure/aks#az-aks-get-credentials [az aks update]: /cli/azure/aks#az-aks-update [az-group-delete]: /cli/azure/group#az-group-delete
+[keda-troubleshoot]: keda-troubleshoot.md
+[aks-firewall-requirements]: limit-egress-traffic.md#azure-global-required-network-rules
[kubectl]: https://kubernetes.io/docs/user-guide/kubectl [keda]: https://keda.sh/
aks Keda Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-integrations.md
However, these external scalers aren't supported as part of the add-on and rely
## Next steps * [Enable the KEDA add-on with an ARM template][keda-arm]
+* [Enable the KEDA add-on with the Azure CLI][keda-cli]
+* [Troubleshoot KEDA add-on problems][keda-troubleshoot]
* [Autoscale a .NET Core worker processing Azure Service Bus Queue message][keda-sample] <!-- LINKS - internal --> [aks-support-policy]: support-policies.md
+[keda-cli]: keda-deploy-add-on-cli.md
[keda-arm]: keda-deploy-add-on-arm.md
+[keda-troubleshoot]: keda-troubleshoot.md
<!-- LINKS - external --> [keda-scalers]: https://keda.sh/docs/latest/scalers/
aks Keda Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-troubleshoot.md
+
+ Title: Troubleshooting Kubernetes Event-driven Autoscaling (KEDA) add-on
+description: How to troubleshoot Kubernetes Event-driven Autoscaling add-on
++ Last updated : 8/26/2021+++
+# Kubernetes Event-driven Autoscaling (KEDA) AKS add-on Troubleshooting Guides
+
+When you deploy the KEDA AKS add-on, you might experience problems associated with the configuration of the application autoscaler.
+
+The following guide helps you troubleshoot errors and resolve common problems with the add-on, and supplements the official KEDA [FAQ][keda-faq] and [troubleshooting guide][keda-troubleshooting].
+
+## Verifying and Troubleshooting KEDA components
+
+### Check available KEDA version
+
+You can check the available KEDA version by using the `kubectl` command:
+
+```azurecli-interactive
+kubectl get crd/scaledobjects.keda.sh -o custom-columns='APP:.metadata.labels.app\.kubernetes\.io/version'
+```
+
+An overview will be provided with the installed KEDA version:
+
+```Output
+APP
+2.7.0
+```
+
+### Ensuring the cluster firewall is configured correctly
+
+KEDA might fail to scale applications because it can't start up.
+
+When checking the operator logs, you might find errors similar to the following:
+
+```output
+1.6545953013458195e+09 ERROR Failed to get API Group-Resources {"error": "Get \"https://10.0.0.1:443/api?timeout=32s\": EOF"}
+sigs.k8s.io/controller-runtime/pkg/cluster.New
+/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/cluster/cluster.go:160
+sigs.k8s.io/controller-runtime/pkg/manager.New
+/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/manager/manager.go:313
+main.main
+/workspace/main.go:87
+runtime.main
+/usr/local/go/src/runtime/proc.go:255
+1.6545953013459463e+09 ERROR setup unable to start manager {"error": "Get \"https://10.0.0.1:443/api?timeout=32s\": EOF"}
+main.main
+/workspace/main.go:97
+runtime.main
+/usr/local/go/src/runtime/proc.go:255
+```
+
+Meanwhile, in the metrics server logs, you might notice that it isn't able to start up:
+
+```output
+I0607 09:53:05.297924 1 main.go:147] keda_metrics_adapter "msg"="KEDA Version: 2.7.1"
+I0607 09:53:05.297979 1 main.go:148] keda_metrics_adapter "msg"="KEDA Commit: "
+I0607 09:53:05.297996 1 main.go:149] keda_metrics_adapter "msg"="Go Version: go1.17.9"
+I0607 09:53:05.298006 1 main.go:150] keda_metrics_adapter "msg"="Go OS/Arch: linux/amd64"
+E0607 09:53:15.344324 1 logr.go:279] keda_metrics_adapter "msg"="Failed to get API Group-Resources" "error"="Get \"https://10.0.0.1:443/api?timeout=32s\": EOF"
+E0607 09:53:15.344360 1 main.go:104] keda_metrics_adapter "msg"="failed to setup manager" "error"="Get \"https://10.0.0.1:443/api?timeout=32s\": EOF"
+E0607 09:53:15.344378 1 main.go:209] keda_metrics_adapter "msg"="making provider" "error"="Get \"https://10.0.0.1:443/api?timeout=32s\": EOF"
+E0607 09:53:15.344399 1 main.go:168] keda_metrics_adapter "msg"="unable to run external metrics adapter" "error"="Get \"https://10.0.0.1:443/api?timeout=32s\": EOF"
+```
+
+This most likely means that the KEDA add-on isn't able to start up due to a misconfigured firewall.
+
+To make sure KEDA runs correctly, configure the firewall to meet [the requirements][aks-firewall-requirements].
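To inspect these logs on your own cluster, commands along the following lines can help; the deployment names assume the add-on defaults in the `kube-system` namespace.

```azurecli-interactive
# Check the KEDA operator and metrics server logs (names assume the add-on defaults)
kubectl logs deployment/keda-operator -n kube-system --tail=50
kubectl logs deployment/keda-operator-metrics-apiserver -n kube-system --tail=50
```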
+
+### Enabling add-on on clusters with self-managed open-source KEDA installations
+
+While Kubernetes only allows one metric server to be installed, you can in theory install KEDA multiple times. However, this isn't recommended because only one installation will work.
+
+When the KEDA add-on is installed in an AKS cluster, the previous installation of open-source KEDA will be overridden and the add-on will take over.
+
+This means that the customization and configuration of the self-installed KEDA deployment will get lost and no longer be applied.
+
+While the existing autoscaling might keep working, this introduces risk because it will be configured differently and won't support features such as managed identity.
+
+It's recommended to uninstall any existing KEDA installation before enabling the KEDA add-on, because the add-on installation will succeed without reporting any error even if KEDA is already installed.
+
+In order to determine which metrics adapter is being used by KEDA, use the `kubectl` command:
+
+```azurecli-interactive
+kubectl get APIService/v1beta1.external.metrics.k8s.io -o custom-columns='NAME:.spec.service.name,NAMESPACE:.spec.service.namespace'
+```
+
+An overview will be provided showing the service and namespace that Kubernetes will use to get metrics:
+
+```Output
+NAME NAMESPACE
+keda-operator-metrics-apiserver kube-system
+```
+
+> [!WARNING]
+> If the namespace is not `kube-system`, then the AKS add-on is being ignored and another metric server is being used.
+
+[aks-firewall-requirements]: limit-egress-traffic.md#azure-global-required-network-rules
+[keda-troubleshooting]: https://keda.sh/docs/latest/troubleshooting/
+[keda-faq]: https://keda.sh/docs/latest/faq/
aks Node Pool Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-pool-snapshot.md
az aks snapshot create --name MySnapshot --resource-group MyResourceGroup --node
First you'll need the resource ID from the snapshot that was previously created, which you can get from the command below: ```azurecli-interactive
-SNAPSHOT_ID=$(az aks snapshot show --name MySnapshot --resource-group myResourceGroup --query id -o tsv)
+SNAPSHOT_ID=$(az aks nodepool snapshot show --name MySnapshot --resource-group myResourceGroup --query id -o tsv)
``` Now, we can use the command below to add a new node pool based off of this snapshot.
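The add command is unchanged context and not reproduced here; a typical invocation (resource group, cluster, pool name, and node count are illustrative) looks like this sketch.

```azurecli-interactive
az aks nodepool add --resource-group myResourceGroup --cluster-name myAKSCluster --name newnodepool --node-count 3 --snapshot-id $SNAPSHOT_ID
```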
You can upgrade a node pool to a snapshot configuration so long as the snapshot
First you'll need the resource ID from the snapshot that was previously created, which you can get from the command below: ```azurecli-interactive
-SNAPSHOT_ID=$(az aks snapshot show --name MySnapshot --resource-group myResourceGroup --query id -o tsv)
+SNAPSHOT_ID=$(az aks nodepool snapshot show --name MySnapshot --resource-group myResourceGroup --query id -o tsv)
``` Now, we can use this command to upgrade this node pool to this snapshot configuration.
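The upgrade command is likewise unchanged context; a hedged sketch of the usual form (cluster and pool names illustrative) is:

```azurecli-interactive
az aks nodepool upgrade --resource-group myResourceGroup --cluster-name myAKSCluster --name nodepool1 --snapshot-id $SNAPSHOT_ID --no-wait
```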
When you create a cluster from a snapshot, the cluster original system pool will
First you'll need the resource ID from the snapshot that was previously created, which you can get from the command below: ```azurecli-interactive
-SNAPSHOT_ID=$(az aks snapshot show --name MySnapshot --resource-group myResourceGroup --query id -o tsv)
+SNAPSHOT_ID=$(az aks nodepool snapshot show --name MySnapshot --resource-group myResourceGroup --query id -o tsv)
``` Now, we can use this command to create this cluster off of the snapshot configuration.
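The create command is also unchanged context; a minimal sketch (names and counts illustrative) is:

```azurecli-interactive
az aks create --resource-group myResourceGroup --name myAKSCluster2 --nodepool-name nodepool1 --node-count 3 --snapshot-id $SNAPSHOT_ID
```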
api-management Api Management Advanced Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-advanced-policies.md
The `retry` policy executes its child policies once and then retries their execu
### Example
-In the following example, request forwarding is retried up to ten times using an exponential retry algorithm. Since `first-fast-retry` is set to false, all retry attempts are subject to the exponential retry algorithm.
+In the following example, request forwarding is retried up to ten times using an exponential retry algorithm. Since `first-fast-retry` is set to false, all retry attempts are subject to exponentially increasing retry wait times (in this example, approximately 10 seconds, 20 seconds, 40 seconds, ...), up to a maximum wait of `max-interval`.
```xml
In the following example, sending a request to a URL other than the defined back
| delta | A positive number in seconds specifying the wait interval increment. It is used to implement the linear and exponential retry algorithms. | No | N/A |
| first-fast-retry | If set to `true`, the first retry attempt is performed immediately. | No | `false` |
-> [!NOTE]
-> When only the `interval` is specified, **fixed** interval retries are performed.
-> When only the `interval` and `delta` are specified, a **linear** interval retry algorithm is used, where wait time between retries is calculated according the following formula - `interval + (count - 1)*delta`.
-> When the `interval`, `max-interval` and `delta` are specified, **exponential** interval retry algorithm is applied, where the wait time between the retries is growing exponentially from the value of `interval` to the value `max-interval` according to the following formula - `min(interval + (2^count - 1) * random(delta * 0.8, delta * 1.2), max-interval)`.
+#### Retry wait times
+
+* When only the `interval` is specified, **fixed** interval retries are performed.
+* When only the `interval` and `delta` are specified, a **linear** interval retry algorithm is used. The wait time between retries increases according to the following formula: `interval + (count - 1)*delta`.
+* When the `interval`, `max-interval` and `delta` are specified, an **exponential** interval retry algorithm is applied. The wait time between the retries increases exponentially according to the following formula: `interval + (2^count - 1) * random(delta * 0.8, delta * 1.2)`, up to a maximum interval set by `max-interval`.
+
+ For example, when `interval` and `delta` are both set to 10 seconds, and `max-interval` is 100 seconds, the approximate wait time between retries increases as follows: 10 seconds, 20 seconds, 40 seconds, 80 seconds, with 100 seconds wait time used for remaining retries.
### Usage
app-service App Service Asp Net Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-asp-net-migration.md
# .NET migration cases for Azure App Service
-Azure App Service provides easy-to-use tools to quickly discover on-premise .NET web apps, assess for readiness, and migrate both the content & supported configurations to App Service.
+Azure App Service provides easy-to-use tools to quickly discover on-premises .NET web apps, assess for readiness, and migrate both the content & supported configurations to App Service.
These tools are developed to support different kinds of scenarios, focused on discovery, assessment, and migration. The following is a list of .NET migration tools and use cases.
The [app containerization tool](https://azure.microsoft.com/blog/accelerate-appl
## Next steps
-[Migrate an on-premise web application to Azure App Service](/learn/modules/migrate-app-service-migration-assistant/)
+[Migrate an on-premises web application to Azure App Service](/learn/modules/migrate-app-service-migration-assistant/)
app-service App Service Java Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-java-migration.md
# Java migration resources for Azure App Service
-Azure App Service provides tools to discover web apps deployed to on-premise web servers. You can assess these apps for readiness, then migrate them to App Service. Both the web app content and supported configuration can be migrated to App Service. These tools are developed to support a wide variety of scenarios focused on discovery, assessment, and migration.
+Azure App Service provides tools to discover web apps deployed to on-premises web servers. You can assess these apps for readiness, then migrate them to App Service. Both the web app content and supported configuration can be migrated to App Service. These tools are developed to support a wide variety of scenarios focused on discovery, assessment, and migration.
## Java Tomcat migration (Linux)
-[Download the assistant](https://azure.microsoft.com/services/app-service/migration-assistant/) to migrate a Java app running on Apache Tomcat web server. You can also use Azure Container Registry to migrate on-premise Linux Docker containers to App Service.
+[Download the assistant](https://azure.microsoft.com/services/app-service/migration-assistant/) to migrate a Java app running on Apache Tomcat web server. You can also use Azure Container Registry to migrate on-premises Linux Docker containers to App Service.
| Resources | |--|
applied-ai-services Form Recognizer Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-configuration.md
> > Form Recognizer containers are in gated preview. To use them, you must submit an [online request](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUNlpBU1lFSjJUMFhKNzVHUUVLN1NIOEZETiQlQCN0PWcu), and have it approved. For more information, See [**Request approval to run container**](form-recognizer-container-install-run.md#request-approval-to-run-the-container).
-With Azure Form Recognizer containers, you can build an application architecture that's optimized to take advantage of both robust cloud capabilities and edge locality. Containers provide a minimalist, isolated environment that can be easily deployed on-premise and in the cloud. In this article, you'll learn to configure the Form Recognizer container run-time environment by using the `docker compose` command arguments. Form Recognizer features are supported by six Form Recognizer feature containersΓÇö**Layout**, **Business Card**,**ID Document**, **Receipt**, **Invoice**, **Custom**. These containers have both required and optional settings. For a few examples, see the [Example docker-compose.yml file](#example-docker-composeyml-file) section.
+With Azure Form Recognizer containers, you can build an application architecture that's optimized to take advantage of both robust cloud capabilities and edge locality. Containers provide a minimalist, isolated environment that can be easily deployed on-premises and in the cloud. In this article, you'll learn to configure the Form Recognizer container run-time environment by using the `docker compose` command arguments. Form Recognizer features are supported by six Form Recognizer feature containersΓÇö**Layout**, **Business Card**,**ID Document**, **Receipt**, **Invoice**, **Custom**. These containers have both required and optional settings. For a few examples, see the [Example docker-compose.yml file](#example-docker-composeyml-file) section.
## Configuration settings
container_name: azure-cognitive-service-receipt image: cognitiveservicespreview.azurecr.io/microsoft/cognitive-services-form-recognizer-receipt:2.1 environment:
- - EULA=accept
+ - EULA=accept
- billing={FORM_RECOGNIZER_ENDPOINT_URI} - key={FORM_RECOGNIZER_KEY} - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
container_name: azure-cognitive-service-read image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2 environment:
- - EULA=accept
+ - EULA=accept
- billing={COMPUTER_VISION_ENDPOINT_URI} - key={COMPUTER_VISION_KEY} networks:
applied-ai-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md
The following host machine requirements are applicable to **train and analyze**
| Custom API| 0.5 cores, 0.5-GB memory| 1 core, 1-GB memory |
|Custom Supervised | 4 cores, 2-GB memory | 8 cores, 4-GB memory|
-If you're only making analyze calls, the host machine requirements are as follows:
-
-| Container | Minimum | Recommended |
-|--||-|
-|Custom Supervised (Analyze) | 1 core, 0.5-GB | 2 cores, 1-GB memory |
- * Each core must be at least 2.6 gigahertz (GHz) or faster. * Core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker compose` or `docker run` command.
applied-ai-services Try V3 Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-rest-api.md
After you've called the [**Analyze document**](https://westus.dev.cognitive.micr
#### GET request ```bash
-<<<<<<< HEAD
-curl -v -X GET "{endpoint}/formrecognizer/documentModels/{model name}/analyzeResults/{resultId}?api-version=2022-06-30-preview" -H "Ocp-Apim-Subscription-Key: {key}"
-=======
curl -v -X GET "{endpoint}/formrecognizer/documentModels/{modelID}/analyzeResults/{resultId}?api-version=2022-06-30-preview" -H "Ocp-Apim-Subscription-Key: {key}"
->>>>>>> resolve-merge-conflict
``` #### Examine the response
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
Title: "Azure Arc-enabled data services validation"
Previously updated : 09/30/2021 Last updated : 06/14/2022
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version
|--|--|--|--|--|
-| Dell EMC PowerFlex |1.19.7|v1.0.0_2021-07-30|15.0.2148.140 | Not validated |
-| PowerFlex version 3.6 |1.19.7|v1.0.0_2021-07-30|15.0.2148.140 | Not validated |
-| PowerFlex CSI version 1.4 |1.19.7|v1.0.0_2021-07-30|15.0.2148.140 | Not validated |
+| Dell EMC PowerFlex |1.21.5|v1.4.1_2022-03-08|15.0.2255.119 | Not validated |
+| PowerFlex version 3.6 |1.21.5|v1.4.1_2022-03-08|15.0.2255.119 | Not validated |
+| PowerFlex CSI version 1.4 |1.21.5|v1.4.1_2022-03-08 | Not validated |
| PowerStore X|1.20.6|v1.0.0_2021-07-30|15.0.2148.140 |postgres 12.3 (Ubuntu 12.3-1) |
-| Powerstore T|1.20.6|v1.0.0_2021-07-30|15.0.2148.140 |postgres 12.3 (Ubuntu 12.3-1)|
+| PowerStore T|1.20.6|v1.0.0_2021-07-30|15.0.2148.140 |postgres 12.3 (Ubuntu 12.3-1)|
+
+### HPE
+
+|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version
+|--|--|--|--|--|
+|HPE|1.20.0|v1.6.0_2022-05-02|16.0.41.7337|12.3 (Ubuntu 12.3-1)
+
+### Kublr
+
+|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version
+|--|--|--|--|--|
+|Kublr |1.22.0 / 1.20.12 |v1.1.0_2021-11-02 |15.0.2195.191 |PostgreSQL 12.3 (Ubuntu 12.3-1) |
+
+### Lenovo
+
+|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version
+|--|--|--|--|--|
+|Lenovo ThinkAgile MX3520 |AKS on Azure Stack HCI 21H2|v1.0.0_2021-07-30 |15.0.2148.140|Not validated|
### Nutanix |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-| Karbon 2.2<br/>AOS: 5.19.1.5<br/>AHV:20201105.1021<br/>PC: Version pc.2021.3.02<br/> | 1.19.8-0 | v1.0.0_2021-07-30 | 15.0.2148.140|postgres 12.3 (Ubuntu 12.3-1)|
+| Karbon 2.2<br/>AOS: 5.19.1.5<br/>AHV: 20201105.1021<br/>PC: Version pc.2021.3.02<br/> | 1.19.8-0 | v1.0.0_2021-07-30 | 15.0.2148.140|postgres 12.3 (Ubuntu 12.3-1)|
### Platform 9 |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-| Platform9 Managed Kubernetes v5.3.0 | 1.20.5 | v1.0.0_2021-07-30| 15.0.2148.140 | Not validated |
+| Platform9 Managed Kubernetes v5.3.0 | 1.20.5 | v1.0.0_2021-07-30| 15.0.2195.191 | PostgreSQL 12.3 (Ubuntu 12.3-1) |
### PureStorage |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-| Portworx Enterprise 2.7 | 1.20.7 | v1.0.0_2021-07-30 | 15.0.2148.140 | Not validated |
+| Portworx Enterprise 2.7 1.22.5 | 1.20.7 | v1.1.0_2021-11-02 | 15.0.2148.140 | Not validated |
### Red Hat
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-| TKGm v1.3.1 | 1.20.5 | v1.0.0_2021-07-30 | 15.0.2148.140|postgres 12.3 (Ubuntu 12.3-1)|
+| TKGm v1.5.1 | 1.20.5 | v1.4.1_2022-03-08 | 15.0.2255.119|postgres 12.3 (Ubuntu 12.3-1)|
+
+### WindRiver
+
+|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version
+|--|--|--|--|--|
+|WindRiver| 1.18.1|v1.1.0_2021-11-02 |15.0.2195.191|postgres 12.3 (Ubuntu 12.3-1) |
## Data services validation process
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/overview.md
keywords: "Kubernetes, Arc, Azure, containers"
# What is Azure Arc-enabled Kubernetes?
-Azure Arc-enabled Kubernetes allows you to attach and configure Kubernetes clusters running anywhere. You can connect your clusters running on other public cloud providers (such as GCP or AWS) or clusters running on your on-premise data center (such as VMware vSphere or Azure Stack HCI) to Azure Arc.
+Azure Arc-enabled Kubernetes allows you to attach and configure Kubernetes clusters running anywhere. You can connect your clusters running on other public cloud providers (such as GCP or AWS) or clusters running on your on-premises data center (such as VMware vSphere or Azure Stack HCI) to Azure Arc.
When you connect a Kubernetes cluster to Azure Arc, it will:
Azure Arc-enabled Kubernetes supports the following scenarios for connected clus
* [Connect Kubernetes](quickstart-connect-cluster.md) running outside of Azure for inventory, grouping, and tagging.
-* Deploy applications and apply configuration using [GitOps-based configuration management](tutorial-use-gitops-connected-cluster.md).
+* Deploy applications and apply configuration using [GitOps-based configuration management](tutorial-use-gitops-connected-cluster.md).
* View and monitor your clusters using [Azure Monitor for containers](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json).
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
Title: "Troubleshoot common Azure Arc-enabled Kubernetes issues"
# Previously updated : 06/07/2022 Last updated : 06/13/2022 description: "Learn how to resolve common issues with Azure Arc-enabled Kubernetes clusters and GitOps." keywords: "Kubernetes, Arc, Azure, containers, GitOps, Flux"
spec:
app.kubernetes.io/name: flux-extension ```
+### Flux v2 - `microsoft.flux` extension installation CPU and memory limits
+
+The controllers installed in your Kubernetes cluster with the Microsoft.Flux extension require the following CPU and memory resource limits to properly schedule on Kubernetes cluster nodes.
+
+| Container Name | CPU limit | Memory limit |
+| -- | -- | -- |
+| fluxconfig-agent | 50m | 150Mi |
+| fluxconfig-controller | 100m | 150Mi |
+| fluent-bit | 20m | 150Mi |
+| helm-controller | 1000m | 1Gi |
+| source-controller | 1000m | 1Gi |
+| kustomize-controller | 1000m | 1Gi |
+| notification-controller | 1000m | 1Gi |
+| image-automation-controller | 1000m | 1Gi |
+| image-reflector-controller | 1000m | 1Gi |
+
+If you have enabled a custom or built-in Azure Gatekeeper Policy that limits the resources for containers on Kubernetes clusters, such as `Kubernetes cluster containers CPU and memory resource limits should not exceed the specified limits`, you will need to either ensure that the resource limits in the policy are greater than the limits shown above, or ensure that the `flux-system` namespace is part of the `excludedNamespaces` parameter in the policy assignment.
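To confirm the limits actually applied on your cluster, or to verify that a policy isn't blocking the controllers, you can inspect the deployments in the `flux-system` namespace; the controller name used below assumes the extension defaults listed in the table above.

```azurecli-interactive
# List the extension's controllers and check the limits on one of them
kubectl get deployments -n flux-system
kubectl describe deployment source-controller -n flux-system
```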
++ ## Monitoring Azure Monitor for Containers requires its DaemonSet to run in privileged mode. To successfully set up a Canonical Charmed Kubernetes cluster for monitoring, run the following command:
azure-functions Durable Functions Storage Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-storage-providers.md
You can learn more about the technical details of the Netherite storage provider
## <a name="mssql"></a>Microsoft SQL Server (MSSQL) (preview)
-The Microsoft SQL Server (MSSQL) storage provider persists all state into a Microsoft SQL Server database. It's compatible with both on-premise and cloud-hosted deployments of SQL Server, including [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview).
+The Microsoft SQL Server (MSSQL) storage provider persists all state into a Microsoft SQL Server database. It's compatible with both on-premises and cloud-hosted deployments of SQL Server, including [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview).
The key benefits of the MSSQL storage provider include:
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
Connection string for storage account where the function app code and configurat
||| |WEBSITE_CONTENTAZUREFILECONNECTIONSTRING|`DefaultEndpointsProtocol=https;AccountName=...`|
-This setting is used for Consumption and Premium plan apps on both Windows and Linux. It's not used for Dedicated plan apps, which aren't dynamically scaled by Functions.
+This setting is required for Consumption and Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions.
Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
The file path to the function app code and configuration in an event-driven scal
||| |WEBSITE_CONTENTSHARE|`functionapp091999e2`|
-This setting is used for Consumption and Premium plan apps on both Windows and Linux. It's not used for Dedicated plan apps, which aren't dynamically scaled by Functions.
+This setting is required for Consumption and Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions.
Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
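To double-check that both settings are present on a Consumption or Premium plan app, you can list them with the Azure CLI; the app and resource group names below are placeholders.

```azurecli-interactive
az functionapp config appsettings list --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
  --query "[?name=='WEBSITE_CONTENTAZUREFILECONNECTIONSTRING' || name=='WEBSITE_CONTENTSHARE']"
```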
azure-functions Functions Identity Access Azure Sql With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-identity-access-azure-sql-with-managed-identity.md
description: Learn how to connect Azure SQL bindings through managed identity. Previously updated : 1/28/2022 Last updated : 6/13/2022
Enabling Azure AD authentication can be completed via the Azure portal, PowerShe
1. Find the object ID of the Azure AD user using the [`az ad user list`](/cli/azure/ad/user#az-ad-user-list) and replace *\<user-principal-name>*. The result is saved to a variable.
+ For Azure CLI 2.37.0 and newer:
+
+ ```azurecli-interactive
+ azureaduser=$(az ad user list --filter "userPrincipalName eq '<user-principal-name>'" --query [].id --output tsv)
+ ```
+
+ For older versions of Azure CLI:
+ ```azurecli-interactive azureaduser=$(az ad user list --filter "userPrincipalName eq '<user-principal-name>'" --query [].objectId --output tsv) ```
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-overview.md
# Azure Monitor agent overview
-The Azure Monitor agent (AMA) collects monitoring data from the guest operating system of [supported infrastructure](#supported-resource-types) and delivers it to Azure Monitor. This article provides an overview of the Azure Monitor agent and includes information on how to install it and how to configure data collection.
+The Azure Monitor agent (AMA) collects monitoring data from the guest operating system of [supported infrastructure](#supported-resource-types) and delivers it to Azure Monitor. This article provides an overview of the Azure Monitor agent and includes information on how to install it and how to configure data collection.
Here's an **introductory video** explaining all about this new agent, including a quick demo of how to set things up using the Azure portal: [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs) ## Relationship to other agents Eventually, the Azure Monitor agent will replace the following legacy monitoring agents that are currently used by Azure Monitor. - [Log Analytics agent](./log-analytics-agent.md): Sends data to a Log Analytics workspace and supports VM insights and monitoring solutions.-- [Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md): Sends data to Azure Monitor Metrics (Linux only).
+- [Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md): Sends data to Azure Monitor Metrics (Linux only).
- [Diagnostics extension](./diagnostics-extension-overview.md): Sends data to Azure Monitor Metrics (Windows only), Azure Event Hubs, and Azure Storage. **Currently**, the Azure Monitor agent consolidates features from the Telegraf agent and Log Analytics agent, with [a few limitations](#current-limitations).
-In future, it will also consolidate features from the Diagnostic extensions.
+In future, it will also consolidate features from the Diagnostic extensions.
In addition to consolidating this functionality into a single agent, the Azure Monitor agent provides the following benefits over the existing agents: -- **Cost savings:**
+- **Cost savings:**
- Granular targeting via [Data Collection Rules](../essentials/data-collection-rule-overview.md) to collect specific data types from specific machines, as compared to the "all or nothing" mode that Log Analytics agent supports - Use XPath queries to filter Windows events that get collected. This helps further reduce ingestion and storage costs. - **Simplified management of data collection:** Send data from Windows and Linux VMs to multiple Log Analytics workspaces (i.e. "multi-homing") and/or other [supported destinations](#data-sources-and-destinations). Additionally, every action across the data collection lifecycle, from onboarding to deployment to updates, is significantly easier, scalable, and centralized (in Azure) using data collection rules
The Azure Monitor agent uses [data collection rules](../essentials/data-collecti
## Should I switch to the Azure Monitor agent? To start transitioning your VMs off the current agents to the new agent, consider the following factors: -- **Environment requirements:** The Azure Monitor agent supports [these operating systems](./agents-overview.md#supported-operating-systems) today. Support for future operating system versions, environment support, and networking requirements will only be provided in this new agent. If the Azure Monitor agent supports your current environment, start transitioning to it.
+- **Environment requirements:** The Azure Monitor agent supports [these operating systems](./agents-overview.md#supported-operating-systems) today. Support for future operating system versions, environment support, and networking requirements will only be provided in this new agent. If the Azure Monitor agent supports your current environment, start transitioning to it.
-- **Current and new feature requirements:** The Azure Monitor agent introduces several new capabilities, such as filtering, scoping, and multi-homing. But it isn't at parity yet with the current agents for other functionality. View [current limitations](#current-limitations) and [supported solutions](#supported-services-and-features).
+- **Current and new feature requirements:** The Azure Monitor agent introduces several new capabilities, such as filtering, scoping, and multi-homing. But it isn't at parity yet with the current agents for other functionality. View [current limitations](#current-limitations) and [supported solutions](#supported-services-and-features).
+
+ That said, most new capabilities in Azure Monitor will be made available only with the Azure Monitor agent. Review whether the Azure Monitor agent has the features you require and if there are some features that you can temporarily do without to get other important features in the new agent.
- That said, most new capabilities in Azure Monitor will be made available only with the Azure Monitor agent. Review whether the Azure Monitor agent has the features you require and if there are some features that you can temporarily do without to get other important features in the new agent.
-
If the Azure Monitor agent has all the core capabilities you require, start transitioning to it. If there are critical features that you require, continue with the current agent until the Azure Monitor agent reaches parity.-- **Tolerance for rework:** If you're setting up a new environment with resources such as deployment scripts and onboarding templates, assess the effort involved. If the setup will take a significant amount of work, consider setting up your new environment with the new agent as it's now generally available.
-
+- **Tolerance for rework:** If you're setting up a new environment with resources such as deployment scripts and onboarding templates, assess the effort involved. If the setup will take a significant amount of work, consider setting up your new environment with the new agent as it's now generally available.
+ Azure Monitor's Log Analytics agent is retiring on 31 August 2024. The current agents will be supported until the retirement date.

## Coexistence with other agents

The Azure Monitor agent can coexist (run side by side on the same machine) with the legacy Log Analytics agents so that you can continue to use their existing functionality during evaluation or migration. While this allows you to begin the transition despite the limitations, review the following points carefully:

- Be careful in collecting duplicate data because it could skew query results and affect downstream features like alerts, dashboards, or workbooks. For example, VM insights uses the Log Analytics agent to send performance data to a Log Analytics workspace. You might also have configured the workspace to collect Windows events and Syslog events from agents. If you install the Azure Monitor agent and create a data collection rule for these same events and performance data, it will result in duplicate data. As such, ensure you're not collecting the same data from both agents. If you are, ensure they're **collecting from different machines** or **going to separate destinations**.
- Besides data duplication, this would also generate more charges for data ingestion and retention.
-- Running two telemetry agents on the same machine would result in double the resource consumption, including but not limited to CPU, memory, storage space and network bandwidth.
+- Running two telemetry agents on the same machine would result in double the resource consumption, including but not limited to CPU, memory, storage space and network bandwidth.
> [!NOTE]
> When using both agents during evaluation or migration, you can use the **'Category'** column of the [Heartbeat](/azure/azure-monitor/reference/tables/Heartbeat) table in your Log Analytics workspace, and filter for 'Azure Monitor Agent'.
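For example, the following sketch (assuming the Az.OperationalInsights PowerShell module and a placeholder workspace ID) lists the machines that are already heartbeating through the new agent:

```PowerShell
# List computers whose heartbeat comes from the Azure Monitor agent (sketch).
$query = 'Heartbeat | where Category == "Azure Monitor Agent" | distinct Computer'
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId "<log-analytics-workspace-id>" -Query $query
$result.Results
```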
The Azure Monitor agent can coexist (run side by side on the same machine) with
| Resource type | Installation method | Additional information |
|:|:|:|
| Virtual machines, scale sets | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) | Installs the agent using Azure extension framework |
-| On-premise servers (Arc-enabled servers) | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (after installing [Arc agent](../../azure-arc/servers/deployment-options.md)) | Installs the agent using Azure extension framework, provided for on-premise by first installing [Arc agent](../../azure-arc/servers/deployment-options.md) |
+| On-premises servers (Arc-enabled servers) | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (after installing [Arc agent](../../azure-arc/servers/deployment-options.md)) | Installs the agent using Azure extension framework, provided for on-premises by first installing [Arc agent](../../azure-arc/servers/deployment-options.md) |
| Windows 10, 11 desktops, workstations | [Client installer (preview)](./azure-monitor-agent-windows-client.md) | Installs the agent using a Windows MSI installer |
| Windows 10, 11 laptops | [Client installer (preview)](./azure-monitor-agent-windows-client.md) | Installs the agent using a Windows MSI installer. The installer works on laptops, but the agent is **not optimized yet** for battery or network consumption |
The Azure Monitor agent sends data to Azure Monitor Metrics (preview) or a Log A
| Syslog | Log Analytics workspace - [Syslog](/azure/azure-monitor/reference/tables/syslog)<sup>2</sup> table | Information sent to the Linux event logging system |
| Text logs | Log Analytics workspace - custom table | Events sent to log file on agent machine. |
-<sup>1</sup> [Click here](../essentials/metrics-custom-overview.md#quotas-and-limits) to review other limitations of using Azure Monitor Metrics. On Linux, using Azure Monitor Metrics as the only destination is supported in v1.10.9.0 or higher.
+<sup>1</sup> [Click here](../essentials/metrics-custom-overview.md#quotas-and-limits) to review other limitations of using Azure Monitor Metrics. On Linux, using Azure Monitor Metrics as the only destination is supported in v1.10.9.0 or higher.
<sup>2</sup> Azure Monitor Linux Agent v1.15.2 or higher supports syslog RFC formats including **Cisco Meraki, Cisco ASA, Cisco FTD, Sophos XG, Juniper Networks, Corelight Zeek, CipherTrust, NXLog, McAfee and CEF (Common Event Format)**.

## Supported services and features
-The following table shows the current support for the Azure Monitor agent with other Azure services.
+The following table shows the current support for the Azure Monitor agent with other Azure services.
| Azure service | Current support | More information |
|:|:|:|
The Azure Monitor agent supports Azure service tags (both *AzureMonitor* and *Az
### Firewall requirements

| Cloud |Endpoint |Purpose |Port |Direction |Bypass HTTPS inspection|
|--|--|--|--|--|--|
-| Azure Commercial |global.handler.control.monitor.azure.com |Access control service|Port 443 |Outbound|Yes |
-| Azure Commercial |`<virtual-machine-region-name>`.handler.control.monitor.azure.com |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes |
-| Azure Commercial |`<log-analytics-workspace-id>`.ods.opinsights.azure.com |Ingest logs data |Port 443 |Outbound|Yes |
-| Azure Government |global.handler.control.monitor.azure.us |Access control service|Port 443 |Outbound|Yes |
-| Azure Government |`<virtual-machine-region-name>`.handler.control.monitor.azure.us |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes |
-| Azure Government |`<log-analytics-workspace-id>`.ods.opinsights.azure.us |Ingest logs data |Port 443 |Outbound|Yes |
-| Azure China |global.handler.control.monitor.azure.cn |Access control service|Port 443 |Outbound|Yes |
-| Azure China |`<virtual-machine-region-name>`.handler.control.monitor.azure.cn |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes |
-| Azure China |`<log-analytics-workspace-id>`.ods.opinsights.azure.cn |Ingest logs data |Port 443 |Outbound|Yes |
+| Azure Commercial |global.handler.control.monitor.azure.com |Access control service|Port 443 |Outbound|Yes |
+| Azure Commercial |`<virtual-machine-region-name>`.handler.control.monitor.azure.com |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes |
+| Azure Commercial |`<log-analytics-workspace-id>`.ods.opinsights.azure.com |Ingest logs data |Port 443 |Outbound|Yes |
+| Azure Government |global.handler.control.monitor.azure.us |Access control service|Port 443 |Outbound|Yes |
+| Azure Government |`<virtual-machine-region-name>`.handler.control.monitor.azure.us |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes |
+| Azure Government |`<log-analytics-workspace-id>`.ods.opinsights.azure.us |Ingest logs data |Port 443 |Outbound|Yes |
+| Azure China |global.handler.control.monitor.azure.cn |Access control service|Port 443 |Outbound|Yes |
+| Azure China |`<virtual-machine-region-name>`.handler.control.monitor.azure.cn |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes |
+| Azure China |`<log-analytics-workspace-id>`.ods.opinsights.azure.cn |Ingest logs data |Port 443 |Outbound|Yes |
If using private links on the agent, you must also add the [DCE endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint)
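As a quick connectivity sanity check from a Windows machine, you can probe the public-cloud endpoints listed above on port 443 (a sketch; substitute your region and workspace ID, and swap in the `.us` or `.cn` domains for the other clouds):

```PowerShell
# Verify outbound reachability of the agent endpoints over port 443 (sketch).
Test-NetConnection -ComputerName "global.handler.control.monitor.azure.com" -Port 443
Test-NetConnection -ComputerName "<virtual-machine-region-name>.handler.control.monitor.azure.com" -Port 443
Test-NetConnection -ComputerName "<log-analytics-workspace-id>.ods.opinsights.azure.com" -Port 443
```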
New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType Azur
### Log Analytics gateway configuration
-1. Follow the instructions above to configure proxy settings on the agent and provide the IP address and port number corresponding to the gateway server. If you have deployed multiple gateway servers behind a load balancer, the agent proxy configuration is the virtual IP address of the load balancer instead.
-2. Add the **configuration endpoint URL** to fetch data collection rules to the allowlist for the gateway
- `Add-OMSGatewayAllowedHost -Host global.handler.control.monitor.azure.com`
- `Add-OMSGatewayAllowedHost -Host <gateway-server-region-name>.handler.control.monitor.azure.com`
- (If using private links on the agent, you must also add the [dce endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint))
-3. Add the **data ingestion endpoint URL** to the allowlist for the gateway
- `Add-OMSGatewayAllowedHost -Host <log-analytics-workspace-id>.ods.opinsights.azure.com`
-3. Restart the **OMS Gateway** service to apply the changes
- `Stop-Service -Name <gateway-name>`
- `Start-Service -Name <gateway-name>`
+1. Follow the instructions above to configure proxy settings on the agent and provide the IP address and port number corresponding to the gateway server. If you have deployed multiple gateway servers behind a load balancer, the agent proxy configuration is the virtual IP address of the load balancer instead.
+2. Add the **configuration endpoint URL** to fetch data collection rules to the allowlist for the gateway
+ `Add-OMSGatewayAllowedHost -Host global.handler.control.monitor.azure.com`
+ `Add-OMSGatewayAllowedHost -Host <gateway-server-region-name>.handler.control.monitor.azure.com`
+ (If using private links on the agent, you must also add the [dce endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint))
+3. Add the **data ingestion endpoint URL** to the allowlist for the gateway
+ `Add-OMSGatewayAllowedHost -Host <log-analytics-workspace-id>.ods.opinsights.azure.com`
+4. Restart the **OMS Gateway** service to apply the changes
+ `Stop-Service -Name <gateway-name>`
+ `Start-Service -Name <gateway-name>`
### Private link configuration
-To configure the agent to use private links for network communications with Azure Monitor, follow instructions to [enable network isolation](./azure-monitor-agent-data-collection-endpoint.md#enable-network-isolation-for-the-azure-monitor-agent) using [data collection endpoints](azure-monitor-agent-data-collection-endpoint.md).
+To configure the agent to use private links for network communications with Azure Monitor, follow instructions to [enable network isolation](./azure-monitor-agent-data-collection-endpoint.md#enable-network-isolation-for-the-azure-monitor-agent) using [data collection endpoints](azure-monitor-agent-data-collection-endpoint.md).
## Next steps
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
With the new client installer available in this preview, you can now collect tel
Both the [generally available extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) and this installer use Data Collection rules to configure the **same underlying agent**. ### Comparison with virtual machine extension
-Here is a comparison between client installer and VM extension for Azure Monitor agent. It also highlights which parts are in preview:
+Here is a comparison between client installer and VM extension for Azure Monitor agent. It also highlights which parts are in preview:
-| Functional component | For VMs/servers via extension | For clients via installer|
+| Functional component | For VMs/servers via extension | For clients via installer|
|:|:|:|
| Agent installation method | Via VM extension | Via client installer <sup>preview</sup> |
| Agent installed | Azure Monitor Agent | Same |
Here is a comparison between client installer and VM extension for Azure Monitor
| Associating config rules to agents | DCRs associate directly to individual VM resources | DCRs associate to Monitored Object (MO), which maps to all devices within the AAD tenant <sup>preview</sup> |
| Data upload to Log Analytics | Via Log Analytics endpoints | Same |
| Feature support | All features documented [here](./azure-monitor-agent-overview.md) | Features dependent on AMA agent extension that don't require additional extensions. This includes support for Sentinel Windows Event filtering |
-| [Networking options](./azure-monitor-agent-overview.md#networking) | Proxy support, Private link support | Proxy support only |
+| [Networking options](./azure-monitor-agent-overview.md#networking) | Proxy support, Private link support | Proxy support only |
Here is a comparison between client installer and VM extension for Azure Monitor
| Windows 10, 11 desktops, workstations | Yes | Client installer (preview) | Installs the agent using a Windows MSI installer |
| Windows 10, 11 laptops | Yes | Client installer (preview) | Installs the agent using a Windows MSI installer. The installer works on laptops, but the agent is **not optimized yet** for battery or network consumption |
| Virtual machines, scale sets | No | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) | Installs the agent using Azure extension framework |
-| On-premise servers | No | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (with Azure Arc agent) | Installs the agent using Azure extension framework, provided for on-premise by installing Arc agent |
+| On-premises servers | No | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (with Azure Arc agent) | Installs the agent using Azure extension framework, provided for on-premises by installing Arc agent |
## Prerequisites
Here is a comparison between client installer and VM extension for Azure Monitor
5. The device must have access to the following HTTPS endpoints:
   - global.handler.control.monitor.azure.com
   - `<virtual-machine-region-name>`.handler.control.monitor.azure.com (example: westus.handler.control.monitor.azure.com)
- - `<log-analytics-workspace-id>`.ods.opinsights.azure.com (example: 12345a01-b1cd-1234-e1f2-1234567g8h99.ods.opsinsights.azure.com)
+   - `<log-analytics-workspace-id>`.ods.opinsights.azure.com (example: 12345a01-b1cd-1234-e1f2-1234567g8h99.ods.opinsights.azure.com)
   (If using private links on the agent, you must also add the [data collection endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint))
6. Existing data collection rule(s) you wish to associate with the devices. If they don't exist already, [follow the guidance here to create data collection rule(s)](./data-collection-rule-azure-monitor-agent.md#create-rule-and-associationusingrestapi). **Do not associate the rule to any resources yet**.
-
-## Install the agent
+
+## Install the agent
1. Download the Windows MSI installer for the agent using [this link](https://go.microsoft.com/fwlink/?linkid=2192409). You can also download it from the **Monitor** > **Data Collection Rules** > **Create** experience in the Azure portal (shown below):

   [![Diagram shows download agent link on Azure portal.](media/azure-monitor-agent-windows-client/azure-monitor-agent-client-installer-portal.png)](media/azure-monitor-agent-windows-client/azure-monitor-agent-client-installer-portal-focus.png#lightbox)

2. Open an elevated admin command prompt window and change to the directory where you downloaded the installer.
-3. To install with **default settings**, run the following command:
+3. To install with **default settings**, run the following command:
```cli
msiexec /i AzureMonitorAgentClientSetup.msi /qn
```
Here is a comparison between client installer and VM extension for Azure Monitor
msiexec /i AzureMonitorAgentClientSetup.msi /qn DATASTOREDIR="C:\example\folder"
```
- | Parameter | Description |
+ | Parameter | Description |
|:|:|
| INSTALLDIR | Directory path where the agent binaries are installed |
| DATASTOREDIR | Directory path where the agent stores its operational logs and data |
Here is a comparison between client installer and VM extension for Azure Monitor
| PROXYADDRESS | Set to Proxy Address. PROXYUSE must be set to "true" to be correctly applied |
| PROXYUSEAUTH | Set to "true" if proxy requires authentication |
| PROXYUSERNAME | Set to Proxy username. PROXYUSE and PROXYUSEAUTH must be set to "true" |
- | PROXYPASSWORD | Set to Proxy password. PROXYUSE and PROXYUSEAUTH must be set to "true" |
+ | PROXYPASSWORD | Set to Proxy password. PROXYUSE and PROXYUSEAUTH must be set to "true" |
5. Verify successful installation:
- - Open **Control Panel** -> **Programs and Features** OR **Settings** -> **Apps** -> **Apps & Features** and ensure you see 'Azure Monitor Agent' listed
- - Open **Services** and confirm 'Azure Monitor Agent' is listed and shows as **Running**.
+ - Open **Control Panel** -> **Programs and Features** OR **Settings** -> **Apps** -> **Apps & Features** and ensure you see 'Azure Monitor Agent' listed
+ - Open **Services** and confirm 'Azure Monitor Agent' is listed and shows as **Running**.
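   You can also confirm the service state from PowerShell, as a sketch (the service name matches the one referenced in the troubleshooting section of this article):

   ```PowerShell
   # Confirm the Azure Monitor Agent service is present and running (sketch).
   Get-Service -Name AzureMonitorAgent
   ```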
6. Proceed to create the monitored object that you'll associate data collection rules to, for the agent to actually start operating.

> [!NOTE]
-> The agent installed with the client installer currently doesn't support updating configuration once it is installed. Uninstall and reinstall AMA to update its configuration.
+> The agent installed with the client installer currently doesn't support updating configuration once it is installed. Uninstall and reinstall AMA to update its configuration.
## Create and associate a 'Monitored Object'
Then, proceed with the instructions below to create and associate them to a Moni
#### 1. Assign 'Monitored Object Contributor' role to the operator
-This step grants the ability to create and link a monitored object to a user.
+This step grants the ability to create and link a monitored object to a user.
**Permissions required:** Since MO is a tenant-level resource, the scope of the permission would be higher than a subscription scope. Therefore, an Azure tenant admin may be needed to perform this step. [Follow these steps to elevate Azure AD Tenant Admin as Azure Tenant Admin](../../role-based-access-control/elevate-access-global-admin.md). It will give the Azure AD admin 'owner' permissions at the root scope.

**Request URI**
PUT https://management.azure.com/providers/microsoft.insights/providers/microsof
| Name | In | Type | Description |
|:|:|:|:|
-| `roleAssignmentGUID` | path | string | Provide any valid guid (you can generate one using https://guidgenerator.com/) |
+| `roleAssignmentGUID` | path | string | Provide any valid guid (you can generate one using https://guidgenerator.com/) |
**Headers**
- Authorization: ARM Bearer Token (using 'Get-AzAccessToken' or other method)
PUT https://management.azure.com/providers/microsoft.insights/providers/microsof
} ```
-**Body parameters**
+**Body parameters**
| Name | Description |
|:|:|
| roleDefinitionId | Fixed value: Role definition ID of the 'Monitored Objects Contributor' role: `/providers/Microsoft.Authorization/roleDefinitions/56be40e24db14ccf93c37e44c597135b` |
-| principalId | Provide the `Object Id` of the identity of the user to which the role needs to be assigned. It may be the user who elevated at the beginning of step 1, or another user who will perform later steps. |
+| principalId | Provide the `Object Id` of the identity of the user to which the role needs to be assigned. It may be the user who elevated at the beginning of step 1, or another user who will perform later steps. |
-After this step is complete, **reauthenticate** your session and **reacquire** your ARM bearer token.
+After this step is complete, **reauthenticate** your session and **reacquire** your ARM bearer token.
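If you prefer to script this call, the following sketch uses `Get-AzAccessToken` and `Invoke-RestMethod`; the request URI, principal ID, and role assignment GUID are placeholders that you substitute from this section:

```PowerShell
# Sketch: assign the 'Monitored Objects Contributor' role via the REST call described above.
$token   = (Get-AzAccessToken).Token
$headers = @{ Authorization = "Bearer $token" }

$body = @{
    properties = @{
        roleDefinitionId = "/providers/Microsoft.Authorization/roleDefinitions/56be40e24db14ccf93c37e44c597135b"
        principalId      = "<object-id-of-the-user>"
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Put -Uri "<request URI from this section, including your roleAssignmentGUID>" `
    -Headers $headers -Body $body -ContentType "application/json"
```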
#### 2. Create Monitored Object

This step creates the Monitored Object for the Azure AD Tenant scope. It will be used to represent client devices that are signed in with that Azure AD Tenant identity.
PUT https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/{
| Name | In | Type | Description |
|:|:|:|:|
-| `AADTenantId` | path | string | ID of the Azure AD tenant that the device(s) belong to. The MO will be created with the same ID |
+| `AADTenantId` | path | string | ID of the Azure AD tenant that the device(s) belong to. The MO will be created with the same ID |
**Headers**
- Authorization: ARM Bearer Token
PUT https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/{
#### 3. Associate DCR to Monitored Object
-Now we associate the Data Collection Rules (DCR) to the Monitored Object by creating a Data Collection Rule Associations. If you haven't already, [follow instructions here](./data-collection-rule-azure-monitor-agent.md#create-rule-and-associationusingrestapi) to create data collection rule(s) first.
+Now we associate the Data Collection Rules (DCR) to the Monitored Object by creating a Data Collection Rule Associations. If you haven't already, [follow instructions here](./data-collection-rule-azure-monitor-agent.md#create-rule-and-associationusingrestapi) to create data collection rule(s) first.
**Permissions required**: Anyone who has 'Monitored Object Contributor' at an appropriate scope can perform this operation, as assigned in step 1.

**Request URI**
PUT https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/{
**Request Body**
```JSON
{
- "properties":
+ "properties":
{ "dataCollectionRuleId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Insights/dataCollectionRules/{DCRName}" }
You can use any of the following options to check the installed version of the a
- Open **Control Panel** > **Programs and Features** > **Azure Monitor Agent** and see the 'Version' listed
- Open **Settings** > **Apps** > **Apps and Features** > **Azure Monitor Agent** and see the 'Version' listed
-### Uninstall the agent
+### Uninstall the agent
You can use any of the following options to uninstall the agent:
- Open **Control Panel** > **Programs and Features** > **Azure Monitor Agent** and click 'Uninstall'
-- Open **Settings** > **Apps** > **Apps and Features** > **Azure Monitor Agent** and click 'Uninstall'
+- Open **Settings** > **Apps** > **Apps and Features** > **Azure Monitor Agent** and click 'Uninstall'
If you face issues during 'Uninstall', refer to [troubleshooting guidance](#troubleshoot) below
-### Update the agent
+### Update the agent
In order to update the version, install the new version you wish to update to.
In order to update the version, install the new version you wish to update to.
### View agent diagnostic logs

1. Rerun the installation with logging turned on and specify the log file name: `Msiexec /I AzureMonitorAgentClientSetup.msi /L*V <log file name>`
-2. Runtime logs are collected automatically either at the default location `C:\Resources\Azure Monitor Agent\` or at the file path mentioned during installation.
+2. Runtime logs are collected automatically either at the default location `C:\Resources\Azure Monitor Agent\` or at the file path mentioned during installation.
- If you can't locate the path, the exact location can be found on the registry as `AMADataRootDirPath` on `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureMonitorAgent`.
-3. The 'ServiceLogs' folder contains log from AMA Windows Service, which launches and manages AMA processes
+3. The 'ServiceLogs' folder contains log from AMA Windows Service, which launches and manages AMA processes
4. 'AzureMonitorAgent.MonitoringDataStore' contains data/logs from AMA processes.

### Common issues

#### Missing DLL
-- Error message: "There's a problem with this Windows Installer package. A DLL required for this installer to complete could not be run. …"
-- Ensure you have installed [C++ Redistributable (>2015)](/cpp/windows/latest-supported-vc-redist?view=msvc-170&preserve-view=true) before installing AMA:
+- Error message: "There's a problem with this Windows Installer package. A DLL required for this installer to complete could not be run. …"
+- Ensure you have installed [C++ Redistributable (>2015)](/cpp/windows/latest-supported-vc-redist?view=msvc-170&preserve-view=true) before installing AMA:
-#### Silent install from command prompt fails
-Make sure to start the installer on administrator command prompt. Silent install can only be initiated from the administrator command prompt.
+#### Silent install from command prompt fails
+Make sure to start the installer on administrator command prompt. Silent install can only be initiated from the administrator command prompt.
-#### Uninstallation fails due to the uninstaller being unable to stop the service
-- If There's an option to try again, do try it again -- If retry from uninstaller doesn't work, cancel the uninstall and stop Azure Monitor Agent service from Services (Desktop Application) -- Retry uninstall
+#### Uninstallation fails due to the uninstaller being unable to stop the service
+- If there's an option to try again, try it again
+- If retry from uninstaller doesn't work, cancel the uninstall and stop Azure Monitor Agent service from Services (Desktop Application)
+- Retry uninstall
-#### Force uninstall manually when uninstaller doesn't work
-- Stop Azure Monitor Agent service. Then try uninstalling again. If it fails, then proceed with the following steps -- Delete AMA service with "sc delete AzureMonitorAgent" from admin cmd -- Download [this tool](https://support.microsoft.com/topic/fix-problems-that-block-programs-from-being-installed-or-removed-cca7d1b6-65a9-3d98-426b-e9f927e1eb4d) and uninstall AMA -- Delete AMA binaries. They're stored in `Program Files\Azure Monitor Agent` by default -- Delete AMA data/logs. They're stored in `C:\Resources\Azure Monitor Agent` by default -- Open Registry. Check `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure Monitor Agent`. If it exists, delete the key.
+#### Force uninstall manually when uninstaller doesn't work
+- Stop Azure Monitor Agent service. Then try uninstalling again. If it fails, then proceed with the following steps
+- Delete AMA service with "sc delete AzureMonitorAgent" from admin cmd
+- Download [this tool](https://support.microsoft.com/topic/fix-problems-that-block-programs-from-being-installed-or-removed-cca7d1b6-65a9-3d98-426b-e9f927e1eb4d) and uninstall AMA
+- Delete AMA binaries. They're stored in `Program Files\Azure Monitor Agent` by default
+- Delete AMA data/logs. They're stored in `C:\Resources\Azure Monitor Agent` by default
+- Open Registry. Check `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure Monitor Agent`. If it exists, delete the key.
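The following PowerShell sketch mirrors the manual cleanup steps above; run it from an elevated prompt only if the uninstaller keeps failing, and adjust the paths if you customized them during installation:

```PowerShell
# Manual cleanup of the Azure Monitor agent client install (sketch; paths are the defaults from this article).
Stop-Service -Name AzureMonitorAgent -ErrorAction SilentlyContinue
sc.exe delete AzureMonitorAgent
Remove-Item -Recurse -Force "C:\Program Files\Azure Monitor Agent" -ErrorAction SilentlyContinue
Remove-Item -Recurse -Force "C:\Resources\Azure Monitor Agent" -ErrorAction SilentlyContinue
Remove-Item -Recurse -Force "HKLM:\SOFTWARE\Microsoft\Azure Monitor Agent" -ErrorAction SilentlyContinue
```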
-## Questions and feedback
+## Questions and feedback
Take this [quick survey](https://forms.microsoft.com/r/CBhWuT1rmM) or share your feedback/questions regarding the preview on the [Azure Monitor Agent User Community](https://teams.microsoft.com/l/team/19%3af3f168b782f64561b52abe75e59e83bc%40thread.tacv2/conversations?groupId=770d6aa5-c2f7-4794-98a0-84fd6ae7f193&tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47).
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
Title: Create and manage action groups in the Azure portal
-description: Learn how to create and manage action groups in the Azure portal.
+ Title: Manage action groups in the Azure portal
+description: Find out how to create and manage action groups. Learn about notifications and actions that action groups enable, such as email, webhooks, and Azure Functions.
Previously updated : 6/2/2022 Last updated : 06/06/2022 -+
+ - references_regions
+ - kr2b-contr-experiment
# Create and manage action groups in the Azure portal
-An action group is a collection of notification preferences defined by the owner of an Azure subscription. Azure Monitor, Service Health and Azure Advisor alerts use action groups to notify users that an alert has been triggered. Various alerts may use the same action group or different action groups depending on the user's requirements.
-This article shows you how to create and manage action groups in the Azure portal.
+When Azure Monitor data indicates that there might be a problem with your infrastructure or application, an alert is triggered. Azure Monitor, Azure Service Health, and Azure Advisor then use *action groups* to notify users about the alert and take an action. An action group is a collection of notification preferences that are defined by the owner of an Azure subscription.
+
+This article shows you how to create and manage action groups in the Azure portal. Depending on your requirements, you can configure various alerts to use the same action group or different action groups.
Each action is made up of the following properties:
-* **Type**: The notification or action performed. Examples include sending a voice call, SMS, email; or triggering various types of automated actions. See types later in this article.
-* **Name**: A unique identifier within the action group.
-* **Details**: The corresponding details that vary by *type*.
+- **Type**: The notification that's sent or action that's performed. Examples include sending a voice call, SMS, or email. You can also trigger various types of automated actions. For detailed information about notification and action types, see [Action-specific information](#action-specific-information), later in this article.
+- **Name**: A unique identifier within the action group.
+- **Details**: The corresponding details that vary by type.
-For information on how to use Azure Resource Manager templates to configure action groups, see [Action group Resource Manager templates](./action-groups-create-resource-manager-template.md).
+For information about how to use Azure Resource Manager templates to configure action groups, see [Action group Resource Manager templates](./action-groups-create-resource-manager-template.md).
-Action Group is **Global** service, therefore there's no dependency on a specific Azure region. Requests from client can be processed by action group service in any region, which means, if one region of service is down, the traffic will be routed and process by other regions automatically. Being a *global service* it helps client not to worry about **disaster recovery**.
+An action group is a **global** service, so there's no dependency on a specific Azure region. Requests from clients can be processed by action group services in any region. For instance, if one region of the action group service is down, the traffic is automatically routed and processed by other regions. As a global service, an action group helps provide a **disaster recovery** solution.
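If you'd rather script the creation than use the portal or a Resource Manager template, the following sketch uses the Az.Monitor PowerShell module; the group name, resource group, short name, and email address are placeholders:

```PowerShell
# Sketch: create (or update) an action group with a single email receiver using Az.Monitor.
$emailReceiver = New-AzActionGroupReceiver -Name "notify-ops" -EmailReceiver -EmailAddress "ops@contoso.com"

Set-AzActionGroup -Name "contoso-action-group" `
    -ResourceGroupName "contoso-rg" `
    -ShortName "contosoag" `
    -Receiver $emailReceiver
```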
## Create an action group by using the Azure portal
-1. In the [Azure portal](https://portal.azure.com), search for and select **Monitor**. The **Monitor** pane consolidates all your monitoring settings and data in one view.
+1. Go to the [Azure portal](https://portal.azure.com).
-1. Select **Alerts**, then select **Manage actions**.
+1. Search for and select **Monitor**. The **Monitor** pane consolidates all your monitoring settings and data in one view.
- ![Manage Actions button](./media/action-groups/manage-action-groups.png)
+1. Select **Alerts**, and then select **Action groups**.
-1. Select **Add action group**, and fill in the relevant fields in the wizard experience.
+ :::image type="content" source="./media/action-groups/manage-action-groups.png" alt-text="Screenshot of the Alerts page in the Azure portal. The Action groups button is called out.":::
- ![The "Add action group" command](./media/action-groups/add-action-group.PNG)
+1. Select **Create**.
-### Configure basic action group settings
+ :::image type="content" source="./media/action-groups/create-action-group.png" alt-text="Screenshot of the Action groups page in the Azure portal. The Create button is called out.":::
-Under **Project details**:
+1. Enter information as explained in the following sections.
-Select the **Subscription** and **Resource group** in which the action group is saved.
+### Configure basic action group settings
-Under **Instance details**:
+1. Under **Project details**, select values for **Subscription** and **Resource group**. The action group is saved in the subscription and resource group that you select.
-1. Enter an **Action group name**.
+1. Under **Instance details**, enter values for **Action group name** and **Display name**. The display name is used in place of a full action group name when the group is used to send notifications.
-1. Enter a **Display name**. The display name is used in place of a full action group name when notifications are sent using this group.
+ :::image type="content" source="./media/action-groups/action-group-1-basics.png" alt-text="Screenshot of the Create action group dialog box. Values are visible in the Subscription, Resource group, Action group name, and Display name boxes.":::
- ![The "Add action group" dialog box](./media/action-groups/action-group-1-basics.png)
+### Configure notifications
+1. To open the **Notifications** tab, select **Next: Notifications**. Alternately, at the top of the page, select the **Notifications** tab.
-### Configure notifications
+1. Define a list of notifications to send when an alert is triggered. Provide the following information for each notification:
-1. Click the **Next: Notifications >** button to move to the **Notifications** tab, or select the **Notifications** tab at the top of the screen.
+ - **Notification type**: Select the type of notification that you want to send. The available options are:
-1. Define a list of notifications to send when an alert is triggered. Provide the following for each notification:
+ - **Email Azure Resource Manager Role**: Send an email to users who are assigned to certain subscription-level Azure Resource Manager roles.
+ - **Email/SMS message/Push/Voice**: Send various notification types to specific recipients.
- a. **Notification type**: Select the type of notification you want to send. The available options are:
- * Email Azure Resource Manager Role - Send an email to users assigned to certain subscription-level ARM roles.
- * Email/SMS/Push/Voice - Send these notification types to specific recipients.
+ - **Name**: Enter a unique name for the notification.
- b. **Name**: Enter a unique name for the notification.
+ - **Details**: Based on the selected notification type, enter an email address, phone number, or other information.
- c. **Details**: Based on the selected notification type, enter an email address, phone number, etc.
+ - **Common alert schema**: You can choose to turn on the common alert schema, which provides the advantage of having a single extensible and unified alert payload across all the alert services in Monitor. For more information about this schema, see [Common alert schema](./alerts-common-schema.md).
- d. **Common alert schema**: You can choose to enable the [common alert schema](./alerts-common-schema.md), which provides the advantage of having a single extensible and unified alert payload across all the alert services in Azure Monitor.
+ :::image type="content" source="./media/action-groups/action-group-2-notifications.png" alt-text="Screenshot of the Notifications tab of the Create action group dialog box. Configuration information for an email notification is visible.":::
- ![The Notifications tab](./media/action-groups/action-group-2-notifications.png)
+1. Select **OK**.
### Configure actions
-1. Click the **Next: Actions >** button to move to the **Actions** tab, or select the **Actions** tab at the top of the screen.
+1. To open the **Actions** tab, select **Next: Actions**. Alternately, at the top of the page, select the **Actions** tab.
+
+1. Define a list of actions to trigger when an alert is triggered. Provide the following information for each action:
-1. Define a list of actions to trigger when an alert is triggered. Provide the following for each action:
+ - **Action type**: Select from the following types of actions:
- a. **Action type**: Select Automation Runbook, Azure Function, ITSM, Logic App, Secure Webhook, Webhook.
+ - An Azure Automation runbook
+ - An Azure Functions function
+ - A notification that's sent to Azure Event Hubs
+ - A notification that's sent to an IT service management (ITSM) tool
+ - An Azure Logic Apps workflow
+ - A secure webhook
+ - A webhook
- b. **Name**: Enter a unique name for the action.
+ - **Name**: Enter a unique name for the action.
- c. **Details**: Based on the action type, enter a webhook URI, Azure app, ITSM connection, or Automation Runbook. For ITSM Action, additionally specify **Work Item** and other fields your ITSM tool requires.
+ - **Details**: Enter appropriate information for your selected action type. For instance, you might enter a webhook URI, the name of an Azure app, an ITSM connection, or an Automation runbook. For an ITSM action, also enter values for **Work item** and other fields that your ITSM tool requires.
- d. **Common alert schema**: You can choose to enable the [common alert schema](./alerts-common-schema.md), which provides the advantage of having a single extensible and unified alert payload across all the alert services in Azure Monitor.
+ - **Common alert schema**: You can choose to turn on the common alert schema, which provides the advantage of having a single extensible and unified alert payload across all the alert services in Monitor. For more information about this schema, see [Common alert schema](./alerts-common-schema.md).
- ![The Actions tab](./media/action-groups/action-group-3-actions.png)
+ :::image type="content" source="./media/action-groups/action-group-3-actions.png" alt-text="Screenshot of the Actions tab of the Create action group dialog box. Several options are visible in the Action type list.":::
### Create the action group
-1. You can explore the **Tags** settings if you like. This lets you associate key/value pairs to the action group for your categorization and is a feature available for any Azure resource.
+1. If you'd like to assign a key-value pair to the action group, select **Next: Tags** or the **Tags** tab. Otherwise, skip this step. By using tags, you can categorize your Azure resources. Tags are available for all Azure resources, resource groups, and subscriptions.
- ![The Tags tab](./media/action-groups/action-group-4-tags.png)
+ :::image type="content" source="./media/action-groups/action-group-4-tags.png" alt-text="Screenshot of the Tags tab of the Create action group dialog box. Values are visible in the Name and Value boxes.":::
-1. Click **Review + create** to review the settings. This will do a quick validation of your inputs to make sure all the required fields are selected. If there are issues, they'll be reported here. Once you've reviewed the settings, click **Create** to provision the action group.
+1. To review your settings, select **Review + create**. This step quickly checks your inputs to make sure you've entered all required information. If there are issues, they're reported here. After you've reviewed the settings, select **Create** to create the action group.
- ![The Review + create tab](./media/action-groups/action-group-5-review.png)
+ :::image type="content" source="./media/action-groups/action-group-5-review.png" alt-text="Screenshot of the Review + create tab of the Create action group dialog box. All configured values are visible.":::
> [!NOTE]
-> When you configure an action to notify a person by email or SMS, they receive a confirmation indicating they have been added to the action group.
+>
+> When you configure an action to notify a person by email or SMS, they receive a confirmation indicating that they have been added to the action group.
+
+### Test an action group in the Azure portal (preview)
+
+When you create or update an action group in the Azure portal, you can **test** the action group.
-### Test an action group in the Azure portal (Preview)
+1. Define an action, as described in the previous few sections. Then select **Review + create**.
-When creating or updating an action group in the Azure portal, you can **test** the action group.
-1. After defining an action, click on **Review + create**. Select *Test action group*.
+1. On the page that lists the information that you entered, select **Test action group**.
- ![The Test Action Group](./media/action-groups/test-action-group.png)
+ :::image type="content" source="./media/action-groups/test-action-group.png" alt-text="Screenshot of the Review + create tab of the Create action group dialog box. A Test action group button is visible.":::
-1. Select the *sample type* and select the notification and action types that you want to test and select **Test**.
+1. Select a sample type and the notification and action types that you want to test. Then select **Test**.
- ![Select Sample Type + notification + action type](./media/action-groups/test-sample-action-group.png)
+ :::image type="content" source="./media/action-groups/test-sample-action-group.png" alt-text="Screenshot of the Test sample action group page. An email notification type and a webhook action type are visible.":::
-1. If you close the window or select **Back to test setup** while the test is running, the test is stopped, and you won't get test results.
+1. If you close the window or select **Back to test setup** while the test is running, the test is stopped, and you don't get test results.
- ![Stop running test](./media/action-groups/stop-running-test.png)
+ :::image type="content" source="./media/action-groups/stop-running-test.png" alt-text="Screenshot of the Test sample action group page. A dialog box contains a Stop button and asks the user about stopping the test.":::
-1. When the test is complete either a **Success** or **Failed** test status is displayed. If the test failed, you could select *View details* to get more information.
- ![Test sample failed](./media/action-groups/test-sample-failed.png)
+1. When the test is complete, a test status of either **Success** or **Failed** appears. If the test failed and you'd like to get more information, select **View details**.
-You can use the information in the **Error details section**, to understand the issue so that you can edit and test the action group again.
-To allow you to check the action groups are working as expected before you enable them in a production environment, you'll get email and SMS alerts with the subject: Test.
+ :::image type="content" source="./media/action-groups/test-sample-failed.png" alt-text="Screenshot of the Test sample action group page. Error details are visible, and a white X on a red background indicates that a test failed.":::
-All the details and links in Test email notifications for the alerts fired are a sample set for reference.
+You can use the information in the **Error details** section to understand the issue. Then you can edit and test the action group again.
+
+When you run a test and select a notification type, you get a message with "Test" in the subject. The tests provide a way to check that your action group works as expected before you enable it in a production environment. All the details and links in test email notifications are from a sample reference set.
#### Azure Resource Manager role membership requirements
-The following table describes the role membership requirements to use the *test actions* functionality
-| User's role membersip | Existing Action Group | Existing Resource Group and new Action Group | New Resource Group and new Action Group |
-| - | - | -- | - |
-| Subscription Contribuutor | Supported | Supported | Supported |
-| Resource Group Contributor | Supported | Supported | Not Applicable |
-| Action Group resource Contributor | Supported | Not Applicable | Not Applicable |
-| Azure Monitor Contributor | Supported | Supported | Not Applicable |
-| Custom role | Supported | Supported | Not Applicable |
+The following table describes the role membership requirements that are needed for the *test actions* functionality:
+| User's role membership | Existing action group | Existing resource group and new action group | New resource group and new action group |
+| - | - | -- | - |
+| Subscription contributor | Supported | Supported | Supported |
+| Resource group contributor | Supported | Supported | Not applicable |
+| Action group resource contributor | Supported | Not applicable | Not applicable |
+| Azure Monitor contributor | Supported | Supported | Not applicable |
+| Custom role | Supported | Supported | Not applicable |
> [!NOTE]
-> You may perform a limited number of tests over a time period. See the [rate limiting information](./alerts-rate-limiting.md) article.
>
-> You can opt in or opt out to the common alert schema through Action Groups, on the portal. You can [find common schema samples for test action groups for all the sample types](./alerts-common-schema-test-action-definitions.md).
-> You can [find non-common schema alert definitions](./alerts-non-common-schema-definitions.md).
+> You can run a limited number of tests per time period. To check which limits apply to your situation, see [Rate limiting for voice, SMS, emails, Azure App push notifications, and webhook posts](./alerts-rate-limiting.md).
+>
+> When you configure an action group in the portal, you can opt in or out of the common alert schema.
+>
+> - To find common schema samples for all sample types, see [Common alert schema definitions for Test Action Group](./alerts-common-schema-test-action-definitions.md).
+> - To find non-common schema alert definitions, see [Non-common alert schema definitions for Test Action Group](./alerts-non-common-schema-definitions.md).
## Manage your action groups
-After you create an action group, you can view **Action groups** by selecting **Manage actions** from the **Alerts** landing page in **Monitor** pane. Select the action group you want to manage to:
+After you create an action group, you can view it in the portal:
-* Add, edit, or remove actions.
-* Delete the action group.
+1. From the **Monitor** page, select **Alerts**.
+1. Select **Manage actions**.
+1. Select the action group that you want to manage. You can:
+
+ - Add, edit, or remove actions.
+ - Delete the action group.
## Action-specific information
+The following sections provide information about the various actions and notifications that you can configure in an action group.
+ > [!NOTE]
-> See [Subscription Service Limits for Monitoring](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-monitor-limits) for numeric limits on each of the items below.
+>
+> To check numeric limits on each type of action or notification, see [Subscription service limits for monitoring](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-monitor-limits).
-### Automation Runbook
-Refer to the [Azure subscription service limits](../../azure-resource-manager/management/azure-subscription-service-limits.md) for limits on Runbook payloads.
+### Automation runbook
-You may have a limited number of Runbook actions in an Action Group.
+To check limits on Automation runbook payloads, see [Automation limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#automation-limits).
-### Azure app Push Notifications
-Enable push notifications to the [Azure mobile app](https://azure.microsoft.com/features/azure-portal/mobile-app/) by providing the email address you use as your account ID when configuring the Azure mobile app.
+You may have a limited number of runbook actions per action group.
-You may have a limited number of Azure app actions in an Action Group.
+### Azure app push notifications
+
+To enable push notifications to the Azure mobile app, provide the email address that you use as your account ID when you configure the Azure mobile app. For more information about the Azure mobile app, see [Get the Azure mobile app](https://azure.microsoft.com/features/azure-portal/mobile-app/).
+
+You might have a limited number of Azure app actions per action group.
### Email
-Emails will be sent from the following email addresses. Ensure that your email filtering is configured appropriately
+
+Ensure that your email filtering is configured appropriately. Emails are sent from the following email addresses:
+
- azure-noreply@microsoft.com
- azureemail-noreply@microsoft.com
- alerts-noreply@mail.windowsazure.com
-You may have a limited number of email actions in an Action Group. See the [rate limiting information](./alerts-rate-limiting.md) article.
+You may have a limited number of email actions per action group. For information about rate limits, see [Rate limiting for voice, SMS, emails, Azure App push notifications, and webhook posts](./alerts-rate-limiting.md).
-### Email Azure Resource Manager Role
-Send email to the members of the subscription's role. Email will only be sent to **Azure AD user** members of the role. Email won't be sent to Azure AD groups or service principals.
+### Email Azure Resource Manager role
+
+When you use this type of notification, you can send email to the members of a subscription's role. Email is only sent to Azure Active Directory (Azure AD) **user** members of the role. Email isn't sent to Azure AD groups or service principals.
A notification email is sent only to the *primary email* address.
-If you aren't receiving Notifications on your *primary email*, then you can try following steps:
+If your *primary email* doesn't receive notifications, take the following steps:
+
+1. In the Azure portal, go to **Active Directory**.
+1. On the left, select **All users**. On the right, a list of users appears.
+1. Select the user whose *primary email* you'd like to review.
-1. In Azure portal, go to *Active Directory*.
-2. Click on All users (in left pane), you will see list of users (in right pane).
-3. Select the user for which you want to review the *primary email* information.
+ :::image type="content" source="media/action-groups/active-directory-user-profile.png" alt-text="Screenshot of the All users page in the Azure portal. On the left, the All users item is selected. Information about one user is visible but is indecipherable." border="true":::
- :::image type="content" source="media/action-groups/active-directory-user-profile.png" alt-text="Example of how to review user profile." border="true":::
+1. In the user profile, look under **Contact info** for an **Email** value. If it's blank:
-4. In User profile under Contact Info if "Email" tab is blank then click on *edit* button on the top and add your *primary email* and hit *save* button on the top.
+ 1. At the top of the page, select **Edit**.
+ 1. Enter an email address.
+ 1. At the top of the page, select **Save**.
- :::image type="content" source="media/action-groups/active-directory-add-primary-email.png" alt-text="Example of how to add primary email." border="true":::
+ :::image type="content" source="media/action-groups/active-directory-add-primary-email.png" alt-text="Screenshot of a user profile page in the Azure portal. The Edit button and the Email box are called out." border="true":::
-You may have a limited number of email actions in an Action Group. See the [rate limiting information](./alerts-rate-limiting.md) article.
+You may have a limited number of email actions per action group. To check which limits apply to your situation, see [Rate limiting for voice, SMS, emails, Azure App push notifications, and webhook posts](./alerts-rate-limiting.md).
-While setting up *Email ARM Role*, you need to make sure below three conditions are met:
+When you set up the Azure Resource Manager role:
-1. The type of the entity being assigned to the role needs to be **"User"**.
-2. The assignment needs to be done at the **subscription** level.
-3. The user needs to have an email configured in their **AAD profile**.
+1. Assign an entity of type **"User"** to the role.
+1. Make the assignment at the **subscription** level.
+1. Make sure an email address is configured for the user in their **Azure AD profile**.
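As a sketch, a subscription-scope role assignment like the following (using the Az.Resources module; the user, role, and subscription ID are placeholders) satisfies the first two conditions:

```PowerShell
# Sketch: assign a built-in role to a user at subscription scope so role-based email notifications can reach them.
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Monitoring Reader" `
    -Scope "/subscriptions/<subscription-id>"
```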
> [!NOTE]
-> It can take upto **24 hours** for customer to start receiving notifications after they add new ARM Role to their subscription.
+>
+> It can take up to **24 hours** for a customer to start receiving notifications after they add a new Azure Resource Manager role to their subscription.
+
+### Event Hubs
+
+An Event Hubs action publishes notifications to Event Hubs. For more information about Event Hubs, see [Azure Event Hubs - A big data streaming platform and event ingestion service](../../event-hubs/event-hubs-about.md). You can subscribe to the alert notification stream from your event receiver.
-### Event Hub
-An event hub action publishes notifications to [Azure Event Hubs](~/articles/event-hubs/event-hubs-about.md). You may then subscribe to the alert notification stream from your event receiver.
+### Functions
-### Function
-Calls an existing HTTP trigger endpoint in [Azure Functions](../../azure-functions/functions-get-started.md). To handle a request, your endpoint must handle the HTTP POST verb.
+An action that uses Functions calls an existing HTTP trigger endpoint in Functions. For more information about Functions, see [Azure Functions](../../azure-functions/functions-get-started.md). To handle a request, your endpoint must handle the HTTP POST verb.
-When defining the Function action the Function's httptrigger endpoint and access key are saved in the action definition. For example: `https://azfunctionurl.azurewebsites.net/api/httptrigger?code=this_is_access_key`. If you change the access key for the function, you will need to remove and recreate the Function action in the Action Group.
+When you define the function action, the function's HTTP trigger endpoint and access key are saved in the action definition, for example, `https://azfunctionurl.azurewebsites.net/api/httptrigger?code=<access_key>`. If you change the access key for the function, you need to remove and recreate the function action in the action group.
-You may have a limited number of Function actions in an Action Group.
+You may have a limited number of function actions per action group.
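For illustration, here's a minimal `run.ps1` sketch for an HTTP-triggered PowerShell function that accepts the POST request; it assumes the standard Functions PowerShell worker with an HTTP trigger binding that allows POST and a `Response` output binding, and it isn't tied to any particular alert schema:

```PowerShell
using namespace System.Net

# run.ps1 for an HTTP-triggered PowerShell function (sketch).
param($Request, $TriggerMetadata)

# The alert notification arrives as the POST body.
$alert = $Request.Body
Write-Host "Received alert payload: $($alert | ConvertTo-Json -Depth 10 -Compress)"

# Return 200 OK so the calling action group sees a successful delivery.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = "Alert received"
})
```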
### ITSM
-ITSM Action requires an ITSM Connection. Learn how to create an [ITSM Connection](./itsmc-overview.md).
-You may have a limited number of ITSM actions in an Action Group.
+An ITSM action requires an ITSM connection. To learn how to create an ITSM connection, see [ITSM integration](./itsmc-overview.md).
+
+You might have a limited number of ITSM actions per action group.
+
+### Logic Apps
+
+You may have a limited number of Logic Apps actions per action group.
-### Logic App
-You may have a limited number of Logic App actions in an Action Group.
+### Secure webhook
-### Secure Webhook
-The Action Groups Secure Webhook action enables you to take advantage of Azure Active Directory to secure the connection between your action group and your protected web API (webhook endpoint). The overall workflow for taking advantage of this functionality is described below. For an overview of Azure AD Applications and service principals, see [Microsoft identity platform (v2.0) overview](../../active-directory/develop/v2-overview.md).
+When you use a secure webhook action, you can use Azure AD to secure the connection between your action group and your protected web API, which is your webhook endpoint. For an overview of Azure AD applications and service principals, see [Microsoft identity platform (v2.0) overview](../../active-directory/develop/v2-overview.md). Follow these steps to take advantage of the secure webhook functionality.
> [!NOTE]
-> Using the webhook action requires that the target webhook endpoint be capable of processing the various JSON payloads emitted by different alert sources.
-> If the webhook endpoint is expecting a specific schema (for example Microsoft Teams) you should use the Logic App action to transform the alert schema to meet the target webhook's expectations.
+>
+> If you use the webhook action, your target webhook endpoint needs to be able to process the various JSON payloads that different alert sources emit. If the webhook endpoint expects a specific schema, for example, the Microsoft Teams schema, use the Logic Apps action to transform the alert schema to meet the target webhook's expectations.
+
+1. Create an Azure AD application for your protected web API. For detailed information, see [Protected web API: App registration](../../active-directory/develop/scenario-protected-web-api-app-registration.md). Configure your protected API to be called by a daemon app, and expose application permissions, not delegated permissions. For more information about these permissions, see [If your web API is called by a service or daemon app](../../active-directory/develop/scenario-protected-web-api-app-registration.md#if-your-web-api-is-called-by-a-service-or-daemon-app).
-1. Create an Azure AD Application for your protected web API. See [Protected web API: App registration](../../active-directory/develop/scenario-protected-web-api-app-registration.md).
- - Configure your protected API to be [called by a daemon app](../../active-directory/develop/scenario-protected-web-api-app-registration.md#if-your-web-api-is-called-by-a-service-or-daemon-app).
+ > [!NOTE]
+ >
+ > Configure your protected web API to accept V2.0 access tokens. For detailed information about this setting, see [Azure Active Directory app manifest](../../active-directory/develop/reference-app-manifest.md#accesstokenacceptedversion-attribute).
- > [!NOTE]
- > Your protected web API must be configured to [accept V2.0 access tokens](../../active-directory/develop/reference-app-manifest.md#accesstokenacceptedversion-attribute).
+1. To enable the action group to use your Azure AD application, use the PowerShell script that follows this procedure.
-2. Enable Action Group to use your Azure AD Application.
+ > [!NOTE]
+ >
+ > You must be assigned the [Azure AD Application Administrator role](../../active-directory/roles/permissions-reference.md#all-roles) to run this script.
- > [!NOTE]
- > You must be a member of the [Azure AD Application Administrator role](../../active-directory/roles/permissions-reference.md#all-roles) to execute this script.
+ 1. Modify the PowerShell script's `Connect-AzureAD` call to use your Azure AD tenant ID.
+ 1. Modify the PowerShell script's `$myAzureADApplicationObjectId` variable to use the Object ID of your Azure AD application.
+ 1. Run the modified script.
- - Modify the PowerShell script's Connect-AzureAD call to use your Azure AD Tenant ID.
- - Modify the PowerShell script's variable $myAzureADApplicationObjectId to use the Object ID of your Azure AD Application.
- - Run the modified script.
+ > [!NOTE]
+ >
+   > The service principal must be assigned the **owner** role on the Azure AD application to be able to create or modify the secure webhook action in the action group.
- > [!NOTE]
- > Service principle need to be a member of **owner role** of Azure AD application to be able to create or modify the Secure Webhook action in the action group.
+1. Configure the secure webhook action.
-3. Configure the Action Group Secure Webhook action.
- - Copy the value $myApp.ObjectId from the script and enter it in the Application Object ID field in the Webhook action definition.
+ 1. Copy the `$myApp.ObjectId` value that's in the script.
+ 1. In the webhook action definition, in the **Object Id** box, enter the value that you copied.
- ![Secure Webhook action](./media/action-groups/action-groups-secure-webhook.png)
+ :::image type="content" source="./media/action-groups/action-groups-secure-webhook.png" alt-text="Screenshot of the Secured Webhook dialog box in the Azure portal. The Object ID box is visible." border="true":::
-#### Secure Webhook PowerShell Script
+#### Secure webhook PowerShell script
```PowerShell
Connect-AzureAD -TenantId "<provide your Azure AD tenant ID here>"
-# This is your Azure AD Application's ObjectId.
+# Define your Azure AD application's ObjectId.
$myAzureADApplicationObjectId = "<the Object ID of your Azure AD Application>"
-# This is the Action Group Azure AD AppId
+# Define the action group Azure AD AppId.
$actionGroupsAppId = "461e8683-5575-4561-ac7f-899cc907d62a"
-# This is the name of the new role we will add to your Azure AD Application
+# Define the name of the new role that gets added to your Azure AD application.
$actionGroupRoleName = "ActionGroupsSecureWebhook"
-# Create an application role of given name and description
+# Create an application role with the given name and description.
Function CreateAppRole([string] $Name, [string] $Description)
{
    $appRole = New-Object Microsoft.Open.AzureAD.Model.AppRole
    # Set the application role's properties (display name, ID, description, value) here.
    return $appRole
}
-# Get my Azure AD Application, it's roles and service principal
+# Get your Azure AD application, its roles, and its service principal.
$myApp = Get-AzureADApplication -ObjectId $myAzureADApplicationObjectId
$myAppRoles = $myApp.AppRoles
$actionGroupsSP = Get-AzureADServicePrincipal -Filter ("appId eq '" + $actionGroupsAppId + "'")

Write-Host "App Roles before addition of new role.."
Write-Host $myAppRoles
-# Create the role if it doesn't exist
+# Create the role if it doesn't exist.
if ($myAppRoles -match "ActionGroupsSecureWebhook")
{
    Write-Host "The Action Group role is already defined.`n"
}
else
{
    $myServicePrincipal = Get-AzureADServicePrincipal -Filter ("appId eq '" + $myApp.AppId + "'")
- # Add our new role to the Azure AD Application
+ # Add the new role to the Azure AD application.
    $newRole = CreateAppRole -Name $actionGroupRoleName -Description "This is a role for Action Group to join"
    $myAppRoles.Add($newRole)
    Set-AzureADApplication -ObjectId $myApp.ObjectId -AppRoles $myAppRoles
}
-# Create the service principal if it doesn't exist
+# Create the service principal if it doesn't exist.
if ($actionGroupsSP -match "AzNS AAD Webhook")
{
    Write-Host "The Service principal is already defined.`n"
}
else
{
- # Create a service principal for the Action Group Azure AD Application and add it to the role
+ # Create a service principal for the action group Azure AD application and add it to the role.
    $actionGroupsSP = New-AzureADServicePrincipal -AppId $actionGroupsAppId
}
Write-Host $myApp.AppRoles
```

### SMS
-See the [rate limiting information](./alerts-rate-limiting.md) and [SMS alert behavior](./alerts-sms-behavior.md) for additional important information.
-You may have a limited number of SMS actions in an Action Group.
+For information about rate limits, see [Rate limiting for voice, SMS, emails, Azure App push notifications, and webhook posts](./alerts-rate-limiting.md).
+
+For important information about using SMS notifications in action groups, see [SMS alert behavior in action groups](./alerts-sms-behavior.md).
+
+You might have a limited number of SMS actions per action group.
> [!NOTE]
-> If the Azure portal Action Group user interface does not let you select your country/region code, then SMS is not supported for your country/region. If your country/region code is not available, you can vote to have your country/region added at [user voice](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). In the meantime, a work around is to have your Action Group call a webhook to a third-party SMS provider with support in your country/region.
+>
+> If you can't select your country/region code in the Azure portal, SMS isn't supported for your country/region. If your country/region code isn't available, you can vote to have your country/region added at [Share your ideas](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). In the meantime, as a workaround, configure your action group to call a webhook to a third-party SMS provider that offers support in your country/region.
-Pricing for supported countries/regions is listed in the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
+For information about pricing for supported countries/regions, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
-**List of Countries where SMS Notification is supported**
+#### Countries with SMS notification support
-| Country Code | Country Name |
+| Country code | Country |
|:---|:---|
| 61 | Australia |
| 43 | Austria |
| 1 | United States |

### Voice
-See the [rate limiting information](./alerts-rate-limiting.md) article for additional important behavior.
-You may have a limited number of Voice actions in an Action Group.
+For important information about rate limits, see [Rate limiting for voice, SMS, emails, Azure App push notifications, and webhook posts](./alerts-rate-limiting.md).
+
+You might have a limited number of voice actions per action group.
> [!NOTE]
-> If the Azure portal Action Group user interface does not let you select your country/region code, then voice calls are not supported for your country/region. If your country/region code is not available, you can vote to have your country/region added at [user voice](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). In the meantime, a work around is to have your Action Group call a webhook to a third-party voice call provider with support in your country/region.
-> Only Country code supported today in Azure portal Action Group for Voice Notification is +1(United States).
+>
+> If you can't select your country/region code in the Azure portal, voice calls aren't supported for your country/region. If your country/region code isn't available, you can vote to have your country/region added at [Share your ideas](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). In the meantime, as a workaround, configure your action group to call a webhook to a third-party voice call provider that offers support in your country/region.
+>
+> The only country code that action groups currently support for voice notification is +1 for the United States.
-Pricing for supported countries/regions is listed in the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
+For information about pricing for supported countries/regions, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
### Webhook

> [!NOTE]
-> Using the webhook action requires that the target webhook endpoint be capable of processing the various JSON payloads emitted by different alert sources.
-> If the webhook endpoint is expecting a specific schema (for example Microsoft Teams) you should use the Logic App action to transform the alert schema to meet the target webhook's expectations.
+>
+> If you use the webhook action, your target webhook endpoint needs to be able to process the various JSON payloads that different alert sources emit. If the webhook endpoint expects a specific schema, for example, the Microsoft Teams schema, use the Logic Apps action to transform the alert schema to meet the target webhook's expectations.
+
+Webhook action groups use the following rules:
-Webhooks are processed using the following rules
-- A webhook call is attempted a maximum of three times.
-- The call will be retried if a response is not received within the timeout period or one of the following HTTP status codes is returned: 408, 429, 503 or 504.
-- The first call will wait 10 seconds for a response.
-- The second and third attempts will wait 30 seconds for a response.
-- After the three attempts to call the webhook have failed no Action Group will call the endpoint for 15 minutes.
+- A webhook call is attempted at most three times.
-Please see [Action Group IP Addresses](../app/ip-addresses.md) for source IP address ranges.
+- The first call waits 10 seconds for a response.
+- The second and third attempts wait 30 seconds for a response.
+
+- The call is retried if any of the following conditions are met:
+
+ - A response isn't received within the timeout period.
+ - One of the following HTTP status codes is returned: 408, 429, 503, or 504.
+
+- If three attempts to call the webhook fail, no action group calls the endpoint for 15 minutes.
+
+For source IP address ranges, see [Action group IP addresses](../app/ip-addresses.md).
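To check how an endpoint behaves against the retry rules above, a rough client-side approximation can help. The following PowerShell is a minimal sketch only, not the service's implementation; the endpoint URL and payload are placeholders. It models three attempts with a 10-second timeout on the first call and 30 seconds on the next two, retrying only on a timeout or on HTTP 408, 429, 503, or 504.

```PowerShell
# Minimal sketch that mirrors the documented webhook retry schedule (placeholder URL and payload).
$webhookUri = "https://contoso.example/alert-hook"            # hypothetical endpoint
$payload    = '{"schemaId":"azureMonitorCommonAlertSchema"}'  # placeholder body
$timeouts   = @(10, 30, 30)                                   # seconds allowed per attempt
$retryCodes = @(408, 429, 503, 504)

foreach ($timeoutSec in $timeouts) {
    try {
        Invoke-WebRequest -Uri $webhookUri -Method Post -Body $payload -ContentType 'application/json' -TimeoutSec $timeoutSec | Out-Null
        Write-Host "Webhook call succeeded."
        break
    }
    catch {
        $statusCode = $_.Exception.Response.StatusCode.value__
        if ($statusCode -and ($retryCodes -notcontains $statusCode)) {
            Write-Host "Status code $statusCode is not retried; stopping."
            break
        }
        Write-Host "Timeout or retriable status code; moving to the next attempt."
    }
}
```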
## Next steps
-* Learn more about [SMS alert behavior](./alerts-sms-behavior.md).
-* Gain an [understanding of the activity log alert webhook schema](./activity-log-alerts-webhook.md).
-* Learn more about [ITSM Connector](./itsmc-overview.md).
-* Learn more about [rate limiting](./alerts-rate-limiting.md) on alerts.
-* Get an [overview of activity log alerts](./alerts-overview.md), and learn how to receive alerts.
-* Learn how to [configure alerts whenever a service health notification is posted](../../service-health/alerts-activity-log-service-notifications-portal.md).
+
+- Learn more about [SMS alert behavior](./alerts-sms-behavior.md).
+- Gain an [understanding of the activity log alert webhook schema](./activity-log-alerts-webhook.md).
+- Learn more about [ITSM Connector](./itsmc-overview.md).
+- Learn more about [rate limiting](./alerts-rate-limiting.md) on alerts.
+- Get an [overview of activity log alerts](./alerts-overview.md), and learn how to receive alerts.
+- Learn how to [configure alerts whenever a Service Health notification is posted](../../service-health/alerts-activity-log-service-notifications-portal.md).
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
Metric counts such as request rate and exception rate are adjusted to compensate
> [!NOTE]
> This section applies to ASP.NET applications, not to ASP.NET Core applications. [Learn about configuring adaptive sampling for ASP.NET Core applications later in this document.](#configuring-adaptive-sampling-for-aspnet-core-applications)
+> With ASP.NET Core and Microsoft.ApplicationInsights.AspNetCore >= 2.15.0, you can configure Application Insights options via appsettings.json.
+
In [`ApplicationInsights.config`](./configuration-with-applicationinsights-config.md), you can adjust several parameters in the `AdaptiveSamplingTelemetryProcessor` node. The figures shown are the default values:

* `<MaxTelemetryItemsPerSecond>5</MaxTelemetryItemsPerSecond>`
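For the ASP.NET Core note above, a hedged sketch of the appsettings.json approach follows. It assumes the `ApplicationInsights` configuration section and the `EnableAdaptiveSampling` option exposed by the SDK's `ApplicationInsightsServiceOptions` in Microsoft.ApplicationInsights.AspNetCore 2.15.0 or later; check which options your SDK version actually binds. Finer-grained sampling parameters such as `MaxTelemetryItemsPerSecond` are still adjusted in code or, for ASP.NET, in `ApplicationInsights.config`.

```json
{
  "ApplicationInsights": {
    "ConnectionString": "<your-connection-string>",
    "EnableAdaptiveSampling": false
  }
}
```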
azure-monitor Statsbeat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/statsbeat.md
N/A
|Metric Name|Unit|Supported dimensions|
|--|--|--|
|Request Success Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`|
-|Requests Failure Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`|
+|Requests Failure Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Status Code`|
|Request Duration|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`|
-|Retry Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`|
-|Throttle Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`|
-|Exception Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`|
+|Retry Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Status Code`|
+|Throttle Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Status Code`|
+|Exception Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Exception Type`|
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]

#### Attach Statsbeat
You can also disable this feature by setting the environment variable `APPLICATI
#### [Node](#tab/node)
-N/A
+Not supported yet.
#### [Python](#tab/python)
-N/A
+Not supported yet.
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
description: List of metrics available for each resource type with Azure Monitor
Previously updated : 04/12/2022 Last updated : 06/01/2022
The Azure Monitor agent replaces the Azure Diagnostics extension and Log Analyti
This latest update adds a new column and reorders the metrics to be alphabetical. The additional information means that the tables might have a horizontal scroll bar at the bottom, depending on the width of your browser window. If you seem to be missing information, use the scroll bar to see the entirety of the table.
+## Microsoft.AAD/DomainServices
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|\DirectoryServices(NTDS)\LDAP Searches/sec|Yes|NTDS - LDAP Searches/sec|CountPerSecond|Average|This metric indicates the average number of searches per second for the NTDS object. It is backed by performance counter data from the domain controller, and can be filtered or split by role instance.|No Dimensions|
+|\DirectoryServices(NTDS)\LDAP Successful Binds/sec|Yes|NTDS - LDAP Successful Binds/sec|CountPerSecond|Average|This metric indicates the number of successful LDAP binds per second for the NTDS object. It is backed by performance counter data from the domain controller, and can be filtered or split by role instance.|No Dimensions|
+|\DNS\Total Query Received/sec|Yes|DNS - Total Query Received/sec|CountPerSecond|Average|This metric indicates the average number of queries received by the DNS server each second. It is backed by performance counter data from the domain controller, and can be filtered or split by role instance.|No Dimensions|
+|\DNS\Total Response Sent/sec|Yes|Total Response Sent/sec|CountPerSecond|Average|This metric indicates the average number of responses sent by the DNS server each second. It is backed by performance counter data from the domain controller, and can be filtered or split by role instance.|No Dimensions|
+|\Memory\% Committed Bytes In Use|Yes|% Committed Bytes In Use|Percent|Average|This metric indicates the ratio of Memory\Committed Bytes to the Memory\Commit Limit. Committed memory is the physical memory in use for which space has been reserved in the paging file should it need to be written to disk. The commit limit is determined by the size of the paging file. If the paging file is enlarged, the commit limit increases, and the ratio is reduced. This counter displays the current percentage value only; it is not an average. It is backed by performance counter data from the domain controller, and can be filtered or split by role instance.|No Dimensions|
+|\Process(dns)\% Processor Time|Yes|% Processor Time (dns)|Percent|Average|This metric indicates the percentage of elapsed time that all of the dns process's threads used the processor to execute instructions. An instruction is the basic unit of execution in a computer, a thread is the object that executes instructions, and a process is the object created when a program is run. Code executed to handle some hardware interrupts and trap conditions is included in this count. It is backed by performance counter data from the domain controller, and can be filtered or split by role instance.|No Dimensions|
+|\Process(lsass)\% Processor Time|Yes|% Processor Time (lsass)|Percent|Average|This metric indicates the percentage of elapsed time that all of the lsass process's threads used the processor to execute instructions. An instruction is the basic unit of execution in a computer, a thread is the object that executes instructions, and a process is the object created when a program is run. Code executed to handle some hardware interrupts and trap conditions is included in this count. It is backed by performance counter data from the domain controller, and can be filtered or split by role instance.|No Dimensions|
+|\Processor(_Total)\% Processor Time|Yes|Total Processor Time|Percent|Average|This metric indicates the percentage of elapsed time that the processor spends executing a non-idle thread. It is calculated by measuring the percentage of time that the processor spends executing the idle thread and then subtracting that value from 100%. (Each processor has an idle thread that consumes cycles when no other threads are ready to run.) This counter is the primary indicator of processor activity, and displays the average percentage of busy time observed during the sample interval. Note that the accounting calculation of whether the processor is idle is performed at an internal sampling interval of the system clock (10 ms). On today's fast processors, % Processor Time can therefore underestimate processor utilization, because the processor may spend much of its time servicing threads between system clock sampling intervals. Workload-based timer applications are one example of applications that are more likely to be measured inaccurately, because timers are signaled just after the sample is taken. It is backed by performance counter data from the domain controller, and can be filtered or split by role instance.|No Dimensions|
+|\Security System-Wide Statistics\Kerberos Authentications|Yes|Kerberos Authentications|CountPerSecond|Average|This metric indicates the number of times per second that clients use a ticket to authenticate to this computer. It is backed by performance counter data from the domain controller, and can be filtered or split by role instance.|No Dimensions|
+|\Security System-Wide Statistics\NTLM Authentications|Yes|NTLM Authentications|CountPerSecond|Average|This metric indicates the number of NTLM authentications processed per second for Active Directory on this domain controller or for local accounts on this member server. It is backed by performance counter data from the domain controller, and can be filtered or split by role instance.|No Dimensions|
## microsoft.aadiam/azureADMetrics

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|ThrottledRequests|No|ThrottledRequests|Count|Average|azureADMetrics type metric|No Dimensions|
+|CACompliantDeviceSuccessCount|Yes|CACompliantDeviceSuccessCount|Count|Count|CA compliant device success count for Azure AD|No Dimensions|
+|CAManagedDeviceSuccessCount|No|CAManagedDeviceSuccessCount|Count|Count|CA domain join device success count for Azure AD|No Dimensions|
+|MFAAttemptCount|No|MFAAttemptCount|Count|Count|MFA attempt count for Azure AD|No Dimensions|
+|MFAFailureCount|No|MFAFailureCount|Count|Count|MFA failure count for Azure AD|No Dimensions|
+|MFASuccessCount|No|MFASuccessCount|Count|Count|MFA success count for Azure AD|No Dimensions|
## Microsoft.AnalysisServices/servers
This latest update adds a new column and reorders the metrics to be alphabetical
|WebSocketMessages|Yes|WebSocket Messages (Preview)|Count|Total|Count of WebSocket messages based on selected source and destination|Location, Source, Destination|
+## Microsoft.App/containerapps
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|Replicas|Yes|Replica Count|Count|Maximum|Number of replicas count of container app|revisionName|
+|Requests|Yes|Requests|Count|Total|Requests processed|revisionName, podName, statusCodeCategory, statusCode|
+|RestartCount|Yes|Replica Restart Count|Count|Maximum|Restart count of container app replicas|revisionName, podName|
+|RxBytes|Yes|Network In Bytes|Bytes|Total|Network received bytes|revisionName, podName|
+|TxBytes|Yes|Network Out Bytes|Bytes|Total|Network transmitted bytes|revisionName, podName|
+|UsageNanoCores|Yes|CPU Usage|NanoCores|Average|CPU consumed by the container app, in nano cores. 1,000,000,000 nano cores = 1 core|revisionName, podName|
+|WorkingSetBytes|Yes|Memory Working Set Bytes|Bytes|Average|Container App working set memory used in bytes.|revisionName, podName|
++ ## Microsoft.AppConfiguration/configurationStores |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |APIRequestAuthentication|No|Authentication API Requests|Count|Count|Count of all requests against the Communication Services Authentication endpoint.|Operation, StatusCode, StatusCodeClass|
+|APIRequestCallRecording|Yes|Call Recording API Requests|Count|Count|Count of all requests against the Communication Services Call Recording endpoint.|Operation, StatusCode, StatusCodeClass|
|APIRequestChat|Yes|Chat API Requests|Count|Count|Count of all requests against the Communication Services Chat endpoint.|Operation, StatusCode, StatusCodeClass| |APIRequestNetworkTraversal|No|Network Traversal API Requests|Count|Count|Count of all requests against the Communication Services Network Traversal endpoint.|Operation, StatusCode, StatusCodeClass| |APIRequestSMS|Yes|SMS API Requests|Count|Count|Count of all requests against the Communication Services SMS endpoint.|Operation, StatusCode, StatusCodeClass, ErrorCode, NumberType|
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|Composite Disk Read Bytes/sec|No|Disk Read Bytes/sec(Preview)|Bytes|Average|Bytes/sec read from disk during monitoring period, please note, this metric is in preview and is subject to change before becoming generally available|No Dimensions|
-|Composite Disk Read Operations/sec|No|Disk Read Operations/sec(Preview)|Bytes|Average|Number of read IOs performed on a disk during monitoring period, please note, this metric is in preview and is subject to change before becoming generally available|No Dimensions|
-|Composite Disk Write Bytes/sec|No|Disk Write Bytes/sec(Preview)|Bytes|Average|Bytes/sec written to disk during monitoring period, please note, this metric is in preview and is subject to change before becoming generally available|No Dimensions|
-|Composite Disk Write Operations/sec|No|Disk Write Operations/sec(Preview)|Bytes|Average|Number of Write IOs performed on a disk during monitoring period, please note, this metric is in preview and is subject to change before becoming generally available|No Dimensions|
+|Composite Disk Read Bytes/sec|No|Disk Read Bytes/sec(Preview)|BytesPerSecond|Average|Bytes/sec read from disk during monitoring period, please note, this metric is in preview and is subject to change before becoming generally available|No Dimensions|
+|Composite Disk Read Operations/sec|No|Disk Read Operations/sec(Preview)|CountPerSecond|Average|Number of read IOs performed on a disk during monitoring period, please note, this metric is in preview and is subject to change before becoming generally available|No Dimensions|
+|Composite Disk Write Bytes/sec|No|Disk Write Bytes/sec(Preview)|BytesPerSecond|Average|Bytes/sec written to disk during monitoring period, please note, this metric is in preview and is subject to change before becoming generally available|No Dimensions|
+|Composite Disk Write Operations/sec|No|Disk Write Operations/sec(Preview)|CountPerSecond|Average|Number of Write IOs performed on a disk during monitoring period, please note, this metric is in preview and is subject to change before becoming generally available|No Dimensions|
+|DiskPaidBurstIOPS|No|Disk On-demand Burst Operations(Preview)|Count|Average|The accumulated operations of burst transactions used for disks with on-demand burst enabled. Emitted on an hour interval|No Dimensions|
## Microsoft.Compute/virtualMachines
This latest update adds a new column and reorders the metrics to be alphabetical
|VM Cached IOPS Consumed Percentage|Yes|VM Cached IOPS Consumed Percentage|Percent|Average|Percentage of cached disk IOPS consumed by the VM|No Dimensions| |VM Uncached Bandwidth Consumed Percentage|Yes|VM Uncached Bandwidth Consumed Percentage|Percent|Average|Percentage of uncached disk bandwidth consumed by the VM|No Dimensions| |VM Uncached IOPS Consumed Percentage|Yes|VM Uncached IOPS Consumed Percentage|Percent|Average|Percentage of uncached disk IOPS consumed by the VM|No Dimensions|
+|VmAvailabilityMetric|Yes|VM Availability Metric (Preview)|Count|Average|Measure of Availability of Virtual machines over time. Note: This metric is previewed to only a small set of customers at the moment, as we prioritize improving data quality and consistency. As we improve our data standard, we will be rolling out this feature fleetwide in a phased manner.|No Dimensions|
-## Microsoft.Compute/virtualMachineScaleSets
+## Microsoft.Compute/virtualmachineScaleSets
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
This latest update adds a new column and reorders the metrics to be alphabetical
|VM Cached IOPS Consumed Percentage|Yes|VM Cached IOPS Consumed Percentage|Percent|Average|Percentage of cached disk IOPS consumed by the VM|VMName| |VM Uncached Bandwidth Consumed Percentage|Yes|VM Uncached Bandwidth Consumed Percentage|Percent|Average|Percentage of uncached disk bandwidth consumed by the VM|VMName| |VM Uncached IOPS Consumed Percentage|Yes|VM Uncached IOPS Consumed Percentage|Percent|Average|Percentage of uncached disk IOPS consumed by the VM|VMName|
+|VmAvailabilityMetric|Yes|VM Availability Metric (Preview)|Count|Average|Measure of Availability of Virtual machines over time. Note: This metric is previewed to only a small set of customers at the moment, as we prioritize improving data quality and consistency. As we improve our data standard, we will be rolling out this feature fleetwide in a phased manner.|VMName|
## Microsoft.Compute/virtualMachineScaleSets/virtualMachines
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |egressbps|Yes|Egress Mbps|BitsPerSecond|Average|Egress Throughput|cachenodeid|
-|hitRatio|Yes|Hit Ratio|Percent|Average|Hit Ratio|cachenodeid|
+|hitRatio|Yes|Cache Efficiency|Percent|Average|Cache Efficiency|cachenodeid|
|hits|Yes|Hits|Count|Count|Count of hits|cachenodeid| |hitsbps|Yes|Hit Mbps|BitsPerSecond|Average|Hit Throughput|cachenodeid| |misses|Yes|Misses|Count|Count|Count of misses|cachenodeid|
This latest update adds a new column and reorders the metrics to be alphabetical
|WriteRequests|Yes|Write Requests|Count|Total|Count of data write requests to the account.|No Dimensions|
+## Microsoft.DataProtection/BackupVaults
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|BackupHealthEvent|Yes|Backup Health Events (preview)|Count|Count|The count of health events pertaining to backup job health|dataSourceURL, backupInstanceUrl, dataSourceType, healthStatus, backupInstanceName|
+|RestoreHealthEvent|Yes|Restore Health Events (preview)|Count|Count|The count of health events pertaining to restore job health|dataSourceURL, backupInstanceUrl, dataSourceType, healthStatus, backupInstanceName|
++ ## Microsoft.DataShare/accounts |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|DataUsage|No|Data Usage|Bytes|Total|Total data usage reported at 5 minutes granularity|CollectionName, DatabaseName, Region| |DedicatedGatewayAverageCPUUsage|No|DedicatedGatewayAverageCPUUsage|Percent|Average|Average CPU usage across dedicated gateway instances|Region, MetricType| |DedicatedGatewayAverageMemoryUsage|No|DedicatedGatewayAverageMemoryUsage|Bytes|Average|Average memory usage across dedicated gateway instances, which is used for both routing requests and caching data|Region|
+|DedicatedGatewayCPUUsage|No|DedicatedGatewayCPUUsage|Percent|Average|CPU usage across dedicated gateway instances|Region, ApplicationType|
|DedicatedGatewayMaximumCPUUsage|No|DedicatedGatewayMaximumCPUUsage|Percent|Average|Average Maximum CPU usage across dedicated gateway instances|Region, MetricType|
+|DedicatedGatewayMemoryUsage|No|DedicatedGatewayMemoryUsage|Bytes|Average|Memory usage across dedicated gateway instances|Region, ApplicationType|
|DedicatedGatewayRequests|Yes|DedicatedGatewayRequests|Count|Count|Requests at the dedicated gateway|DatabaseName, CollectionName, CacheExercised, OperationName, Region, CacheHit| |DeleteAccount|Yes|Account Deleted|Count|Count|Account Deleted|No Dimensions| |DocumentCount|No|Document Count|Count|Total|Total document count reported at 5 minutes, 1 hour and 1 day granularity|CollectionName, DatabaseName, Region|
This latest update adds a new column and reorders the metrics to be alphabetical
|NormalizedRUConsumption|No|Normalized RU Consumption|Percent|Maximum|Max RU consumption percentage per minute|CollectionName, DatabaseName, Region, PartitionKeyRangeId, CollectionRid| |OfflineRegion|No|Region Offlined|Count|Count|Region Offlined|Region, StatusCode, Role, OperationName| |OnlineRegion|No|Region Onlined|Count|Count|Region Onlined|Region, StatusCode, Role, OperationName|
+|PhysicalPartitionThroughputInfo|No|Physical Partition Throughput|Count|Maximum|Physical Partition Throughput|CollectionName, DatabaseName, PhysicalPartitionId, OfferOwnerRid, Region|
|ProvisionedThroughput|No|Provisioned Throughput|Count|Maximum|Provisioned Throughput|DatabaseName, CollectionName| |RegionFailover|Yes|Region Failed Over|Count|Count|Region Failed Over|No Dimensions| |RemoveRegion|Yes|Region Removed|Count|Count|Region Removed|Region|
This latest update adds a new column and reorders the metrics to be alphabetical
|TableTableThroughputUpdate|No|AzureTable Table Throughput Updated|Count|Count|AzureTable Table Throughput Updated|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest| |TableTableUpdate|No|AzureTable Table Updated|Count|Count|AzureTable Table Updated|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType| |TotalRequests|Yes|Total Requests|Count|Count|Number of requests made|DatabaseName, CollectionName, Region, StatusCode, OperationType, Status|
+|TotalRequestsPreview|No|Total Requests (Preview)|Count|Count|Number of requests made with CapacityType|DatabaseName, CollectionName, Region, StatusCode, OperationType, Status, CapacityType|
|TotalRequestUnits|Yes|Total Request Units|Count|Total|Request Units consumed|DatabaseName, CollectionName, Region, StatusCode, OperationType, Status|
+|TotalRequestUnitsPreview|No|Total Request Units (Preview)|Count|Total|Request Units consumed with CapacityType|DatabaseName, CollectionName, Region, StatusCode, OperationType, Status, CapacityType|
|UpdateAccountKeys|Yes|Account Keys Updated|Count|Count|Account Keys Updated|KeyType| |UpdateAccountNetworkSettings|Yes|Account Network Settings Updated|Count|Count|Account Network Settings Updated|No Dimensions| |UpdateAccountReplicationSettings|Yes|Account Replication Settings Updated|Count|Count|Account Replication Settings Updated|No Dimensions| |UpdateDiagnosticsSettings|No|Account Diagnostic Settings Updated|Count|Count|Account Diagnostic Settings Updated|DiagnosticSettingsName, ResourceGroupName|
+## microsoft.edgezones/edgezones
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|TotalVcoreCapacity|Yes|Total VCore Capacity|Count|Average|The total capacity of the General-Purpose Compute vcore in Edge Zone Enterprise site. |No Dimensions|
+|VcoresUsage|Yes|Vcore Usage Percentage|Percent|Average|The utilization of the General-Purpose Compute vcores in Edge Zone Enterprise site |No Dimensions|
++ ## Microsoft.EventGrid/domains |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|ActiveConnections|No|ActiveConnections|Count|Maximum|Total Active Connections for Microsoft.EventHub.|No Dimensions|
+|ActiveConnections|No|ActiveConnections|Count|Average|Total Active Connections for Microsoft.EventHub.|No Dimensions|
|AvailableMemory|No|Available Memory|Percent|Maximum|Available memory for the Event Hub Cluster as a percentage of total memory.|Role| |CaptureBacklog|No|Capture Backlog.|Count|Total|Capture Backlog for Microsoft.EventHub.|No Dimensions| |CapturedBytes|No|Captured Bytes.|Bytes|Total|Captured Bytes for Microsoft.EventHub.|No Dimensions| |CapturedMessages|No|Captured Messages.|Count|Total|Captured Messages for Microsoft.EventHub.|No Dimensions|
-|ConnectionsClosed|No|Connections Closed.|Count|Maximum|Connections Closed for Microsoft.EventHub.|No Dimensions|
-|ConnectionsOpened|No|Connections Opened.|Count|Maximum|Connections Opened for Microsoft.EventHub.|No Dimensions|
+|ConnectionsClosed|No|Connections Closed.|Count|Average|Connections Closed for Microsoft.EventHub.|No Dimensions|
+|ConnectionsOpened|No|Connections Opened.|Count|Average|Connections Opened for Microsoft.EventHub.|No Dimensions|
|CPU|No|CPU|Percent|Maximum|CPU utilization for the Event Hub Cluster as a percentage|Role| |IncomingBytes|Yes|Incoming Bytes.|Bytes|Total|Incoming Bytes for Microsoft.EventHub.|No Dimensions| |IncomingMessages|Yes|Incoming Messages|Count|Total|Incoming Messages for Microsoft.EventHub.|No Dimensions| |IncomingRequests|Yes|Incoming Requests|Count|Total|Incoming Requests for Microsoft.EventHub.|No Dimensions| |OutgoingBytes|Yes|Outgoing Bytes.|Bytes|Total|Outgoing Bytes for Microsoft.EventHub.|No Dimensions| |OutgoingMessages|Yes|Outgoing Messages|Count|Total|Outgoing Messages for Microsoft.EventHub.|No Dimensions|
-|QuotaExceededErrors|No|Quota Exceeded Errors.|Count|Total|Quota Exceeded Errors for Microsoft.EventHub.|No Dimensions|
-|ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.EventHub.|No Dimensions|
+|QuotaExceededErrors|No|Quota Exceeded Errors.|Count|Total|Quota Exceeded Errors for Microsoft.EventHub.|OperationResult|
+|ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.EventHub.|OperationResult|
|Size|No|Size|Bytes|Average|Size of an EventHub in Bytes.|Role|
-|SuccessfulRequests|No|Successful Requests|Count|Total|Successful Requests for Microsoft.EventHub.|No Dimensions|
-|ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.EventHub.|No Dimensions|
-|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.EventHub.|No Dimensions|
+|SuccessfulRequests|No|Successful Requests|Count|Total|Successful Requests for Microsoft.EventHub.|OperationResult|
+|ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.EventHub.|OperationResult|
+|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.EventHub.|OperationResult|
## Microsoft.EventHub/Namespaces
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|HyperVVirtualProcessorUtilization|Yes|Average CPU Utilization|Percent|Average|Total average percentage of virtual CPU utilization at one minute interval. The total number of virtual CPU is based on user configured value in SKU definition. Further filter can be applied based on RoleName defined in SKU.|InstanceName
+|HyperVVirtualProcessorUtilization|Yes|Average CPU Utilization|Percent|Average|Total average percentage of virtual CPU utilization at one minute interval. The total number of virtual CPU is based on user configured value in SKU definition. Further filter can be applied based on RoleName defined in SKU.|InstanceName|
## microsoft.insights/autoscalesettings
This latest update adds a new column and reorders the metrics to be alphabetical
|capacity_cpu_cores|Yes|Total number of cpu cores in a connected cluster|Count|Total|Total number of cpu cores in a connected cluster|No Dimensions|
-## Microsoft.Kusto/Clusters
+## Microsoft.Kusto/clusters
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
This latest update adds a new column and reorders the metrics to be alphabetical
|MaterializedViewHealth|Yes|Materialized View Health|Count|Average|The health of the materialized view (1 for healthy, 0 for non-healthy)|Database, MaterializedViewName| |MaterializedViewRecordsInDelta|Yes|Materialized View Records In Delta|Count|Average|The number of records in the non-materialized part of the view|Database, MaterializedViewName| |MaterializedViewResult|Yes|Materialized View Result|Count|Average|The result of the materialization process|Database, MaterializedViewName, Result|
-|QueryDuration|Yes|Query duration|Milliseconds|Average|Queries' duration in seconds|QueryStatus|
+|QueryDuration|Yes|Query duration|MilliSeconds|Average|Queries' duration in seconds|QueryStatus|
|QueryResult|No|Query Result|Count|Count|Total number of queries.|QueryStatus| |QueueLength|Yes|Queue Length|Count|Average|Number of pending messages in a component's queue.|ComponentType| |QueueOldestMessage|Yes|Queue Oldest Message|Count|Average|Time in seconds from when the oldest message in queue was inserted.|ComponentType| |ReceivedDataSizeBytes|Yes|Received Data Size Bytes|Bytes|Average|Size of data received by data connection. This is the size of the data stream, or of raw data size if provided.|ComponentType, ComponentName| |StageLatency|Yes|Stage Latency|Seconds|Average|Cumulative time from when a message is discovered until it is received by the reporting component for processing (discovery time is set when message is enqueued for ingestion queue, or when discovered by data connection).|Database, ComponentType|
-|SteamingIngestRequestRate|Yes|Streaming Ingest Request Rate|Count|RateRequestsPerSecond|Streaming ingest request rate (requests per second)|No Dimensions|
|StreamingIngestDataRate|Yes|Streaming Ingest Data Rate|Count|Average|Streaming ingest data rate (MB per second)|No Dimensions|
-|StreamingIngestDuration|Yes|Streaming Ingest Duration|Milliseconds|Average|Streaming ingest duration in milliseconds|No Dimensions|
+|StreamingIngestDuration|Yes|Streaming Ingest Duration|MilliSeconds|Average|Streaming ingest duration in milliseconds|No Dimensions|
|StreamingIngestResults|Yes|Streaming Ingest Result|Count|Count|Streaming ingest result|Result| |TotalNumberOfConcurrentQueries|Yes|Total number of concurrent queries|Count|Maximum|Total number of concurrent queries|No Dimensions| |TotalNumberOfExtents|Yes|Total number of extents|Count|Average|Total number of data extents|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|WeakConsistencyLatency|Yes|Weak consistency latency|Seconds|Average|The max latency between the previous metadata sync and the next one (in DB/node scope)|Database, RoleInstance|
-## Microsoft.Logic/integrationServiceEnvironments
+## Microsoft.Logic/IntegrationServiceEnvironments
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
This latest update adds a new column and reorders the metrics to be alphabetical
|ActionsStarted|Yes|Actions Started |Count|Total|Number of workflow actions started.|No Dimensions| |ActionsSucceeded|Yes|Actions Succeeded |Count|Total|Number of workflow actions succeeded.|No Dimensions| |ActionSuccessLatency|Yes|Action Success Latency |Seconds|Average|Latency of succeeded workflow actions.|No Dimensions|
-|ActionThrottledEvents|Yes|Action Throttled Events|Count|Total|Number of workflow action throttled events..|No Dimensions|
|IntegrationServiceEnvironmentConnectorMemoryUsage|Yes|Connector Memory Usage for Integration Service Environment|Percent|Average|Connector memory usage for integration service environment.|No Dimensions| |IntegrationServiceEnvironmentConnectorProcessorUsage|Yes|Connector Processor Usage for Integration Service Environment|Percent|Average|Connector processor usage for integration service environment.|No Dimensions| |IntegrationServiceEnvironmentWorkflowMemoryUsage|Yes|Workflow Memory Usage for Integration Service Environment|Percent|Average|Workflow memory usage for integration service environment.|No Dimensions| |IntegrationServiceEnvironmentWorkflowProcessorUsage|Yes|Workflow Processor Usage for Integration Service Environment|Percent|Average|Workflow processor usage for integration service environment.|No Dimensions|
-|RunFailurePercentage|Yes|Run Failure Percentage|Percent|Total|Percentage of workflow runs failed.|No Dimensions|
|RunLatency|Yes|Run Latency|Seconds|Average|Latency of completed workflow runs.|No Dimensions| |RunsCancelled|Yes|Runs Cancelled|Count|Total|Number of workflow runs cancelled.|No Dimensions| |RunsCompleted|Yes|Runs Completed|Count|Total|Number of workflow runs completed.|No Dimensions| |RunsFailed|Yes|Runs Failed|Count|Total|Number of workflow runs failed.|No Dimensions| |RunsStarted|Yes|Runs Started|Count|Total|Number of workflow runs started.|No Dimensions| |RunsSucceeded|Yes|Runs Succeeded|Count|Total|Number of workflow runs succeeded.|No Dimensions|
-|RunStartThrottledEvents|Yes|Run Start Throttled Events|Count|Total|Number of workflow run start throttled events.|No Dimensions|
|RunSuccessLatency|Yes|Run Success Latency|Seconds|Average|Latency of succeeded workflow runs.|No Dimensions|
-|RunThrottledEvents|Yes|Run Throttled Events|Count|Total|Number of workflow action or trigger throttled events.|No Dimensions|
|TriggerFireLatency|Yes|Trigger Fire Latency |Seconds|Average|Latency of fired workflow triggers.|No Dimensions| |TriggerLatency|Yes|Trigger Latency |Seconds|Average|Latency of completed workflow triggers.|No Dimensions| |TriggersCompleted|Yes|Triggers Completed |Count|Total|Number of workflow triggers completed.|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|TriggersStarted|Yes|Triggers Started |Count|Total|Number of workflow triggers started.|No Dimensions| |TriggersSucceeded|Yes|Triggers Succeeded |Count|Total|Number of workflow triggers succeeded.|No Dimensions| |TriggerSuccessLatency|Yes|Trigger Success Latency |Seconds|Average|Latency of succeeded workflow triggers.|No Dimensions|
-|TriggerThrottledEvents|Yes|Trigger Throttled Events|Count|Total|Number of workflow trigger throttled events.|No Dimensions|
## Microsoft.Logic/Workflows
This latest update adds a new column and reorders the metrics to be alphabetical
|CapacityUnits|No|Current Capacity Units|Count|Average|Capacity Units consumed|No Dimensions| |ClientRtt|No|Client RTT|MilliSeconds|Average|Average round trip time between clients and Application Gateway. This metric indicates how long it takes to establish connections and return acknowledgements|Listener| |ComputeUnits|No|Current Compute Units|Count|Average|Compute Units consumed|No Dimensions|
+|ConnectionLifetime|No|Connection Lifetime|MilliSeconds|Average|Average time duration from the start of a new connection to its termination|Listener|
|CpuUtilization|No|CPU Utilization|Percent|Average|Current CPU utilization of the Application Gateway|No Dimensions| |CurrentConnections|Yes|Current Connections|Count|Total|Count of current connections established with Application Gateway|No Dimensions| |EstimatedBilledCapacityUnits|No|Estimated Billed Capacity Units|Count|Average|Estimated capacity units that will be charged|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|BitsOutPerSecond|Yes|BitsOutPerSecond|BitsPerSecond|Average|Bits egressing Azure per second|No Dimensions|
+## Microsoft.Network/dnsForwardingRulesets
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|ForwardingRuleCount|Yes|Forwarding Rule Count|Count|Maximum|This metric indicates the number of forwarding rules present in each DNS forwarding ruleset.|No Dimensions|
+|VirtualNetworkLinkCount|Yes|Virtual Network Link Count|Count|Maximum|This metric indicates the number of associated virtual network links to a DNS forwarding ruleset.|No Dimensions|
++
+## Microsoft.Network/dnsResolvers
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|InboundEndpointCount|Yes|Inbound Endpoint Count|Count|Maximum|This metric indicates the number of inbound endpoints created for a DNS Resolver.|No Dimensions|
+|OutboundEndpointCount|Yes|Outbound Endpoint Count|Count|Maximum|This metric indicates the number of outbound endpoints created for a DNS Resolver.|No Dimensions|
++ ## Microsoft.Network/dnszones |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|BitsOutPerSecond|Yes|BitsOutPerSecond|BitsPerSecond|Average|Bits egressing Azure per second|No Dimensions|
-## Microsoft.Network/expressRouteGateways
+## microsoft.network/expressroutegateways
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|ErGatewayConnectionBitsInPerSecond|No|BitsInPerSecond|BitsPerSecond|Average|Bits ingressing Azure per second|ConnectionName|
-|ErGatewayConnectionBitsOutPerSecond|No|BitsOutPerSecond|BitsPerSecond|Average|Bits egressing Azure per second|ConnectionName|
-|ExpressRouteGatewayCountOfRoutesAdvertisedToPeer|Yes|Count Of Routes Advertised to Peer|Count|Maximum|Count Of Routes Advertised To Peer by ExpressRouteGateway|roleInstance|
-|ExpressRouteGatewayCountOfRoutesLearnedFromPeer|Yes|Count Of Routes Learned from Peer|Count|Maximum|Count Of Routes Learned From Peer by ExpressRouteGateway|roleInstance|
+|ErGatewayConnectionBitsInPerSecond|No|Bits In Per Second|BitsPerSecond|Average|Bits per second ingressing Azure via ExpressRoute Gateway which can be further split for specific connections|ConnectionName|
+|ErGatewayConnectionBitsOutPerSecond|No|Bits Out Per Second|BitsPerSecond|Average|Bits per second egressing Azure via ExpressRoute Gateway which can be further split for specific connections|ConnectionName|
+|ExpressRouteGatewayBitsPerSecond|No|Bits Received Per second|BitsPerSecond|Average|Total Bits received on ExpressRoute Gateway per second|roleInstance|
+|ExpressRouteGatewayCountOfRoutesAdvertisedToPeer|Yes|Count Of Routes Advertised to Peer|Count|Maximum|Count Of Routes Advertised To Peer by ExpressRoute Gateway|roleInstance|
+|ExpressRouteGatewayCountOfRoutesLearnedFromPeer|Yes|Count Of Routes Learned from Peer|Count|Maximum|Count Of Routes Learned From Peer by ExpressRoute Gateway|roleInstance|
|ExpressRouteGatewayCpuUtilization|Yes|CPU utilization|Percent|Average|CPU Utilization of the ExpressRoute Gateway|roleInstance| |ExpressRouteGatewayFrequencyOfRoutesChanged|No|Frequency of Routes change|Count|Total|Frequency of Routes change in ExpressRoute Gateway|roleInstance| |ExpressRouteGatewayNumberOfVmInVnet|No|Number of VMs in the Virtual Network|Count|Maximum|Number of VMs in the Virtual Network|No Dimensions|
-|ExpressRouteGatewayPacketsPerSecond|No|Packets per second|CountPerSecond|Average|Packet count of ExpressRoute Gateway|roleInstance|
+|ExpressRouteGatewayPacketsPerSecond|No|Packets received per second|CountPerSecond|Average|Total Packets received on ExpressRoute Gateway per second|roleInstance|
## Microsoft.Network/expressRoutePorts
This latest update adds a new column and reorders the metrics to be alphabetical
|BgpPeerStatus|No|Bgp Peer Status|Count|Maximum|1 - Connected, 0 - Not connected|routeserviceinstance, bgppeerip, bgppeertype| |CountOfRoutesAdvertisedToPeer|No|Count Of Routes Advertised To Peer|Count|Maximum|Total number of routes advertised to peer|routeserviceinstance, bgppeerip, bgppeertype| |CountOfRoutesLearnedFromPeer|No|Count Of Routes Learned From Peer|Count|Maximum|Total number of routes learned from peer|routeserviceinstance, bgppeerip, bgppeertype|
+|VirtualHubDataProcessed|No|Data Processed by the Virtual Hub Router|Bytes|Total|Data Processed by the Virtual Hub Router|No Dimensions|
## microsoft.network/virtualnetworkgateways
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|Average_% Available Memory|Yes|% Available Memory|Count|Average|Average_% Available Memory. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Available Swap Space|Yes|% Available Swap Space|Count|Average|Average_% Available Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Committed Bytes In Use|Yes|% Committed Bytes In Use|Count|Average|Average_% Committed Bytes In Use. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% DPC Time|Yes|% DPC Time|Count|Average|Average_% DPC Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Free Inodes|Yes|% Free Inodes|Count|Average|Average_% Free Inodes. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Free Space|Yes|% Free Space|Count|Average|Average_% Free Space. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Idle Time|Yes|% Idle Time|Count|Average|Average_% Idle Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Interrupt Time|Yes|% Interrupt Time|Count|Average|Average_% Interrupt Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% IO Wait Time|Yes|% IO Wait Time|Count|Average|Average_% IO Wait Time. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Nice Time|Yes|% Nice Time|Count|Average|Average_% Nice Time. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Privileged Time|Yes|% Privileged Time|Count|Average|Average_% Privileged Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Processor Time|Yes|% Processor Time|Count|Average|Average_% Processor Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Used Inodes|Yes|% Used Inodes|Count|Average|Average_% Used Inodes. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Used Memory|Yes|% Used Memory|Count|Average|Average_% Used Memory. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Used Space|Yes|% Used Space|Count|Average|Average_% Used Space. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Used Swap Space|Yes|% Used Swap Space|Count|Average|Average_% Used Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% User Time|Yes|% User Time|Count|Average|Average_% User Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Available MBytes|Yes|Available MBytes|Count|Average|Average_Available MBytes. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Available MBytes Memory|Yes|Available MBytes Memory|Count|Average|Average_Available MBytes Memory. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Available MBytes Swap|Yes|Available MBytes Swap|Count|Average|Average_Available MBytes Swap. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Avg. Disk sec/Read|Yes|Avg. Disk sec/Read|Count|Average|Average_Avg. Disk sec/Read. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Avg. Disk sec/Transfer|Yes|Avg. Disk sec/Transfer|Count|Average|Average_Avg. Disk sec/Transfer. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Avg. Disk sec/Write|Yes|Avg. Disk sec/Write|Count|Average|Average_Avg. Disk sec/Write. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Bytes Received/sec|Yes|Bytes Received/sec|Count|Average|Average_Bytes Received/sec. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Bytes Sent/sec|Yes|Bytes Sent/sec|Count|Average|Average_Bytes Sent/sec. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Bytes Total/sec|Yes|Bytes Total/sec|Count|Average|Average_Bytes Total/sec. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Current Disk Queue Length|Yes|Current Disk Queue Length|Count|Average|Average_Current Disk Queue Length. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Read Bytes/sec|Yes|Disk Read Bytes/sec|Count|Average|Average_Disk Read Bytes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Reads/sec|Yes|Disk Reads/sec|Count|Average|Average_Disk Reads/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Transfers/sec|Yes|Disk Transfers/sec|Count|Average|Average_Disk Transfers/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Write Bytes/sec|Yes|Disk Write Bytes/sec|Count|Average|Average_Disk Write Bytes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Writes/sec|Yes|Disk Writes/sec|Count|Average|Average_Disk Writes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Free Megabytes|Yes|Free Megabytes|Count|Average|Average_Free Megabytes. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Free Physical Memory|Yes|Free Physical Memory|Count|Average|Average_Free Physical Memory. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Free Space in Paging Files|Yes|Free Space in Paging Files|Count|Average|Average_Free Space in Paging Files. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Free Virtual Memory|Yes|Free Virtual Memory|Count|Average|Average_Free Virtual Memory. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Logical Disk Bytes/sec|Yes|Logical Disk Bytes/sec|Count|Average|Average_Logical Disk Bytes/sec. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Page Reads/sec|Yes|Page Reads/sec|Count|Average|Average_Page Reads/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Page Writes/sec|Yes|Page Writes/sec|Count|Average|Average_Page Writes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Pages/sec|Yes|Pages/sec|Count|Average|Average_Pages/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Pct Privileged Time|Yes|Pct Privileged Time|Count|Average|Average_Pct Privileged Time. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Pct User Time|Yes|Pct User Time|Count|Average|Average_Pct User Time. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Physical Disk Bytes/sec|Yes|Physical Disk Bytes/sec|Count|Average|Average_Physical Disk Bytes/sec. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Processes|Yes|Processes|Count|Average|Average_Processes. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Processor Queue Length|Yes|Processor Queue Length|Count|Average|Average_Processor Queue Length. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Size Stored In Paging Files|Yes|Size Stored In Paging Files|Count|Average|Average_Size Stored In Paging Files. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Bytes|Yes|Total Bytes|Count|Average|Average_Total Bytes. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Bytes Received|Yes|Total Bytes Received|Count|Average|Average_Total Bytes Received. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Bytes Transmitted|Yes|Total Bytes Transmitted|Count|Average|Average_Total Bytes Transmitted. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Collisions|Yes|Total Collisions|Count|Average|Average_Total Collisions. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Packets Received|Yes|Total Packets Received|Count|Average|Average_Total Packets Received. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Packets Transmitted|Yes|Total Packets Transmitted|Count|Average|Average_Total Packets Transmitted. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Rx Errors|Yes|Total Rx Errors|Count|Average|Average_Total Rx Errors. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Tx Errors|Yes|Total Tx Errors|Count|Average|Average_Total Tx Errors. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Uptime|Yes|Uptime|Count|Average|Average_Uptime. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Used MBytes Swap Space|Yes|Used MBytes Swap Space|Count|Average|Average_Used MBytes Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Used Memory kBytes|Yes|Used Memory kBytes|Count|Average|Average_Used Memory kBytes. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Used Memory MBytes|Yes|Used Memory MBytes|Count|Average|Average_Used Memory MBytes. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Users|Yes|Users|Count|Average|Average_Users. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Virtual Shared Memory|Yes|Virtual Shared Memory|Count|Average|Average_Virtual Shared Memory. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Event|Yes|Event|Count|Average|Event. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Source, EventLog, Computer, EventCategory, EventLevel, EventLevelName, EventID|
-|Heartbeat|Yes|Heartbeat|Count|Total|Heartbeat. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, OSType, Version, SourceComputerId|
-|Update|Yes|Update|Count|Average|Update. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, Product, Classification, UpdateState, Optional, Approved|
+|Average_% Available Memory|Yes|% Available Memory|Count|Average|Average_% Available Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Available Swap Space|Yes|% Available Swap Space|Count|Average|Average_% Available Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Committed Bytes In Use|Yes|% Committed Bytes In Use|Count|Average|Average_% Committed Bytes In Use. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% DPC Time|Yes|% DPC Time|Count|Average|Average_% DPC Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Free Inodes|Yes|% Free Inodes|Count|Average|Average_% Free Inodes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Free Space|Yes|% Free Space|Count|Average|Average_% Free Space. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Idle Time|Yes|% Idle Time|Count|Average|Average_% Idle Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Interrupt Time|Yes|% Interrupt Time|Count|Average|Average_% Interrupt Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% IO Wait Time|Yes|% IO Wait Time|Count|Average|Average_% IO Wait Time. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Nice Time|Yes|% Nice Time|Count|Average|Average_% Nice Time. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Privileged Time|Yes|% Privileged Time|Count|Average|Average_% Privileged Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Processor Time|Yes|% Processor Time|Count|Average|Average_% Processor Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Used Inodes|Yes|% Used Inodes|Count|Average|Average_% Used Inodes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Used Memory|Yes|% Used Memory|Count|Average|Average_% Used Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Used Space|Yes|% Used Space|Count|Average|Average_% Used Space. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Used Swap Space|Yes|% Used Swap Space|Count|Average|Average_% Used Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% User Time|Yes|% User Time|Count|Average|Average_% User Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Available MBytes|Yes|Available MBytes|Count|Average|Average_Available MBytes. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Available MBytes Memory|Yes|Available MBytes Memory|Count|Average|Average_Available MBytes Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Available MBytes Swap|Yes|Available MBytes Swap|Count|Average|Average_Available MBytes Swap. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Avg. Disk sec/Read|Yes|Avg. Disk sec/Read|Count|Average|Average_Avg. Disk sec/Read. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Avg. Disk sec/Transfer|Yes|Avg. Disk sec/Transfer|Count|Average|Average_Avg. Disk sec/Transfer. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Avg. Disk sec/Write|Yes|Avg. Disk sec/Write|Count|Average|Average_Avg. Disk sec/Write. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Bytes Received/sec|Yes|Bytes Received/sec|Count|Average|Average_Bytes Received/sec. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Bytes Sent/sec|Yes|Bytes Sent/sec|Count|Average|Average_Bytes Sent/sec. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Bytes Total/sec|Yes|Bytes Total/sec|Count|Average|Average_Bytes Total/sec. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Current Disk Queue Length|Yes|Current Disk Queue Length|Count|Average|Average_Current Disk Queue Length. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Read Bytes/sec|Yes|Disk Read Bytes/sec|Count|Average|Average_Disk Read Bytes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Reads/sec|Yes|Disk Reads/sec|Count|Average|Average_Disk Reads/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Transfers/sec|Yes|Disk Transfers/sec|Count|Average|Average_Disk Transfers/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Write Bytes/sec|Yes|Disk Write Bytes/sec|Count|Average|Average_Disk Write Bytes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Writes/sec|Yes|Disk Writes/sec|Count|Average|Average_Disk Writes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Free Megabytes|Yes|Free Megabytes|Count|Average|Average_Free Megabytes. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Free Physical Memory|Yes|Free Physical Memory|Count|Average|Average_Free Physical Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Free Space in Paging Files|Yes|Free Space in Paging Files|Count|Average|Average_Free Space in Paging Files. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Free Virtual Memory|Yes|Free Virtual Memory|Count|Average|Average_Free Virtual Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Logical Disk Bytes/sec|Yes|Logical Disk Bytes/sec|Count|Average|Average_Logical Disk Bytes/sec. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Page Reads/sec|Yes|Page Reads/sec|Count|Average|Average_Page Reads/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Page Writes/sec|Yes|Page Writes/sec|Count|Average|Average_Page Writes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Pages/sec|Yes|Pages/sec|Count|Average|Average_Pages/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Pct Privileged Time|Yes|Pct Privileged Time|Count|Average|Average_Pct Privileged Time. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Pct User Time|Yes|Pct User Time|Count|Average|Average_Pct User Time. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Physical Disk Bytes/sec|Yes|Physical Disk Bytes/sec|Count|Average|Average_Physical Disk Bytes/sec. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Processes|Yes|Processes|Count|Average|Average_Processes. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Processor Queue Length|Yes|Processor Queue Length|Count|Average|Average_Processor Queue Length. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Size Stored In Paging Files|Yes|Size Stored In Paging Files|Count|Average|Average_Size Stored In Paging Files. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Bytes|Yes|Total Bytes|Count|Average|Average_Total Bytes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Bytes Received|Yes|Total Bytes Received|Count|Average|Average_Total Bytes Received. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Bytes Transmitted|Yes|Total Bytes Transmitted|Count|Average|Average_Total Bytes Transmitted. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Collisions|Yes|Total Collisions|Count|Average|Average_Total Collisions. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Packets Received|Yes|Total Packets Received|Count|Average|Average_Total Packets Received. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Packets Transmitted|Yes|Total Packets Transmitted|Count|Average|Average_Total Packets Transmitted. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Rx Errors|Yes|Total Rx Errors|Count|Average|Average_Total Rx Errors. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Tx Errors|Yes|Total Tx Errors|Count|Average|Average_Total Tx Errors. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Uptime|Yes|Uptime|Count|Average|Average_Uptime. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Used MBytes Swap Space|Yes|Used MBytes Swap Space|Count|Average|Average_Used MBytes Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Used Memory kBytes|Yes|Used Memory kBytes|Count|Average|Average_Used Memory kBytes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Used Memory MBytes|Yes|Used Memory MBytes|Count|Average|Average_Used Memory MBytes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Users|Yes|Users|Count|Average|Average_Users. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Virtual Shared Memory|Yes|Virtual Shared Memory|Count|Average|Average_Virtual Shared Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Event|Yes|Event|Count|Average|Event. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Source, EventLog, Computer, EventCategory, EventLevel, EventLevelName, EventID|
+|Heartbeat|Yes|Heartbeat|Count|Total|Heartbeat. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, OSType, Version, SourceComputerId|
+|Update|Yes|Update|Count|Average|Update. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, Product, Classification, UpdateState, Optional, Approved|
## Microsoft.Peering/peerings
This latest update adds a new column and reorders the metrics to be alphabetical.
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|CleanerCurrentPrice|Yes|Memory: Cleaner Current Price|Count|Average|Current price of memory, $/byte/time, normalized to 1000.|No Dimensions|
-|CleanerMemoryNonshrinkable|Yes|Memory: Cleaner Memory nonshrinkable|Bytes|Average|Amount of memory, in bytes, not subject to purging by the background cleaner.|No Dimensions|
-|CleanerMemoryShrinkable|Yes|Memory: Cleaner Memory shrinkable|Bytes|Average|Amount of memory, in bytes, subject to purging by the background cleaner.|No Dimensions|
-|CommandPoolBusyThreads|Yes|Threads: Command pool busy threads|Count|Average|Number of busy threads in the command thread pool.|No Dimensions|
-|CommandPoolIdleThreads|Yes|Threads: Command pool idle threads|Count|Average|Number of idle threads in the command thread pool.|No Dimensions|
-|CommandPoolJobQueueLength|Yes|Command Pool Job Queue Length|Count|Average|Number of jobs in the queue of the command thread pool.|No Dimensions|
|cpu_metric|Yes|CPU (Gen2)|Percent|Average|CPU Utilization. Supported only for Power BI Embedded Generation 2 resources.|No Dimensions|
|cpu_workload_metric|Yes|CPU Per Workload (Gen2)|Percent|Average|CPU Utilization Per Workload. Supported only for Power BI Embedded Generation 2 resources.|Workload|
-|CurrentConnections|Yes|Connection: Current connections|Count|Average|Current number of client connections established.|No Dimensions|
-|CurrentUserSessions|Yes|Current User Sessions|Count|Average|Current number of user sessions established.|No Dimensions|
-|LongParsingBusyThreads|Yes|Threads: Long parsing busy threads|Count|Average|Number of busy threads in the long parsing thread pool.|No Dimensions|
-|LongParsingIdleThreads|Yes|Threads: Long parsing idle threads|Count|Average|Number of idle threads in the long parsing thread pool.|No Dimensions|
-|LongParsingJobQueueLength|Yes|Threads: Long parsing job queue length|Count|Average|Number of jobs in the queue of the long parsing thread pool.|No Dimensions|
-|memory_metric|Yes|Memory (Gen1)|Bytes|Average|Memory. Range 0-3 GB for A1, 0-5 GB for A2, 0-10 GB for A3, 0-25 GB for A4, 0-50 GB for A5 and 0-100 GB for A6. Supported only for Power BI Embedded Generation 1 resources.|No Dimensions|
-|memory_thrashing_metric|Yes|Memory Thrashing (Datasets) (Gen1)|Percent|Average|Average memory thrashing. Supported only for Power BI Embedded Generation 1 resources.|No Dimensions|
-|MemoryLimitHard|Yes|Memory: Memory Limit Hard|Bytes|Average|Hard memory limit, from configuration file.|No Dimensions|
-|MemoryLimitHigh|Yes|Memory: Memory Limit High|Bytes|Average|High memory limit, from configuration file.|No Dimensions|
-|MemoryLimitLow|Yes|Memory: Memory Limit Low|Bytes|Average|Low memory limit, from configuration file.|No Dimensions|
-|MemoryLimitVertiPaq|Yes|Memory: Memory Limit VertiPaq|Bytes|Average|In-memory limit, from configuration file.|No Dimensions|
-|MemoryUsage|Yes|Memory: Memory Usage|Bytes|Average|Memory usage of the server process as used in calculating cleaner memory price. Equal to counter Process\PrivateBytes plus the size of memory-mapped data, ignoring any memory which was mapped or allocated by the xVelocity in-memory analytics engine (VertiPaq) in excess of the xVelocity engine Memory Limit.|No Dimensions|
|overload_metric|Yes|Overload (Gen2)|Count|Average|Resource Overload, 1 if resource is overloaded, otherwise 0. Supported only for Power BI Embedded Generation 2 resources.|No Dimensions|
-|ProcessingPoolBusyIOJobThreads|Yes|Threads: Processing pool busy I/O job threads|Count|Average|Number of threads running I/O jobs in the processing thread pool.|No Dimensions|
-|ProcessingPoolBusyNonIOThreads|Yes|Threads: Processing pool busy non-I/O threads|Count|Average|Number of threads running non-I/O jobs in the processing thread pool.|No Dimensions|
-|ProcessingPoolIdleIOJobThreads|Yes|Threads: Processing pool idle I/O job threads|Count|Average|Number of idle threads for I/O jobs in the processing thread pool.|No Dimensions|
-|ProcessingPoolIdleNonIOThreads|Yes|Threads: Processing pool idle non-I/O threads|Count|Average|Number of idle threads in the processing thread pool dedicated to non-I/O jobs.|No Dimensions|
-|ProcessingPoolIOJobQueueLength|Yes|Threads: Processing pool I/O job queue length|Count|Average|Number of I/O jobs in the queue of the processing thread pool.|No Dimensions|
-|ProcessingPoolJobQueueLength|Yes|Processing Pool Job Queue Length|Count|Average|Number of non-I/O jobs in the queue of the processing thread pool.|No Dimensions|
-|qpu_high_utilization_metric|Yes|QPU High Utilization (Gen1)|Count|Total|QPU High Utilization In Last Minute, 1 For High QPU Utilization, Otherwise 0. Supported only for Power BI Embedded Generation 1 resources.|No Dimensions|
-|qpu_metric|Yes|QPU (Gen1)|Count|Average|QPU. Range for A1 is 0-20, A2 is 0-40, A3 is 0-40, A4 is 0-80, A5 is 0-160, A6 is 0-320. Supported only for Power BI Embedded Generation 1 resources.|No Dimensions|
-|QueryDuration|Yes|Query Duration (Datasets) (Gen1)|Milliseconds|Average|DAX Query duration in last interval. Supported only for Power BI Embedded Generation 1 resources.|No Dimensions|
-|QueryPoolBusyThreads|Yes|Query Pool Busy Threads|Count|Average|Number of busy threads in the query thread pool.|No Dimensions|
-|QueryPoolIdleThreads|Yes|Threads: Query pool idle threads|Count|Average|Number of idle threads for I/O jobs in the processing thread pool.|No Dimensions|
-|QueryPoolJobQueueLength|Yes|Query Pool Job Queue Length (Datasets) (Gen1)|Count|Average|Number of jobs in the queue of the query thread pool. Supported only for Power BI Embedded Generation 1 resources.|No Dimensions|
-|Quota|Yes|Memory: Quota|Bytes|Average|Current memory quota, in bytes. Memory quota is also known as a memory grant or memory reservation.|No Dimensions|
-|QuotaBlocked|Yes|Memory: Quota Blocked|Count|Average|Current number of quota requests that are blocked until other memory quotas are freed.|No Dimensions|
-|RowsConvertedPerSec|Yes|Processing: Rows converted per sec|CountPerSecond|Average|Rate of rows converted during processing.|No Dimensions|
-|RowsReadPerSec|Yes|Processing: Rows read per sec|CountPerSecond|Average|Rate of rows read from all relational databases.|No Dimensions|
-|RowsWrittenPerSec|Yes|Processing: Rows written per sec|CountPerSecond|Average|Rate of rows written during processing.|No Dimensions|
-|ShortParsingBusyThreads|Yes|Threads: Short parsing busy threads|Count|Average|Number of busy threads in the short parsing thread pool.|No Dimensions|
-|ShortParsingIdleThreads|Yes|Threads: Short parsing idle threads|Count|Average|Number of idle threads in the short parsing thread pool.|No Dimensions|
-|ShortParsingJobQueueLength|Yes|Threads: Short parsing job queue length|Count|Average|Number of jobs in the queue of the short parsing thread pool.|No Dimensions|
-|SuccessfullConnectionsPerSec|Yes|Successful Connections Per Sec|CountPerSecond|Average|Rate of successful connection completions.|No Dimensions|
-|TotalConnectionFailures|Yes|Total Connection Failures|Count|Average|Total failed connection attempts.|No Dimensions|
-|TotalConnectionRequests|Yes|Total Connection Requests|Count|Average|Total connection requests. These are arrivals.|No Dimensions|
-|VertiPaqNonpaged|Yes|Memory: VertiPaq Nonpaged|Bytes|Average|Bytes of memory locked in the working set for use by the in-memory engine.|No Dimensions|
-|VertiPaqPaged|Yes|Memory: VertiPaq Paged|Bytes|Average|Bytes of paged memory in use for in-memory data.|No Dimensions|
-|workload_memory_metric|Yes|Memory Per Workload (Gen1)|Bytes|Average|Memory Per Workload. Supported only for Power BI Embedded Generation 1 resources.|Workload|
-|workload_qpu_metric|Yes|QPU Per Workload (Gen1)|Count|Average|QPU Per Workload. Range for A1 is 0-20, A2 is 0-40, A3 is 0-40, A4 is 0-80, A5 is 0-160, A6 is 0-320. Supported only for Power BI Embedded Generation 1 resources.|Workload|
## microsoft.purview/accounts
This latest update adds a new column and reorders the metrics to be alphabetical.
|ThrottledSearchQueriesPercentage|Yes|Throttled search queries percentage|Percent|Average|Percentage of search queries that were throttled for the search service|No Dimensions|
+## microsoft.securitydetonation/chambers
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|CapacityUtilization|No|Capacity Utilization|Percent|Maximum|The percentage of the allocated capacity the resource is actively using.|Region|
+|CpuUtilization|No|CPU Utilization|Percent|Average|The percentage of the CPU that is being utilized across the resource.|Region|
+|CreateSubmissionApiResult|No|CreateSubmission Api Results|Count|Count|The total number of CreateSubmission API requests, with return code.|OperationName, ServiceTypeName, Region, HttpReturnCode|
+|PercentFreeDiskSpace|No|Available Disk Space|Percent|Average|The percent amount of available disk space across the resource.|Region|
+|SubmissionDuration|No|Submission Duration|MilliSeconds|Maximum|The submission duration (processing time), from creation to completion.|Region|
+|SubmissionsCompleted|No|Completed Submissions / Hr|Count|Maximum|The number of completed submissions / Hr.|Region|
+|SubmissionsFailed|No|Failed Submissions / Hr|Count|Maximum|The number of failed submissions / Hr.|Region|
+|SubmissionsOutstanding|No|Outstanding Submissions|Count|Average|The average number of outstanding submissions that are queued for processing.|Region|
+|SubmissionsSucceeded|No|Successful Submissions / Hr|Count|Maximum|The number of successful submissions / Hr.|Region|
++
## Microsoft.ServiceBus/Namespaces
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical.
|InboundTraffic|Yes|Inbound Traffic|Bytes|Total|The inbound traffic of service|No Dimensions|
|MessageCount|Yes|Message Count|Count|Total|The total amount of messages.|No Dimensions|
|OutboundTraffic|Yes|Outbound Traffic|Bytes|Total|The outbound traffic of service|No Dimensions|
+|ServerLoad|No|Server Load|Percent|Maximum|SignalR server load.|No Dimensions|
|SystemErrors|Yes|System Errors|Percent|Maximum|The percentage of system errors|No Dimensions|
|UserErrors|Yes|User Errors|Percent|Maximum|The percentage of user errors|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical.
|ConnectionQuotaUtilization|Yes|Connection Quota Utilization|Percent|Maximum|The percentage of connection connected relative to connection quota.|No Dimensions|
|InboundTraffic|Yes|Inbound Traffic|Bytes|Total|The traffic originating from outside to inside of the service. It is aggregated by adding all the bytes of the traffic.|No Dimensions|
|OutboundTraffic|Yes|Outbound Traffic|Bytes|Total|The traffic originating from inside to outside of the service. It is aggregated by adding all the bytes of the traffic.|No Dimensions|
+|ServerLoad|No|Server Load|Percent|Maximum|SignalR server load.|No Dimensions|
|TotalConnectionCount|Yes|Connection Count|Count|Maximum|The number of user connections established to the service. It is aggregated by adding all the online connections.|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical.
|cpu_percent|Yes|CPU percentage|Percent|Average|CPU percentage|No Dimensions|
|cpu_used|Yes|CPU used|Count|Average|CPU used. Applies to vCore-based databases.|No Dimensions|
|deadlock|Yes|Deadlocks|Count|Total|Deadlocks. Not applicable to data warehouses.|No Dimensions|
+|delta_num_of_bytes_read|Yes|Remote data reads|Bytes|Total|Remote data reads in bytes|No Dimensions|
+|delta_num_of_bytes_total|Yes|Total remote bytes read and written|Bytes|Total|Total remote bytes read and written by compute|No Dimensions|
+|delta_num_of_bytes_written|Yes|Remote log writes|Bytes|Total|Remote log writes in bytes|No Dimensions|
|diff_backup_size_bytes|Yes|Differential backup storage size|Bytes|Maximum|Cumulative differential backup storage size. Applies to vCore-based databases. Not applicable to Hyperscale databases.|No Dimensions|
|dtu_consumption_percent|Yes|DTU percentage|Percent|Average|DTU Percentage. Applies to DTU-based databases.|No Dimensions|
|dtu_limit|Yes|DTU Limit|Count|Average|DTU Limit. Applies to DTU-based databases.|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical.
|Availability|Yes|Availability|Percent|Average|The percentage of availability for the storage service or the specified API operation. Availability is calculated by taking the TotalBillableRequests value and dividing it by the number of applicable requests, including those that produced unexpected errors. All unexpected errors result in reduced availability for the storage service or the specified API operation.|GeoType, ApiName, Authentication|
|Egress|Yes|Egress|Bytes|Total|The amount of egress data. This number includes egress to external client from Azure Storage as well as egress within Azure. As a result, this number does not reflect billable egress.|GeoType, ApiName, Authentication|
|Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication|
-|SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The average end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication|
-|SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different type of response.|ResponseType, GeoType, ApiName, Authentication|
+|SuccessE2ELatency|Yes|Success E2E Latency|MilliSeconds|Average|The average end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication|
+|SuccessServerLatency|Yes|Success Server Latency|MilliSeconds|Average|The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different type of response.|ResponseType, GeoType, ApiName, Authentication, TransactionType|
|UsedCapacity|Yes|Used capacity|Bytes|Average|The amount of storage used by the storage account. For standard storage accounts, it's the sum of capacity used by blob, table, file, and queue. For premium storage accounts and Blob storage accounts, it is the same as BlobCapacity or FileCapacity.|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical.
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|ApiConnectionRequests|Yes|Requests|Count|Total|API Connection Requests|HttpStatusCode, ClientIPAddress|
+|Requests|No|Requests|Count|Total|API Connection Requests|HttpStatusCode, ClientIPAddress|
++
+## Microsoft.Web/containerapps
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|Replicas|Yes|Replica Count|Count|Maximum|Number of replicas count of container app|revisionName, deploymentName|
+|Requests|Yes|Requests|Count|Total|Requests processed|revisionName, podName, statusCodeCategory, statusCode|
+|RestartCount|Yes|Replica Restart Count|Count|Maximum|Restart count of container app replicas|revisionName, podName|
+|RxBytes|Yes|Network In Bytes|Bytes|Total|Network received bytes|revisionName, podName|
+|TxBytes|Yes|Network Out Bytes|Bytes|Total|Network transmitted bytes|revisionName, podName|
+|UsageNanoCores|Yes|CPU Usage Nanocores|NanoCores|Average|CPU consumed by the container app, in nano cores. 1,000,000,000 nano cores = 1 core|revisionName, podName|
+|WorkingSetBytes|Yes|Memory Working Set Bytes|Bytes|Average|Container App working set memory used in bytes.|revisionName, podName|
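As the UsageNanoCores description above states, 1,000,000,000 nanocores equal one core. A quick illustration of that arithmetic (the sample value is made up):

```csharp
using System;

// 1,000,000,000 nanocores = 1 core, per the UsageNanoCores metric description above.
double usageNanoCores = 250_000_000;             // hypothetical observed metric value
double cores = usageNanoCores / 1_000_000_000d;  // = 0.25 cores
Console.WriteLine($"{usageNanoCores} nanocores = {cores} cores");
```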
## Microsoft.Web/hostingEnvironments
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-categories.md
Title: Supported categories for Azure Monitor resource logs
description: Understand the supported services and event schemas for Azure Monitor resource logs.
Previously updated : 04/12/2022
Last updated : 06/01/2022
If you think something is missing, you can open a GitHub comment at the bottom of this article.
|PrivilegeUse|PrivilegeUse|No|
|SystemSecurity|SystemSecurity|No|
+
## microsoft.aadiam/tenants
|Category|Category Display Name|Costs To Export|
If you think something is missing, you can open a GitHub comment at the bottom of this article.
|Category|Category Display Name|Costs To Export|
||||
|AuditEvent|AuditEvent message log category.|No|
-|AuditEvent|AuditEvent message log category.|No|
-|ERR|Error message log category.|No|
-|ERR|Error message log category.|No|
-|INF|Informational message log category.|No|
-|INF|Informational message log category.|No|
|NotProcessed|Requests which could not be processed.|Yes|
|Operational|Operational message log category.|Yes|
-|WRN|Warning message log category.|Yes|
-|WRN|Warning message log category.|No|
## Microsoft.Automation/automationAccounts
If you think something is missing, you can open a GitHub comment at the bottom of this article.
|CallDiagnostics|Call Diagnostics Logs|Yes|
|CallSummary|Call Summary Logs|Yes|
|ChatOperational|Operational Chat Logs|No|
+|EmailSendMailOperational|Email Service Send Mail Logs|Yes|
+|EmailStatusUpdateOperational|Email Service Delivery Status Update Logs|Yes|
+|EmailUserEngagementOperational|Email Service User Engagement Logs|Yes|
+|NetworkTraversalDiagnostics|Network Traversal Relay Diagnostic Logs|Yes|
|NetworkTraversalOperational|Operational Network Traversal Logs|Yes|
|SMSOperational|Operational SMS Logs|No|
|Usage|Usage Records|No|
If you think something is missing, you can open a GitHub comment at the bottom of this article.
|Category|Category Display Name|Costs To Export|
||||
-|AgentHealthStatus|AgentHealthStatus|No|
|AgentHealthStatus|AgentHealthStatus|Yes|
|Checkpoint|Checkpoint|Yes|
-|Checkpoint|Checkpoint|No|
-|Connection|Connection|No|
|Connection|Connection|Yes|
|Error|Error|Yes|
-|Error|Error|No|
-|HostRegistration|HostRegistration|No|
|HostRegistration|HostRegistration|Yes|
|Management|Management|Yes|
-|Management|Management|No|
|NetworkData|Network Data Logs|Yes|
|SessionHostManagement|Session Host Management Activity Logs|Yes|
If you think something is missing, you can open a GitHub comment at the bottom of this article.
|AzurePolicyEvaluationDetails|Azure Policy Evaluation Details|Yes|
-## Microsoft.Kusto/Clusters
+## Microsoft.Kusto/clusters
|Category|Category Display Name|Costs To Export|
||||
|Command|Command|No|
-|FailedIngestion|Failed ingest operations|No|
+|FailedIngestion|Failed ingestion|No|
|IngestionBatching|Ingestion batching|No|
|Journal|Journal|Yes|
|Query|Query|No|
-|SucceededIngestion|Successful ingest operations|No|
+|SucceededIngestion|Succeeded ingestion|No|
|TableDetails|Table details|No|
|TableUsageStatistics|Table usage statistics|No|
-## Microsoft.Logic/integrationAccounts
+## microsoft.loadtestservice/loadtests
|Category|Category Display Name|Costs To Export|
||||
-|IntegrationAccountTrackingEvents|Integration Account track events|No|
+|OperationLogs|Azure Load Testing Operations|Yes|
## Microsoft.Logic/IntegrationAccounts
If you think something is missing, you can open a GitHub comment at the bottom of this article.
|Category|Category Display Name|Costs To Export|
||||
|AmlComputeClusterEvent|AmlComputeClusterEvent|No|
-|AmlComputeClusterEvent|AmlComputeClusterEvent|No|
|AmlComputeClusterNodeEvent|AmlComputeClusterNodeEvent|No|
|AmlComputeCpuGpuUtilization|AmlComputeCpuGpuUtilization|No|
-|AmlComputeCpuGpuUtilization|AmlComputeCpuGpuUtilization|No|
|AmlComputeJobEvent|AmlComputeJobEvent|No|
-|AmlComputeJobEvent|AmlComputeJobEvent|No|
-|AmlRunStatusChangedEvent|AmlRunStatusChangedEvent|No|
|AmlRunStatusChangedEvent|AmlRunStatusChangedEvent|No|
|ComputeInstanceEvent|ComputeInstanceEvent|Yes|
|DataLabelChangeEvent|DataLabelChangeEvent|Yes|
If you think something is missing, you can open a GitHub comment at the bottom of this article.
|Category|Category Display Name|Costs To Export|
||||
-|NSPInboundAccessAllowed|NSP Inbound Access Allowed.|Yes|
-|NSPInboundAccessDenied|NSP Inbound Access Denied.|Yes|
-|NSPOutboundAccessAllowed|NSP Outbound Access Allowed.|Yes|
-|NSPOutboundAccessDenied|NSP Outbound Access Denied.|Yes|
-|NSPOutboundAttempt|NSP Outbound Attempted.|Yes|
-|PrivateEndPointTraffic|Private Endpoint Traffic|Yes|
-|ResourceInboundAccessAllowed|Resource Inbound Access Allowed.|Yes|
-|ResourceInboundAccessDenied|Resource Inbound Access Denied|Yes|
-|ResourceOutboundAccessAllowed|Resource Outbound Access Allowed|Yes|
-|ResourceOutboundAccessDenied|Resource Outbound Access Denied|Yes|
+|NspIntraPerimeterInboundAllowed|Inbound access allowed within same perimeter.|Yes|
+|NspIntraPerimeterOutboundAllowed|Outbound attempted to same perimeter.|Yes|
+|NspPrivateInboundAllowed|Private endpoint traffic allowed.|Yes|
+|NspPublicInboundPerimeterRulesAllowed|Public inbound access allowed by NSP access rules.|Yes|
+|NspPublicInboundPerimeterRulesDenied|Public inbound access denied by NSP access rules.|Yes|
+|NspPublicInboundResourceRulesAllowed|Public inbound access allowed by PaaS resource rules.|Yes|
+|NspPublicInboundResourceRulesDenied|Public inbound access denied by PaaS resource rules.|Yes|
+|NspPublicOutboundPerimeterRulesAllowed|Public outbound access allowed by NSP access rules.|Yes|
+|NspPublicOutboundPerimeterRulesDenied|Public outbound access denied by NSP access rules.|Yes|
+|NspPublicOutboundResourceRulesAllowed|Public outbound access allowed by PaaS resource rules.|Yes|
+|NspPublicOutboundResourceRulesDenied|Public outbound access denied by PaaS resource rules|Yes|
## microsoft.network/p2svpngateways
If you think something is missing, you can open a GitHub comment at the bottom of this article.
|Category|Category Display Name|Costs To Export|
||||
|AirFlowTaskLogs|Air Flow Task Logs|Yes|
+|ElasticOperatorLogs|Elastic Operator Logs|Yes|
+|ElasticsearchLogs|Elasticsearch Logs|Yes|
## Microsoft.OpenLogisticsPlatform/Workspaces
If you think something is missing, you can open a GitHub comment at the bottom of this article.
|OperationLogs|Operation Logs|No|
+## Microsoft.Security/antiMalwareSettings
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ScanResults|AntimalwareScanResults|Yes|
++
## microsoft.securityinsights/settings
|Category|Category Display Name|Costs To Export|
azure-monitor Snapshot Collector Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-collector-release-notes.md
+
+ Title: Release Notes for Microsoft.ApplicationInsights.SnapshotCollector NuGet package - Application Insights
+description: Release notes for the Microsoft.ApplicationInsights.SnapshotCollector NuGet package used by the Application Insights Snapshot Debugger.
+ Last updated : 11/10/2020
+++
+# Release notes for Microsoft.ApplicationInsights.SnapshotCollector
+
+This article contains the release notes for the Microsoft.ApplicationInsights.SnapshotCollector NuGet package for .NET applications, which is used by the Application Insights Snapshot Debugger.
+
+[Learn](./snapshot-debugger.md) more about the Application Insights Snapshot Debugger for .NET applications.
+
+For bug reports and feedback, open an issue on GitHub at https://github.com/microsoft/ApplicationInsights-SnapshotCollector
++
+## Release notes
+
+## [1.4.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.3)
+A point release to address user-reported bugs.
+### Bug fixes
+- Fix [Hide the IDMS dependency from dependency tracker.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/17)
+- Fix [ArgumentException: telemetryProcessorType does not implement ITelemetryProcessor.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/19)
+<br>Snapshot Collector used via the SDK is not supported when the Interop feature is enabled. [See more unsupported scenarios.](./snapshot-debugger-troubleshoot.md#not-supported-scenarios)
+
+## [1.4.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.2)
+A point release to address a user-reported bug.
+### Bug fixes
+- Fix [ArgumentException: Delegates must be of the same type.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/16)
+
+## [1.4.1](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.1)
+A point release to revert a breaking change introduced in 1.4.0.
+### Bug fixes
+- Fix [Method not found in WebJobs](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/15)
+
+## [1.4.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.0)
+Addresses multiple improvements and adds support for Azure Active Directory (AAD) authentication for Application Insights ingestion.
+### Changes
+- Snapshot Collector package size reduced by 60%, from 10.34 MB to 4.11 MB.
+- Target netstandard2.0 only in Snapshot Collector.
+- Bump Application Insights SDK dependency to 2.15.0.
+- Add back MinidumpWithThreadInfo when writing dumps.
+- Add CompatibilityVersion to improve synchronization between Snapshot Collector agent and uploader on breaking changes.
+- Change SnapshotUploader LogFile naming algorithm to avoid excessive file I/O in App Service.
+- Add pid, role name, and process start time to uploaded blob metadata.
+- Use System.Diagnostics.Process where possible in Snapshot Collector and Snapshot Uploader.
+### New features
+- Add Azure Active Directory authentication to SnapshotCollector. Learn more about Azure AD authentication in Application Insights [here](../app/azure-ad-authentication.md).
+
+## [1.3.7.5](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.7.5)
+A point release to backport a fix from 1.4.0-pre.
+### Bug fixes
+- Fix [ObjectDisposedException on shutdown](https://github.com/microsoft/ApplicationInsights-dotnet/issues/2097).
+
+## [1.3.7.4](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.7.4)
+A point release to address a problem discovered in testing Azure App Service's codeless attach scenario.
+### Changes
+- The netcoreapp3.0 target now depends on Microsoft.ApplicationInsights.AspNetCore >= 2.1.1 (previously >= 2.1.2).
+
+## [1.3.7.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.7.3)
+A point release to address a couple of high-impact issues.
+### Bug fixes
+- Fixed PDB discovery in the wwwroot/bin folder, which was broken when we changed the symbol search algorithm in 1.3.6.
+- Fixed noisy ExtractWasCalledMultipleTimesException in telemetry.
+
+## [1.3.7](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.7)
+### Changes
+- The netcoreapp2.0 target of SnapshotCollector depends on Microsoft.ApplicationInsights.AspNetCore >= 2.1.1 (again). This reverts behavior to how it was before 1.3.5. We tried to upgrade it in 1.3.6, but it broke some Azure App Service scenarios.
+### New features
+- Snapshot Collector reads and parses the ConnectionString from the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable or from the TelemetryConfiguration. Primarily, this is used to set the endpoint for connecting to the Snapshot service. For more information, see the [Connection strings documentation](../app/sdk-connection-string.md).
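As a minimal sketch (not taken from the release notes), one way to supply that connection string is through the environment variable named above. The value below is a placeholder, and in most deployments the variable is set in the hosting environment (for example, App Service application settings) rather than in code:

```csharp
using System;

// Placeholder connection string; substitute the value from your Application Insights resource.
// Setting the variable in code only takes effect if it runs before the Application Insights
// SDK (and therefore Snapshot Collector) initializes.
Environment.SetEnvironmentVariable(
    "APPLICATIONINSIGHTS_CONNECTION_STRING",
    "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://example.applicationinsights.azure.com/");
```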
+### Bug fixes
+- Switched to using HttpClient for all targets except net45 because WebRequest was failing in some environments due to an incompatible SecurityProtocol (requires TLS 1.2).
+
+## [1.3.6](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.6)
+### Changes
+- SnapshotCollector now depends on Microsoft.ApplicationInsights >= 2.5.1 for all target frameworks. This may be a breaking change if your application depends on an older version of the Microsoft.ApplicationInsights SDK.
+- Remove support for TLS 1.0 and 1.1 in Snapshot Uploader.
+- The PDB scan period now defaults to 24 hours instead of 15 minutes. Configurable via PdbRescanInterval on SnapshotCollectorConfiguration.
+- PDB scans search top-level folders only, instead of recursively. This may be a breaking change if your symbols are in subfolders of the binary folder.
+### New features
+- Log rotation in SnapshotUploader to avoid filling the logs folder with old files.
+- Deoptimization support (via ReJIT on attach) for .NET Core 3.0 applications.
+- Add symbols to NuGet package.
+- Set additional metadata when uploading minidumps.
+- Added an Initialized property to SnapshotCollectorTelemetryProcessor. It's a CancellationToken, which will be canceled when the Snapshot Collector is completely initialized and connected to the service endpoint.
+- Snapshots can now be captured for exceptions in dynamically generated methods. For example, the compiled expression trees generated by Entity Framework queries.
+### Bug fixes
+- AmbiguousMatchException loading Snapshot Collector due to Status Monitor.
+- GetSnapshotCollector extension method now searches all TelemetrySinks.
+- Don't start the Snapshot Uploader on unsupported platforms.
+- Handle InvalidOperationException when deoptimizing dynamic methods (for example, Entity Framework)
+
+## [1.3.5](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.5)
+- Add support for sovereign clouds (older versions won't work in sovereign clouds).
+- Adding the Snapshot Collector is now easier by using AddSnapshotCollector(). More information can be found [here](./snapshot-debugger-app-service.md).
+- Use FISMA MD5 setting for verifying blob blocks. This avoids the default .NET MD5 crypto algorithm, which is unavailable when the OS is set to FIPS-compliant mode.
+- Ignore .NET Framework frames when deoptimizing function calls. This behavior can be controlled by the DeoptimizeIgnoredModules configuration setting.
+- Add `DeoptimizeMethodCount` configuration setting that allows deoptimization of more than one function call.
+
+## [1.3.4](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.4)
+- Allow structured Instrumentation Keys.
+- Increase SnapshotUploader robustness - continue startup even if old uploader logs can't be moved.
+- Re-enabled reporting additional telemetry when SnapshotUploader.exe exits immediately (was disabled in 1.3.3).
+- Simplify internal telemetry.
+- _Experimental feature_: Snappoint collection plans: Add "snapshotOnFirstOccurence". More information available [here](https://gist.github.com/alexaloni/5b4d069d17de0dabe384ea30e3f21dfe).
+
+## [1.3.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.3)
+- Fixed bug that was causing SnapshotUploader.exe to stop responding and not upload snapshots for .NET Core apps.
+
+## [1.3.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.2)
+- _Experimental feature_: Snappoint collection plans. More information available [here](https://gist.github.com/alexaloni/5b4d069d17de0dabe384ea30e3f21dfe).
+- SnapshotUploader.exe will exit when the runtime unloads the AppDomain from which SnapshotCollector is loaded, instead of waiting for the process to exit. This improves the collector reliability when hosted in IIS.
+- Add configuration to allow multiple SnapshotCollector instances that are using the same Instrumentation Key to share the same SnapshotUploader process: ShareUploaderProcess (defaults to `true`).
+- Report additional telemetry when SnapshotUploader.exe exits immediately.
+- Reduced the number of support files SnapshotUploader.exe needs to write to disk.
+
+## [1.3.1](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.1)
+- Remove support for collecting snapshots with the RtlCloneUserProcess API and only support PssCaptureSnapshots API.
+- Increase the default limit on how many snapshots can be captured in 10 minutes from 1 to 3.
+- Allow SnapshotUploader.exe to negotiate TLS 1.1 and 1.2.
+- Report additional telemetry when SnapshotUploader logs a warning or an error.
+- Stop taking snapshots when the backend service reports the daily quota was reached (50 snapshots per day).
+- Add an extra check in SnapshotUploader.exe to prevent two instances from running at the same time.
+
+## [1.3.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.0)
+### Changes
+- For applications targeting .NET Framework, Snapshot Collector now depends on Microsoft.ApplicationInsights version 2.3.0 or above.
+It used to be 2.2.0 or above.
+We believe this won't be an issue for most applications, but let us know if this change prevents you from using the latest Snapshot Collector.
+- Use exponential back-off delays in the Snapshot Uploader when retrying failed uploads.
+- Use ServerTelemetryChannel (if available) for more reliable reporting of telemetry.
+- Use 'SdkInternalOperationsMonitor' on the initial connection to the Snapshot Debugger service so that it's ignored by dependency tracking.
+- Improve telemetry around initial connection to the Snapshot Debugger service.
+- Report additional telemetry for:
+ - Azure App Service version.
+ - Azure compute instances.
+ - Containers.
+ - Azure Function app.
+### Bug fixes
+- When the problem counter reset interval is set to 24 days, interpret that as 24 hours.
+- Fixed a bug where the Snapshot Uploader would stop processing new snapshots if there was an exception while disposing a snapshot.
+
+## [1.2.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.2.3)
+- Fix strong-name signing with Snapshot Uploader binaries.
+
+## [1.2.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.2.2)
+### Changes
+- The files needed for SnapshotUploader(64).exe are now embedded as resources in the main DLL. That means the SnapshotCollectorFiles folder is no longer created, simplifying build and deployment and reducing clutter in Solution Explorer. Take care when upgrading to review the changes in your `.csproj` file. The `Microsoft.ApplicationInsights.SnapshotCollector.targets` file is no longer needed.
+- Telemetry is logged to your Application Insights resource even if ProvideAnonymousTelemetry is set to false. This is so we can implement a health check feature in the Azure portal. ProvideAnonymousTelemetry affects only the telemetry sent to Microsoft for product support and improvement.
+- When the TempFolder or ShadowCopyFolder are redirected to environment variables, keep the collector idle until those environment variables are set.
+- For applications that connect to the Internet via a proxy server, Snapshot Collector will now autodetect any proxy settings and pass them on to SnapshotUploader.exe.
+- Lower the priority of the SnapshotUploader process (where possible). This priority can be overridden via the IsLowPrioritySnapshotUploader option.
+- Added a GetSnapshotCollector extension method on TelemetryConfiguration for scenarios where you want to configure the Snapshot Collector programmatically.
+- Set the Application Insights SDK version (instead of the application version) in customer-facing telemetry.
+- Send the first heartbeat event after two minutes.
+### Bug fixes
+- Fix NullReferenceException when exceptions have null or immutable Data dictionaries.
+- In the uploader, retry PDB matching a few times if we get a sharing violation.
+- Fix duplicate telemetry when more than one thread calls into the telemetry pipeline at startup.
+
+## [1.2.1](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.2.1)
+### Changes
+- XML Doc comment files are now included in the NuGet package.
+- Added an ExcludeFromSnapshotting extension method on `System.Exception` for scenarios where you know you have a noisy exception and want to avoid creating snapshots for it.
+- Added an IsEnabledWhenProfiling configuration property, defaults to true. This is a change from previous versions where snapshot creation was temporarily disabled if the Application Insights Profiler was performing a detailed collection. The old behavior can be recovered by setting this property to false.
+### Bug fixes
+- Sign SnapshotUploader64.exe properly.
+- Protect against double-initialization of the telemetry processor.
+- Prevent double logging of telemetry in apps with multiple pipelines.
+- Fix a bug with the expiration time of a collection plan, which could prevent snapshots after 24 hours.
+
+## [1.2.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.2.0)
+The biggest change in this version (hence the move to a new minor version number) is a rewrite of the snapshot creation and handling pipeline. In previous versions, this functionality was implemented in native code (ProductionBreakpoints*.dll and SnapshotHolder*.exe). The new implementation is all managed code with P/Invokes. For this first version using the new pipeline, we haven't strayed far from the original behavior. The new implementation allows for better error reporting and sets us up for future improvements.
+
+### Other changes in this version
+- MinidumpUploader.exe has been renamed to SnapshotUploader.exe (or SnapshotUploader64.exe).
+- Added timing telemetry to DeOptimize/ReOptimize requests.
+- Added gzip compression for minidump uploads.
+- Fixed a problem where PDBs were locked preventing site upgrade.
+- Log the original folder name (SnapshotCollectorFiles) when shadow-copying.
+- Adjust memory limits for 64-bit processes to prevent site restarts due to OOM.
+- Fix an issue where snapshots were still collected even after disabling.
+- Log heartbeat events to customer's AI resource.
+- Improve snapshot speed by removing "Source" from Problem ID.
+
+## [1.1.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.1.2)
+### Changes
+Augmented usage telemetry
+- Detect and report .NET version and OS
+- Detect and report additional Azure Environments (Cloud Service, Service Fabric)
+- Record and report exception metrics (number of 1st chance exceptions and number of TrackException calls) in Heartbeat telemetry.
+### Bug fixes
+- Correct handling of SqlException where the inner exception (Win32Exception) isn't thrown.
+- Trim trailing spaces on symbol folders, which caused an incorrect parse of command-line arguments to the MinidumpUploader.
+- Prevent infinite retry of failed connections to the Snapshot Debugger agent's endpoint.
+
+## [1.1.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.1.0)
+### Changes
+- Added host memory protection. This feature reduces the impact on the host machine's memory.
+- Improve the Azure portal snapshot viewing experience.
azure-monitor Snapshot Debugger App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-app-service.md
+
+Title: Enable Snapshot Debugger for .NET apps in Azure App Service | Microsoft Docs
+description: Enable Snapshot Debugger for .NET apps in Azure App Service
+Last updated: 03/26/2019
+# Enable Snapshot Debugger for .NET apps in Azure App Service
+
+Snapshot Debugger currently supports ASP.NET and ASP.NET Core apps that are running on Azure App Service on Windows service plans.
+
+We recommend that you run your application on the Basic service tier, or higher, when using Snapshot Debugger.
+
+For most applications, the Free and Shared service tiers don't have enough memory or disk space to save snapshots.
+
+## <a id="installation"></a> Enable Snapshot Debugger
+To enable Snapshot Debugger for an app, follow the instructions below.
+
+If you're running a different type of Azure service, here are instructions for enabling Snapshot Debugger on other supported platforms:
+* [Azure Function](snapshot-debugger-function-app.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Cloud Services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Service Fabric services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Virtual Machines and virtual machine scale sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [On-premises virtual or physical machines](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+
+> [!NOTE]
+> If you're using a preview version of .NET Core, or your application references Application Insights SDK, directly or indirectly via a dependent assembly, follow the instructions for [Enable Snapshot Debugger for other environments](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) to include the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package with the application, and then complete the rest of the instructions below.
+>
+> Codeless installation of Application Insights Snapshot Debugger follows the .NET Core support policy.
+> For more information about supported runtimes, see [.NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
+
+Snapshot Debugger is pre-installed as part of the App Services runtime, but you need to turn it on to get snapshots for your App Service app.
+
+Once you've deployed an app, follow the steps below to enable the snapshot debugger:
+
+1. Navigate to the Azure control panel for your App Service.
+2. Go to the **Settings > Application Insights** page.
+
+ ![Enable App Insights on App Services portal](./media/snapshot-debugger/application-insights-app-services.png)
+
+3. Either follow the instructions on the page to create a new resource or select an existing App Insights resource to monitor your app. Also make sure both switches for Snapshot Debugger are **On**.
+
+ ![Add App Insights site extension][Enablement UI]
+
+4. Snapshot Debugger is now enabled using an App Services App Setting.
+
+ ![App Setting for Snapshot Debugger][snapshot-debugger-app-setting]
+
+## Enable Snapshot Debugger for other clouds
+
+Currently, the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide); for these regions, override the endpoint through the Application Insights connection string.
+
+|Connection String Property | US Government Cloud | China Cloud |
+|||-|
+|SnapshotEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
+
+For more information about other connection overrides, see [Application Insights documentation](../app/sdk-connection-string.md?tabs=net#connection-string-with-explicit-endpoint-overrides).
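+
+As a hedged illustration (not part of the original article), the following minimal C# sketch shows a connection string that carries the `SnapshotEndpoint` override for the US Government cloud. The instrumentation key is a placeholder, and in App Service the same string is typically supplied through the `APPLICATIONINSIGHTS_CONNECTION_STRING` app setting rather than in code.
+
+```csharp
+using Microsoft.ApplicationInsights;
+using Microsoft.ApplicationInsights.Extensibility;
+
+// Placeholder values; a real connection string also carries the
+// region-specific IngestionEndpoint for your Application Insights resource.
+var configuration = TelemetryConfiguration.CreateDefault();
+configuration.ConnectionString =
+    "InstrumentationKey=00000000-0000-0000-0000-000000000000;" +
+    "SnapshotEndpoint=https://snapshot.monitor.azure.us/";
+
+var telemetryClient = new TelemetryClient(configuration);
+```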
+
+## Enable Azure Active Directory authentication for snapshot ingestion
+
+Application Insights Snapshot Debugger supports Azure AD authentication for snapshot ingestion. For all snapshots of your application to be ingested, your application must be authenticated and provide the required application settings to the Snapshot Debugger agent.
+
+Currently, Snapshot Debugger supports Azure AD authentication only when you reference and configure Azure AD by using the Application Insights SDK in your application.
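+
+As a hedged sketch (not from the original article), the following shows one common way to wire up the Application Insights SDK for Azure AD-authenticated ingestion. It assumes Microsoft.ApplicationInsights 2.18 or later and the Azure.Identity package, and the connection string is a placeholder.
+
+```csharp
+using Azure.Identity;
+using Microsoft.ApplicationInsights.Extensibility;
+
+// Placeholder connection string; use the one from your Application Insights resource.
+var configuration = TelemetryConfiguration.CreateDefault();
+configuration.ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000";
+
+// Authenticate ingestion with the App Service managed identity created in the steps below.
+configuration.SetAzureTokenCredential(new ManagedIdentityCredential());
+```
+
+For a user-assigned identity, the identity's client ID would be passed to `ManagedIdentityCredential` instead; the exact wiring depends on your SDK version.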
+
+The following steps are required to enable Azure AD authentication for snapshot ingestion:
+1. Create and add the managed identity you want to use to authenticate against your Application Insights resource to your App Service.
+
+ a. For System-Assigned Managed identity, see the following [documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-system-assigned-identity)
+
+ b. For User-Assigned Managed identity, see the following [documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-user-assigned-identity)
+
+2. Configure and enable Azure AD in your Application Insights resource. For more information, see the following [documentation](../app/azure-ad-authentication.md?tabs=net#configuring-and-enabling-azure-ad-based-authentication)
+3. Add the following application setting, which tells the Snapshot Debugger agent which managed identity to use:
+
+For System-Assigned Identity:
+
+|App Setting | Value |
+||-|
+|APPLICATIONINSIGHTS_AUTHENTICATION_STRING | Authorization=AAD |
+
+For User-Assigned Identity:
+
+|App Setting | Value |
+||-|
+|APPLICATIONINSIGHTS_AUTHENTICATION_STRING | Authorization=AAD;ClientId={Client id of the User-Assigned Identity} |
+
+## Disable Snapshot Debugger
+
+Follow the same steps as for **Enable Snapshot Debugger**, but set both switches for Snapshot Debugger to **Off**.
+
+We recommend you have Snapshot Debugger enabled on all your apps to ease diagnostics of application exceptions.
+
+## Azure Resource Manager template
+
+For an Azure App Service, you can set app settings within the Azure Resource Manager template to enable Snapshot Debugger and Profiler, as shown in the following template snippet:
+
+```json
+{
+ "apiVersion": "2015-08-01",
+ "name": "[parameters('webSiteName')]",
+ "type": "Microsoft.Web/sites",
+ "location": "[resourceGroup().location]",
+ "dependsOn": [
+ "[variables('hostingPlanName')]"
+ ],
+ "tags": {
+ "[concat('hidden-related:', resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName')))]": "empty",
+ "displayName": "Website"
+ },
+ "properties": {
+ "name": "[parameters('webSiteName')]",
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]"
+ },
+ "resources": [
+ {
+ "apiVersion": "2015-08-01",
+ "name": "appsettings",
+ "type": "config",
+ "dependsOn": [
+ "[parameters('webSiteName')]",
+ "[concat('AppInsights', parameters('webSiteName'))]"
+ ],
+ "properties": {
+ "APPINSIGHTS_INSTRUMENTATIONKEY": "[reference(resourceId('Microsoft.Insights/components', concat('AppInsights', parameters('webSiteName'))), '2014-04-01').InstrumentationKey]",
+ "APPINSIGHTS_PROFILERFEATURE_VERSION": "1.0.0",
+ "APPINSIGHTS_SNAPSHOTFEATURE_VERSION": "1.0.0",
+ "DiagnosticServices_EXTENSION_VERSION": "~3",
+ "ApplicationInsightsAgent_EXTENSION_VERSION": "~2"
+ }
+ }
+ ]
+},
+```
+
+## Not Supported Scenarios
+The following scenarios aren't supported by Snapshot Collector:
+
+|Scenario | Side Effects | Recommendation |
+||--|-|
+|When using the Snapshot Collector SDK in your application directly (.csproj) and you have enabled the advanced option "Interop".| The local Application Insights SDK (including Snapshot Collector telemetry) will be lost, so no snapshots will be available.<br /><br />Your application could crash at startup with `System.ArgumentException: telemetryProcessorType does not implement ITelemetryProcessor.`<br /><br />For more information about the Application Insights "Interop" feature, see the [documentation](../app/azure-web-apps-net-core.md#troubleshooting). | If you're using the advanced option "Interop", use the codeless Snapshot Collector injection (enabled through the Azure portal). |
+
+## Next steps
+
+- Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.
+- See [snapshots](snapshot-debugger.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.
+- For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md?toc=/azure/azure-monitor/toc.json).
+
+[Enablement UI]: ./media/snapshot-debugger/enablement-ui.png
+[snapshot-debugger-app-setting]:./media/snapshot-debugger/snapshot-debugger-app-setting.png
azure-monitor Snapshot Debugger Function App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-function-app.md
+
+Title: Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions | Microsoft Docs
+description: Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions
+Last updated: 12/18/2020
+# Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions
+
+Snapshot Debugger currently works for ASP.NET and ASP.NET Core apps that are running on Azure Functions on Windows Service Plans.
+
+We recommend you run your application on the Basic service tier or higher when using Snapshot Debugger.
+
+For most applications, the Free and Shared service tiers don't have enough memory or disk space to save snapshots.
+
+## Prerequisites
+
+* [Enable Application Insights monitoring in your Function App](../../azure-functions/configure-monitoring.md#add-to-an-existing-function-app)
+
+## Enable Snapshot Debugger
+
+If you're running a different type of Azure service, here are instructions for enabling Snapshot Debugger on other supported platforms:
+* [Azure App Service](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Cloud Services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Service Fabric services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Virtual Machines and virtual machine scale sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [On-premises virtual or physical machines](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+
+To enable Snapshot Debugger in your Function app, update your `host.json` file by adding the `snapshotConfiguration` property as shown below, and then redeploy your function.
+
+```json
+{
+ "version": "2.0",
+ "logging": {
+ "applicationInsights": {
+ "snapshotConfiguration": {
+ "isEnabled": true
+ }
+ }
+ }
+}
+```
+
+Snapshot Debugger is pre-installed as part of the Azure Functions runtime, but it's disabled by default.
+
+Because Snapshot Debugger is included in the Azure Functions runtime, you don't need to add extra NuGet packages or application settings.
+
+For reference, here's how the `.csproj`, `{Your}Function.cs`, and `host.json` files look for a simple .NET Core function app after Snapshot Debugger is enabled.
+
+Project csproj
+
+```xml
+<Project Sdk="Microsoft.NET.Sdk">
+<PropertyGroup>
+ <TargetFramework>netcoreapp2.1</TargetFramework>
+ <AzureFunctionsVersion>v2</AzureFunctionsVersion>
+</PropertyGroup>
+<ItemGroup>
+ <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="1.0.31" />
+</ItemGroup>
+<ItemGroup>
+ <None Update="host.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ <None Update="local.settings.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ <CopyToPublishDirectory>Never</CopyToPublishDirectory>
+ </None>
+</ItemGroup>
+</Project>
+```
+
+Function class
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+using Microsoft.AspNetCore.Http;
+using Microsoft.Extensions.Logging;
+
+namespace SnapshotCollectorAzureFunction
+{
+ public static class ExceptionFunction
+ {
+ [FunctionName("ExceptionFunction")]
+ public static Task<IActionResult> Run(
+ [HttpTrigger(AuthorizationLevel.Function, "get", Route = null)] HttpRequest req,
+ ILogger log)
+ {
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ throw new NotImplementedException("Dummy");
+ }
+ }
+}
+```
+
+Host file
+
+```json
+{
+ "version": "2.0",
+ "logging": {
+ "applicationInsights": {
+ "samplingExcludedTypes": "Request",
+ "samplingSettings": {
+ "isEnabled": true
+ },
+ "snapshotConfiguration": {
+ "isEnabled": true
+ }
+ }
+ }
+}
+```
+
+## Enable Snapshot Debugger for other clouds
+
+Currently the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide).
+
+Below is an example of the `host.json` updated with the US Government Cloud agent endpoint:
+```json
+{
+ "version": "2.0",
+ "logging": {
+ "applicationInsights": {
+ "samplingExcludedTypes": "Request",
+ "samplingSettings": {
+ "isEnabled": true
+ },
+ "snapshotConfiguration": {
+ "isEnabled": true,
+ "agentEndpoint": "https://snapshot.monitor.azure.us"
+ }
+ }
+ }
+}
+```
+
+Below are the supported overrides of the Snapshot Debugger agent endpoint:
+
+|Property | US Government Cloud | China Cloud |
+|||-|
+|AgentEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
+
+## Disable Snapshot Debugger
+
+To disable Snapshot Debugger in your Function app, update your `host.json` file by setting the `snapshotConfiguration.isEnabled` property to `false`.
+
+```json
+{
+ "version": "2.0",
+ "logging": {
+ "applicationInsights": {
+ "snapshotConfiguration": {
+ "isEnabled": false
+ }
+ }
+ }
+}
+```
+
+We recommend you have Snapshot Debugger enabled on all your apps to ease diagnostics of application exceptions.
+
+## Next steps
+
+- Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.
+- [View snapshots](snapshot-debugger.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.
+- Customize Snapshot Debugger configuration based on your use-case on your Function app. For more info, see [snapshot configuration in host.json](../../azure-functions/functions-host-json.md#applicationinsightssnapshotconfiguration).
+- For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md?toc=/azure/azure-monitor/toc.json).
azure-monitor Snapshot Debugger Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-troubleshoot.md
+
+Title: Troubleshoot Azure Application Insights Snapshot Debugger
+description: This article presents troubleshooting steps and information to help developers enable and use Application Insights Snapshot Debugger.
+Last updated: 03/07/2019
+# <a id="troubleshooting"></a> Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots
+If you enabled Application Insights Snapshot Debugger for your application, but aren't seeing snapshots for exceptions, you can use these instructions to troubleshoot.
+
+There can be many different reasons why snapshots aren't generated. You can start by running the snapshot health check to identify some of the possible common causes.
+
+## Not Supported Scenarios
+The following scenarios aren't supported by Snapshot Collector:
+
+|Scenario | Side Effects | Recommendation |
+||--|-|
+|When using the Snapshot Collector SDK in your application directly (.csproj) and you have enabled the advanced option "Interop".| The local Application Insights SDK (including Snapshot Collector telemetry) will be lost, so no snapshots will be available.<br /><br />Your application could crash at startup with `System.ArgumentException: telemetryProcessorType does not implement ITelemetryProcessor.`<br /><br />For more information about the Application Insights "Interop" feature, see the [documentation](../app/azure-web-apps-net-core.md#troubleshooting). | If you're using the advanced option "Interop", use the codeless Snapshot Collector injection (enabled through the Azure portal). |
+
+## Make sure you're using the appropriate Snapshot Debugger Endpoint
+
+Currently the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide).
+
+For App Service and applications using the Application Insights SDK, you have to update the connection string using the supported overrides for Snapshot Debugger as defined below:
+
+|Connection String Property | US Government Cloud | China Cloud |
+|||-|
+|SnapshotEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
+
+For more information about other connection overrides, see [Application Insights documentation](../app/sdk-connection-string.md?tabs=net#connection-string-with-explicit-endpoint-overrides).
+
+For Function App, you have to update the `host.json` using the supported overrides below:
+
+|Property | US Government Cloud | China Cloud |
+|||-|
+|AgentEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
+
+Below is an example of the `host.json` updated with the US Government Cloud agent endpoint:
+```json
+{
+ "version": "2.0",
+ "logging": {
+ "applicationInsights": {
+ "samplingExcludedTypes": "Request",
+ "samplingSettings": {
+ "isEnabled": true
+ },
+ "snapshotConfiguration": {
+ "isEnabled": true,
+ "agentEndpoint": "https://snapshot.monitor.azure.us"
+ }
+ }
+ }
+}
+```
+
+## Use the snapshot health check
+Several common problems can cause the Open Debug Snapshot option not to show up: using an outdated Snapshot Collector, reaching the daily upload limit, or the snapshot simply taking a long time to upload. Use the Snapshot Health Check to troubleshoot common problems.
+
+There's a link in the exception pane of the end-to-end trace view that takes you to the Snapshot Health Check.
+
+![Enter snapshot health check](./media/snapshot-debugger/enter-snapshot-health-check.png)
+
+The interactive, chat-like interface looks for common problems and guides you to fix them.
+
+![Health Check](./media/snapshot-debugger/health-check.png)
+
+If that doesn't solve the problem, then refer to the following manual troubleshooting steps.
+
+## Verify the instrumentation key
+
+Make sure you're using the correct instrumentation key in your published application. Usually, the instrumentation key is read from the ApplicationInsights.config file. Verify the value is the same as the instrumentation key for the Application Insights resource that you see in the portal.
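+
+As an illustrative, hedged sketch (not part of the original article), you can log the instrumentation key the running application actually resolved and compare it with the key shown on the Application Insights resource in the portal. Note that `TelemetryConfiguration.Active` is marked obsolete in newer SDK versions.
+
+```csharp
+using System.Diagnostics;
+using Microsoft.ApplicationInsights.Extensibility;
+
+// TelemetryConfiguration.Active reflects the configuration loaded at runtime
+// (for classic ASP.NET apps this is typically read from ApplicationInsights.config).
+var activeKey = TelemetryConfiguration.Active.InstrumentationKey;
+Trace.TraceInformation($"Active Application Insights instrumentation key: {activeKey}");
+```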
+
+## <a id="SSL"></a>Check TLS/SSL client settings (ASP.NET)
+
+If you have an ASP.NET application that is hosted in Azure App Service or in IIS on a virtual machine, your application could fail to connect to the Snapshot Debugger service because of a missing SSL security protocol.
+
+[The Snapshot Debugger endpoint requires TLS version 1.2](snapshot-debugger-upgrade.md?toc=/azure/azure-monitor/toc.json). The set of SSL security protocols is one of the quirks enabled by the httpRuntime targetFramework value in the system.web section of web.config.
+If the httpRuntime targetFramework is 4.5.2 or lower, then TLS 1.2 isn't included by default.
+
+> [!NOTE]
+> The httpRuntime targetFramework value is independent of the target framework used when building your application.
+
+To check the setting, open your web.config file and find the system.web section. Ensure that the `targetFramework` for `httpRuntime` is set to 4.6 or above.
+
+ ```xml
+ <system.web>
+ ...
+ <httpRuntime targetFramework="4.7.2" />
+ ...
+ </system.web>
+ ```
+
+> [!NOTE]
+> Modifying the httpRuntime targetFramework value changes the runtime quirks applied to your application and can cause other, subtle behavior changes. Be sure to test your application thoroughly after making this change. For a full list of compatibility changes, see [Retargeting changes](/dotnet/framework/migration-guide/application-compatibility#retargeting-changes).
+
+> [!NOTE]
+> If the targetFramework is 4.7 or above then Windows determines the available protocols. In Azure App Service, TLS 1.2 is available. However, if you are using your own virtual machine, you may need to enable TLS 1.2 in the OS.
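+
+A commonly used alternative, shown here only as a hedged sketch rather than guidance from this article, is to opt the process in to TLS 1.2 programmatically at startup when changing `httpRuntime targetFramework` isn't practical. The class and method names below follow the usual ASP.NET Global.asax pattern.
+
+```csharp
+using System.Net;
+using System.Web;
+
+public class MvcApplication : HttpApplication
+{
+    protected void Application_Start()
+    {
+        // Enable TLS 1.2 process-wide, in addition to whatever the runtime quirks allow.
+        ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12;
+
+        // TODO: the rest of your application startup code
+    }
+}
+```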
+
+## Preview Versions of .NET Core
+If you're using a preview version of .NET Core or your application references Application Insights SDK, directly or indirectly via a dependent assembly, follow the instructions for [Enable Snapshot Debugger for other environments](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json).
+
+## Check the Diagnostic Services site extension's Status Page
+If Snapshot Debugger was enabled through the [Application Insights pane](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json) in the portal, it was enabled by the Diagnostic Services site extension.
+
+> [!NOTE]
+> Codeless installation of Application Insights Snapshot Debugger follows the .NET Core support policy.
+> For more information about supported runtimes, see [.NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
+
+You can check the Status Page of this extension by going to the following URL:
+`https://{site-name}.scm.azurewebsites.net/DiagnosticServices`
+
+> [!NOTE]
+> The domain of the Status Page link will vary depending on the cloud.
+This domain will be the same as the Kudu management site for App Service.
+
+This Status Page shows the installation state of the Profiler and Snapshot Collector agents. If an unexpected error occurred, it's displayed along with steps to fix it.
+
+You can use the Kudu management site for App Service to get the base URL of this Status Page:
+1. Open your App Service application in the Azure portal.
+2. Select **Advanced Tools**, or search for **Kudu**.
+3. Select **Go**.
+4. Once you're on the Kudu management site, **append `/DiagnosticServices` to the URL and press Enter**.
+ It will end like this: `https://<kudu-url>/DiagnosticServices`
+
+## Upgrade to the latest version of the NuGet package
+Based on how Snapshot Debugger was enabled, see the following options:
+
+* If Snapshot Debugger was enabled through the [Application Insights pane in the portal](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json), then your application should already be running the latest NuGet package.
+
+* If Snapshot Debugger was enabled by including the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package, use Visual Studio's NuGet Package Manager to make sure you're using the latest version of Microsoft.ApplicationInsights.SnapshotCollector.
+
+For the latest updates and bug fixes [consult the release notes](./snapshot-collector-release-notes.md).
+
+## Check the uploader logs
+
+After a snapshot is created, a minidump file (.dmp) is created on disk. A separate uploader process creates that minidump file and uploads it, along with any associated PDBs, to Application Insights Snapshot Debugger storage. After the minidump has uploaded successfully, it's deleted from disk. The log files for the uploader process are kept on disk. In an App Service environment, you can find these logs in `D:\Home\LogFiles`. Use the Kudu management site for App Service to find these log files.
+
+1. Open your App Service application in the Azure portal.
+2. Select **Advanced Tools**, or search for **Kudu**.
+3. Select **Go**.
+4. In the **Debug console** drop-down list box, select **CMD**.
+5. Select **LogFiles**.
+
+You should see at least one file with a name that begins with `Uploader_` or `SnapshotUploader_` and a `.log` extension. Select the appropriate icon to download any log files or open them in a browser.
+The file name includes a unique suffix that identifies the App Service instance. If your App Service instance is hosted on more than one machine, there are separate log files for each machine. When the uploader detects a new minidump file, it's recorded in the log file. Here's an example of a successful snapshot and upload:
+
+```
+SnapshotUploader.exe Information: 0 : Received Fork request ID 139e411a23934dc0b9ea08a626db16c5 from process 6368 (Low pri)
+ DateTime=2018-03-09T01:42:41.8571711Z
+SnapshotUploader.exe Information: 0 : Creating minidump from Fork request ID 139e411a23934dc0b9ea08a626db16c5 from process 6368 (Low pri)
+ DateTime=2018-03-09T01:42:41.8571711Z
+SnapshotUploader.exe Information: 0 : Dump placeholder file created: 139e411a23934dc0b9ea08a626db16c5.dm_
+ DateTime=2018-03-09T01:42:41.8728496Z
+SnapshotUploader.exe Information: 0 : Dump available 139e411a23934dc0b9ea08a626db16c5.dmp
+ DateTime=2018-03-09T01:42:45.7525022Z
+SnapshotUploader.exe Information: 0 : Successfully wrote minidump to D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\139e411a23934dc0b9ea08a626db16c5.dmp
+ DateTime=2018-03-09T01:42:45.7681360Z
+SnapshotUploader.exe Information: 0 : Uploading D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\139e411a23934dc0b9ea08a626db16c5.dmp, 214.42 MB (uncompressed)
+ DateTime=2018-03-09T01:42:45.7681360Z
+SnapshotUploader.exe Information: 0 : Upload successful. Compressed size 86.56 MB
+ DateTime=2018-03-09T01:42:59.6184651Z
+SnapshotUploader.exe Information: 0 : Extracting PDB info from D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\139e411a23934dc0b9ea08a626db16c5.dmp.
+ DateTime=2018-03-09T01:42:59.6184651Z
+SnapshotUploader.exe Information: 0 : Matched 2 PDB(s) with local files.
+ DateTime=2018-03-09T01:42:59.6809606Z
+SnapshotUploader.exe Information: 0 : Stamp does not want any of our matched PDBs.
+ DateTime=2018-03-09T01:42:59.8059929Z
+SnapshotUploader.exe Information: 0 : Deleted D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\139e411a23934dc0b9ea08a626db16c5.dmp
+ DateTime=2018-03-09T01:42:59.8530649Z
+```
+
+> [!NOTE]
+> The example above is from version 1.2.0 of the Microsoft.ApplicationInsights.SnapshotCollector NuGet package. In earlier versions, the uploader process is called `MinidumpUploader.exe` and the log is less detailed.
+
+In the previous example, the instrumentation key is `c12a605e73c44346a984e00000000000`. This value should match the instrumentation key for your application.
+The minidump is associated with a snapshot with the ID `139e411a23934dc0b9ea08a626db16c5`. You can use this ID later to locate the associated exception record in Application Insights Analytics.
+
+The uploader scans for new PDBs about once every 15 minutes. Here's an example:
+
+```
+SnapshotUploader.exe Information: 0 : PDB rescan requested.
+ DateTime=2018-03-09T01:47:19.4457768Z
+SnapshotUploader.exe Information: 0 : Scanning D:\home\site\wwwroot for local PDBs.
+ DateTime=2018-03-09T01:47:19.4457768Z
+SnapshotUploader.exe Information: 0 : Local PDB scan complete. Found 2 PDB(s).
+ DateTime=2018-03-09T01:47:19.4614027Z
+SnapshotUploader.exe Information: 0 : Deleted PDB scan marker : D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\6368.pdbscan
+ DateTime=2018-03-09T01:47:19.4614027Z
+```
+
+For applications that _aren't_ hosted in App Service, the uploader logs are in the same folder as the minidumps: `%TEMP%\Dumps\<ikey>` (where `<ikey>` is your instrumentation key).
+
+## Troubleshooting Cloud Services
+In Cloud Services, the default temporary folder could be too small to hold the minidump files, leading to lost snapshots.
+
+The space needed depends on the total working set of your application and the number of concurrent snapshots.
+
+The working set of a 32-bit ASP.NET web role is typically between 200 MB and 500 MB. Allow for at least two concurrent snapshots.
+
+For example, if your application uses 1 GB of total working set, you should make sure there is at least 2 GB of disk space to store snapshots.
+
+Follow these steps to configure your Cloud Service role with a dedicated local resource for snapshots.
+
+1. Add a new local resource to your Cloud Service by editing the Cloud Service definition (.csdef) file. The following example defines a resource called `SnapshotStore` with a size of 5 GB.
+ ```xml
+ <LocalResources>
+ <LocalStorage name="SnapshotStore" cleanOnRoleRecycle="false" sizeInMB="5120" />
+ </LocalResources>
+ ```
+
+2. Modify your role's startup code to add an environment variable that points to the `SnapshotStore` local resource. For Worker Roles, the code should be added to your role's `OnStart` method:
+ ```csharp
+ public override bool OnStart()
+ {
+ Environment.SetEnvironmentVariable("SNAPSHOTSTORE", RoleEnvironment.GetLocalResource("SnapshotStore").RootPath);
+ return base.OnStart();
+ }
+ ```
+ For Web Roles (ASP.NET), the code should be added to your web application's `Application_Start` method:
+ ```csharp
+ using Microsoft.WindowsAzure.ServiceRuntime;
+ using System;
+
+ namespace MyWebRoleApp
+ {
+ public class MyMvcApplication : System.Web.HttpApplication
+ {
+ protected void Application_Start()
+ {
+ Environment.SetEnvironmentVariable("SNAPSHOTSTORE", RoleEnvironment.GetLocalResource("SnapshotStore").RootPath);
+ // TODO: The rest of your application startup code
+ }
+ }
+ }
+ ```
+
+3. Update your role's ApplicationInsights.config file to override the temporary folder location used by `SnapshotCollector`
+ ```xml
+ <TelemetryProcessors>
+ <Add Type="Microsoft.ApplicationInsights.SnapshotCollector.SnapshotCollectorTelemetryProcessor, Microsoft.ApplicationInsights.SnapshotCollector">
+ <!-- Use the SnapshotStore local resource for snapshots -->
+ <TempFolder>%SNAPSHOTSTORE%</TempFolder>
+ <!-- Other SnapshotCollector configuration options -->
+ </Add>
+ </TelemetryProcessors>
+ ```
+
+## Overriding the Shadow Copy folder
+
+When the Snapshot Collector starts up, it tries to find a folder on disk that is suitable for running the Snapshot Uploader process. The chosen folder is known as the Shadow Copy folder.
+
+The Snapshot Collector checks a few well-known locations, making sure it has permissions to copy the Snapshot Uploader binaries. The following environment variables are used:
+- Fabric_Folder_App_Temp
+- LOCALAPPDATA
+- APPDATA
+- TEMP
+
+If a suitable folder can't be found, Snapshot Collector reports an error saying _"Couldn't find a suitable shadow copy folder."_
+
+If the copy fails, Snapshot Collector reports a `ShadowCopyFailed` error.
+
+If the uploader can't be launched, Snapshot Collector reports an `UploaderCannotStartFromShadowCopy` error. The body of the message often contains `System.UnauthorizedAccessException`. This error usually occurs because the application is running under an account with reduced permissions. The account has permission to write to the shadow copy folder, but it doesn't have permission to execute code.
+
+Since these errors usually happen during startup, they'll usually be followed by an `ExceptionDuringConnect` error saying _"Uploader failed to start."_
+
+To work around these errors, you can specify the shadow copy folder manually via the `ShadowCopyFolder` configuration option. For example, using ApplicationInsights.config:
+
+ ```xml
+ <TelemetryProcessors>
+ <Add Type="Microsoft.ApplicationInsights.SnapshotCollector.SnapshotCollectorTelemetryProcessor, Microsoft.ApplicationInsights.SnapshotCollector">
+ <!-- Override the default shadow copy folder. -->
+ <ShadowCopyFolder>D:\SnapshotUploader</ShadowCopyFolder>
+ <!-- Other SnapshotCollector configuration options -->
+ </Add>
+ </TelemetryProcessors>
+ ```
+
+Or, if you're using appsettings.json with a .NET Core application:
+
+ ```json
+ {
+ "ApplicationInsights": {
+ "InstrumentationKey": "<your instrumentation key>"
+ },
+ "SnapshotCollectorConfiguration": {
+ "ShadowCopyFolder": "D:\\SnapshotUploader"
+ }
+ }
+ ```
+
+## Use Application Insights search to find exceptions with snapshots
+
+When a snapshot is created, the throwing exception is tagged with a snapshot ID. That snapshot ID is included as a custom property when the exception is reported to Application Insights. Using **Search** in Application Insights, you can find all records with the `ai.snapshot.id` custom property.
+
+1. Browse to your Application Insights resource in the Azure portal.
+2. Select **Search**.
+3. Type `ai.snapshot.id` in the Search text box and press Enter.
+
+![Search for telemetry with a snapshot ID in the portal](./media/snapshot-debugger/search-snapshot-portal.png)
+
+If this search returns no results, then, no snapshots were reported to Application Insights in the selected time range.
+
+To search for a specific snapshot ID from the Uploader logs, type that ID in the Search box. If you can't find records for a snapshot that you know was uploaded, follow these steps:
+
+1. Double-check that you're looking at the right Application Insights resource by verifying the instrumentation key.
+
+2. Using the timestamp from the Uploader log, adjust the Time Range filter of the search to cover that time range.
+
+If you still don't see an exception with that snapshot ID, then the exception record wasn't reported to Application Insights. This situation can happen if your application crashed after it took the snapshot but before it reported the exception record. In this case, check the App Service logs under `Diagnose and solve problems` to see if there were unexpected restarts or unhandled exceptions.
+
+## Edit network proxy or firewall rules
+
+If your application connects to the Internet via a proxy or a firewall, you may need to update the rules to communicate with the Snapshot Debugger service.
+
+The IPs used by Application Insights Snapshot Debugger are included in the Azure Monitor service tag. For more information, see [Service Tags documentation](../../virtual-network/service-tags-overview.md).
azure-monitor Snapshot Debugger Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-upgrade.md
+
+Title: Upgrading Azure Application Insights Snapshot Debugger
+description: How to upgrade Snapshot Debugger for .NET apps to the latest version on Azure App Services, or via NuGet packages
+Last updated: 03/28/2019
+# Upgrading the Snapshot Debugger
+
+To provide the best possible security for your data, Microsoft is moving away from TLS 1.0 and TLS 1.1, which have been shown to be vulnerable to determined attackers. If you're using an older version of the site extension, it will require an upgrade to continue working. This document outlines the steps needed to upgrade your Snapshot Debugger to the latest version.
+There are two primary upgrade paths, depending on whether you enabled the Snapshot Debugger using a site extension or by adding an SDK/NuGet package to your application. Both upgrade paths are discussed below.
+
+## Upgrading the site extension
+
+> [!IMPORTANT]
+> Older versions of Application Insights used a private site extension called _Application Insights extension for Azure App Service_. The current Application Insights experience is enabled by setting App Settings to light up a pre-installed site extension.
+> To avoid conflicts, which may cause your site to stop working, it is important to delete the private site extension first. See step 4 below.
+
+If you enabled the Snapshot debugger using the site extension, you can upgrade using the following procedure:
+
+1. Sign in to the Azure portal.
+2. Navigate to your resource that has Application Insights and Snapshot debugger enabled. For example, for a Web App, navigate to the App Service resource:
+
+ ![Screenshot of individual App Service resource named DiagService01](./media/snapshot-debugger-upgrade/app-service-resource.png)
+
+3. Once you've navigated to your resource, click on the Extensions blade and wait for the list of extensions to populate:
+
+ ![Screenshot of App Service Extensions showing Application Insights extension for Azure App Service installed](./media/snapshot-debugger-upgrade/application-insights-site-extension-to-be-deleted.png)
+
+4. If any version of _Application Insights extension for Azure App Service_ is installed, then select it and click Delete. Confirm **Yes** to delete the extension and wait for the delete to complete before moving to the next step.
+
+ ![Screenshot of App Service Extensions showing Application Insights extension for Azure App Service with the Delete button highlighted](./media/snapshot-debugger-upgrade/application-insights-site-extension-delete.png)
+
+5. Go to the Overview blade of your resource and click on Application Insights:
+
+ ![Screenshot of three buttons. Center button with name Application Insights is selected](./media/snapshot-debugger-upgrade/application-insights-button.png)
+
+6. If this is the first time you've viewed the Application Insights blade for this App Service, you'll be prompted to turn on Application Insights. Select **Turn on Application Insights**.
+
+ ![Screenshot of the first-time experience for the Application Insights blade with the Turn on Application Insights button highlighted](./media/snapshot-debugger-upgrade/turn-on-application-insights.png)
+
+7. The current Application Insights settings are displayed. Unless you want to take the opportunity to change your settings, you can leave them as is. The **Apply** button on the bottom of the blade isn't enabled by default and you'll have to toggle one of the settings to activate the button. You don't have to change any actual settings; rather, you can change a setting and then immediately change it back. We recommend toggling the Profiler setting and then selecting **Apply**.
+
+ ![Screenshot of Application Insights App Service Configuration page with Apply button highlighted in red](./media/snapshot-debugger-upgrade/view-application-insights-data.png)
+
+8. Once you click **Apply**, you'll be asked to confirm the changes.
+
+ > [!NOTE]
+ > The site will be restarted as part of the upgrade process.
+
+ ![Screenshot of App Service's apply monitoring prompt. Text box displays message: "We will now apply changes to your app settings and install our tools to link your Application Insights resource to the web app. This will restart the site. Do you want to continue?"](./media/snapshot-debugger-upgrade/apply-monitoring-settings.png)
+
+9. Click **Yes** to apply the changes and wait for the process to complete.
+
+The site has now been upgraded and is ready to use.
+
+## Upgrading Snapshot Debugger using SDK/Nuget
+
+If the application is using a version of `Microsoft.ApplicationInsights.SnapshotCollector` below version 1.3.1, it will need to be upgraded to a [newer version](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) to continue working.
azure-monitor Snapshot Debugger Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-vm.md
+
+Title: Enable Snapshot Debugger for .NET apps in Azure Service Fabric, Cloud Service, and Virtual Machines | Microsoft Docs
+description: Enable Snapshot Debugger for .NET apps in Azure Service Fabric, Cloud Service, and Virtual Machines
+Last updated: 03/07/2019
+# Enable Snapshot Debugger for .NET apps in Azure Service Fabric, Cloud Service, and Virtual Machines
+
+If your ASP.NET or ASP.NET Core application runs in Azure App Service, we highly recommend [enabling Snapshot Debugger through the Application Insights portal page](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json). However, if your application requires a customized Snapshot Debugger configuration, or a preview version of .NET Core, then follow these instructions ***in addition*** to the instructions for [enabling through the Application Insights portal page](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json).
+
+If your application runs in Azure Service Fabric, Cloud Service, Virtual Machines, or on-premises machines, the following instructions should be used.
+
+## Configure snapshot collection for ASP.NET applications
+
+1. [Enable Application Insights in your web app](../app/asp-net.md), if you haven't done it yet.
+
+2. Include the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package in your app.
+
+3. If needed, customize the Snapshot Debugger configuration added to [ApplicationInsights.config](../app/configuration-with-applicationinsights-config.md). The default Snapshot Debugger configuration is mostly empty and all settings are optional. Here is an example showing a configuration equivalent to the default configuration:
+
+ ```xml
+ <TelemetryProcessors>
+ <Add Type="Microsoft.ApplicationInsights.SnapshotCollector.SnapshotCollectorTelemetryProcessor, Microsoft.ApplicationInsights.SnapshotCollector">
+ <!-- The default is true, but you can disable Snapshot Debugging by setting it to false -->
+ <IsEnabled>true</IsEnabled>
+ <!-- Snapshot Debugging is usually disabled in developer mode, but you can enable it by setting this to true. -->
+ <!-- DeveloperMode is a property on the active TelemetryChannel. -->
+ <IsEnabledInDeveloperMode>false</IsEnabledInDeveloperMode>
+ <!-- How many times we need to see an exception before we ask for snapshots. -->
+ <ThresholdForSnapshotting>1</ThresholdForSnapshotting>
+ <!-- The maximum number of examples we create for a single problem. -->
+ <MaximumSnapshotsRequired>3</MaximumSnapshotsRequired>
+ <!-- The maximum number of problems that we can be tracking at any time. -->
+ <MaximumCollectionPlanSize>50</MaximumCollectionPlanSize>
+ <!-- How often we reconnect to the stamp. The default value is 15 minutes.-->
+ <ReconnectInterval>00:15:00</ReconnectInterval>
+ <!-- How often to reset problem counters. -->
+ <ProblemCounterResetInterval>1.00:00:00</ProblemCounterResetInterval>
+ <!-- The maximum number of snapshots allowed in ten minutes.The default value is 1. -->
+ <SnapshotsPerTenMinutesLimit>3</SnapshotsPerTenMinutesLimit>
+ <!-- The maximum number of snapshots allowed per day. -->
+ <SnapshotsPerDayLimit>30</SnapshotsPerDayLimit>
+ <!-- Whether or not to collect snapshot in low IO priority thread. The default value is true. -->
+ <SnapshotInLowPriorityThread>true</SnapshotInLowPriorityThread>
+ <!-- Agree to send anonymous data to Microsoft to make this product better. -->
+ <ProvideAnonymousTelemetry>true</ProvideAnonymousTelemetry>
+ <!-- The limit on the number of failed requests to request snapshots before the telemetry processor is disabled. -->
+ <FailedRequestLimit>3</FailedRequestLimit>
+ </Add>
+ </TelemetryProcessors>
+ ```
+
+4. Snapshots are collected only on exceptions that are reported to Application Insights. In some cases (for example, older versions of the .NET platform), you might need to [configure exception collection](../app/asp-net-exceptions.md#exceptions) to see exceptions with snapshots in the portal.
+
+## Configure snapshot collection for applications using ASP.NET Core LTS or above
+
+1. [Enable Application Insights in your ASP.NET Core web app](../app/asp-net-core.md), if you haven't done it yet.
+
+ > [!NOTE]
+ > Be sure that your application references version 2.1.1, or newer, of the Microsoft.ApplicationInsights.AspNetCore package.
+
+2. Include the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package in your app.
+
+3. Modify your application's `Startup` class to add and configure the Snapshot Collector's telemetry processor.
+ 1. If [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package version 1.3.5 or above is used, then add the following using statements to `Startup.cs`.
+
+ ```csharp
+ using Microsoft.ApplicationInsights.SnapshotCollector;
+ ```
+
+ Add the following at the end of the ConfigureServices method in the `Startup` class in `Startup.cs`.
+
+ ```csharp
+ services.AddSnapshotCollector((configuration) => Configuration.Bind(nameof(SnapshotCollectorConfiguration), configuration));
+ ```
+ 2. If [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package version 1.3.4 or below is used, then add the following using statements to `Startup.cs`.
+
+ ```csharp
+ using Microsoft.ApplicationInsights.SnapshotCollector;
+ using Microsoft.Extensions.Options;
+ using Microsoft.ApplicationInsights.AspNetCore;
+ using Microsoft.ApplicationInsights.Extensibility;
+ ```
+
+ Add the following `SnapshotCollectorTelemetryProcessorFactory` class to `Startup` class.
+
+ ```csharp
+ class Startup
+ {
+ private class SnapshotCollectorTelemetryProcessorFactory : ITelemetryProcessorFactory
+ {
+ private readonly IServiceProvider _serviceProvider;
+
+ public SnapshotCollectorTelemetryProcessorFactory(IServiceProvider serviceProvider) =>
+ _serviceProvider = serviceProvider;
+
+ public ITelemetryProcessor Create(ITelemetryProcessor next)
+ {
+ var snapshotConfigurationOptions = _serviceProvider.GetService<IOptions<SnapshotCollectorConfiguration>>();
+ return new SnapshotCollectorTelemetryProcessor(next, configuration: snapshotConfigurationOptions.Value);
+ }
+ }
+ ...
+ ```
+ Add the `SnapshotCollectorConfiguration` and `SnapshotCollectorTelemetryProcessorFactory` services to the startup pipeline:
+
+ ```csharp
+ // This method gets called by the runtime. Use this method to add services to the container.
+ public void ConfigureServices(IServiceCollection services)
+ {
+ // Configure SnapshotCollector from application settings
+ services.Configure<SnapshotCollectorConfiguration>(Configuration.GetSection(nameof(SnapshotCollectorConfiguration)));
+
+ // Add SnapshotCollector telemetry processor.
+ services.AddSingleton<ITelemetryProcessorFactory>(sp => new SnapshotCollectorTelemetryProcessorFactory(sp));
+
+ // TODO: Add other services your application needs here.
+ }
+ }
+ ```
+
+4. If needed, customize the Snapshot Debugger configuration by adding a `SnapshotCollectorConfiguration` section to *appsettings.json*. All settings in the Snapshot Debugger configuration are optional. Here's an example showing a configuration equivalent to the default configuration:
+
+ ```json
+ {
+ "SnapshotCollectorConfiguration": {
+ "IsEnabledInDeveloperMode": false,
+ "ThresholdForSnapshotting": 1,
+ "MaximumSnapshotsRequired": 3,
+ "MaximumCollectionPlanSize": 50,
+ "ReconnectInterval": "00:15:00",
+ "ProblemCounterResetInterval":"1.00:00:00",
+ "SnapshotsPerTenMinutesLimit": 1,
+ "SnapshotsPerDayLimit": 30,
+ "SnapshotInLowPriorityThread": true,
+ "ProvideAnonymousTelemetry": true,
+ "FailedRequestLimit": 3
+ }
+ }
+ ```
+
+## Configure snapshot collection for other .NET applications
+
+1. If your application isn't already instrumented with Application Insights, get started by [enabling Application Insights and setting the instrumentation key](../app/windows-desktop.md).
+
+2. Add the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package in your app.
+
+3. Snapshots are collected only on exceptions that are reported to Application Insights. You may need to modify your code to report them. The exception handling code depends on the structure of your application, but an example is below:
+
+ ```csharp
+ TelemetryClient _telemetryClient = new TelemetryClient();
+
+ void ExampleRequest()
+ {
+ try
+ {
+ // TODO: Handle the request.
+ }
+ catch (Exception ex)
+ {
+ // Report the exception to Application Insights.
+ _telemetryClient.TrackException(ex);
+
+ // TODO: Rethrow the exception if desired.
+ }
+ }
+ ```
+## Next steps
+
+- Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.
+- See [snapshots](snapshot-debugger.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.
+- For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md?toc=/azure/azure-monitor/toc.json).
+
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger.md
+
+ Title: Azure Application Insights Snapshot Debugger for .NET apps
+description: Debug snapshots are automatically collected when exceptions are thrown in production .NET apps
++ Last updated : 10/12/2021+++
+# Debug snapshots on exceptions in .NET apps
+When an exception occurs, you can automatically collect a debug snapshot from your live web application. The snapshot shows the state of source code and variables at the moment the exception was thrown. The Snapshot Debugger in [Azure Application Insights](../app/app-insights-overview.md) monitors exception telemetry from your web app. It collects snapshots on your top-throwing exceptions so that you have the information you need to diagnose issues in production. Include the [Snapshot collector NuGet package](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) in your application, and optionally configure collection parameters in [ApplicationInsights.config](../app/configuration-with-applicationinsights-config.md). Snapshots appear on [exceptions](../app/asp-net-exceptions.md) in the Application Insights portal.
+
+You can view debug snapshots in the portal to see the call stack and inspect variables at each call stack frame. To get a more powerful debugging experience with source code, open snapshots with Visual Studio 2019 Enterprise. In Visual Studio, you can also [set Snappoints to interactively take snapshots](/visualstudio/debugger/debug-live-azure-applications) without waiting for an exception.
+
+Debug snapshots are stored for 15 days. This retention policy is set on a per-application basis. If you need to increase this value, you can request an increase by opening a support case in the Azure portal.
+
+## Enable Application Insights Snapshot Debugger for your application
+Snapshot collection is available for:
+* .NET Framework and ASP.NET applications running .NET Framework [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or later.
+* .NET Core and ASP.NET Core applications running .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) on Windows.
+* .NET [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) applications on Windows.
+
+We don't recommend using .NET Core versions earlier than the current LTS release, because they're out of support.
+
+The following environments are supported:
+
+* [Azure App Service](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Functions](snapshot-debugger-function-app.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Cloud Services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running OS family 4 or later
+* [Azure Service Fabric services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running on Windows Server 2012 R2 or later
+* [Azure Virtual Machines and virtual machine scale sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running Windows Server 2012 R2 or later
+* [On-premises virtual or physical machines](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running Windows Server 2012 R2 or later or Windows 8.1 or later
+
+> [!NOTE]
+> Client applications (for example, WPF, Windows Forms or UWP) are not supported.
+
+If you've enabled Snapshot Debugger but aren't seeing snapshots, check our [Troubleshooting guide](snapshot-debugger-troubleshoot.md?toc=/azure/azure-monitor/toc.json).
+
+## Grant permissions
+
+Access to snapshots is protected by Azure role-based access control (Azure RBAC). To inspect a snapshot, you must first be added to the necessary role by a subscription owner.
+
+> [!NOTE]
+> Owners and contributors do not automatically have this role. If they want to view snapshots, they must add themselves to the role.
+
+Subscription owners should assign the `Application Insights Snapshot Debugger` role to users who will inspect snapshots. This role can be assigned to individual users or groups by subscription owners for the target Application Insights resource or its resource group or subscription.
+
+1. Assign the **Application Insights Snapshot Debugger** role to the users or groups who need to view snapshots.
+
+ For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
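+
+    Alternatively, you can assign the role with the Azure CLI. The following is a minimal sketch; the user principal name and the resource ID are placeholders to replace with your own values:
+
+    ```bash
+    # Hypothetical example: assign the Snapshot Debugger role scoped to a single Application Insights resource.
+    az role assignment create \
+      --assignee "user@contoso.com" \
+      --role "Application Insights Snapshot Debugger" \
+      --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.insights/components/<app-insights-resource>"
+    ```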
++
+> [!IMPORTANT]
+> Snapshots may contain personal data or other sensitive information in variable and parameter values. Snapshot data is stored in the same region as your Application Insights resource.
+
+## View snapshots in the portal
+
+After an exception occurs in your application and a snapshot is created, it can take 5 to 10 minutes before the snapshot is ready to view in the portal. To view snapshots, in the **Failures** pane, select the **Operations** button when viewing the **Operations** tab, or select the **Exceptions** button when viewing the **Exceptions** tab:
+
+![Failures Page](./media/snapshot-debugger/failures-page.png)
+
+Select an operation or exception in the right pane to open the **End-to-End Transaction Details** pane, then select the exception event. If a snapshot is available for the given exception, an **Open Debug Snapshot** button appears on the right pane with details for the [exception](../app/asp-net-exceptions.md).
+
+![Open Debug Snapshot button on exception](./media/snapshot-debugger/e2e-transaction-page.png)
+
+In the Debug Snapshot view, you see a call stack and a variables pane. When you select frames of the call stack in the call stack pane, you can view local variables and parameters for that function call in the variables pane.
+
+![View Debug Snapshot in the portal](./media/snapshot-debugger/open-snapshot-portal.png)
+
+Snapshots might include sensitive information, and by default they aren't viewable. To view snapshots, you must have the `Application Insights Snapshot Debugger` role assigned to you.
+
+## View snapshots in Visual Studio 2017 Enterprise or above
+1. Click the **Download Snapshot** button to download a `.diagsession` file, which can be opened by Visual Studio Enterprise.
+
+2. To open the `.diagsession` file, you need to have the Snapshot Debugger Visual Studio component installed. The Snapshot Debugger component is a required component of the ASP.NET workload in Visual Studio and can be selected from the Individual Component list in the Visual Studio installer. If you're using a version of Visual Studio before Visual Studio 2017 version 15.5, you'll need to install the extension from the [Visual Studio Marketplace](https://aka.ms/snapshotdebugger).
+
+3. After you open the snapshot file, the Minidump Debugging page in Visual Studio appears. Click **Debug Managed Code** to start debugging the snapshot. The snapshot opens to the line of code where the exception was thrown so that you can debug the current state of the process.
+
+ ![View debug snapshot in Visual Studio](./media/snapshot-debugger/open-snapshot-visual-studio.png)
+
+The downloaded snapshot includes any symbol files that were found on your web application server. These symbol files are required to associate snapshot data with source code. For App Service apps, make sure to enable symbol deployment when you publish your web apps.
+
+## How snapshots work
+
+The Snapshot Collector is implemented as an [Application Insights Telemetry Processor](../app/configuration-with-applicationinsights-config.md#telemetry-processors-aspnet). When your application runs, the Snapshot Collector Telemetry Processor is added to your application's telemetry pipeline.
+Each time your application calls [TrackException](../app/asp-net-exceptions.md#exceptions), the Snapshot Collector computes a Problem ID from the type of exception being thrown and the throwing method.
+Each time your application calls TrackException, a counter is incremented for the appropriate Problem ID. When the counter reaches the `ThresholdForSnapshotting` value, the Problem ID is added to a Collection Plan.
+
+The Snapshot Collector also monitors exceptions as they're thrown by subscribing to the [AppDomain.CurrentDomain.FirstChanceException](/dotnet/api/system.appdomain.firstchanceexception) event. When that event fires, the Problem ID of the exception is computed and compared against the Problem IDs in the Collection Plan.
+If there's a match, then a snapshot of the running process is created. The snapshot is assigned a unique identifier and the exception is stamped with that identifier. After the FirstChanceException handler returns, the thrown exception is processed as normal. Eventually, the exception reaches the TrackException method again where it, along with the snapshot identifier, is reported to Application Insights.
+
+The main process continues to run and serve traffic to users with little interruption. Meanwhile, the snapshot is handed off to the Snapshot Uploader process. The Snapshot Uploader creates a minidump and uploads it to Application Insights along with any relevant symbol (.pdb) files.
+
+> [!TIP]
+> - A process snapshot is a suspended clone of the running process.
+> - Creating the snapshot takes about 10 to 20 milliseconds.
+> - The default value for `ThresholdForSnapshotting` is 1. This is also the minimum value. Therefore, your app has to trigger the same exception **twice** before a snapshot is created.
+> - Set `IsEnabledInDeveloperMode` to true if you want to generate snapshots while debugging in Visual Studio.
+> - The snapshot creation rate is limited by the `SnapshotsPerTenMinutesLimit` setting. By default, the limit is one snapshot every ten minutes.
+> - No more than 50 snapshots per day may be uploaded.
+
+## Limitations
+
+The default data retention period is 15 days. For each Application Insights instance, a maximum of 50 snapshots is allowed per day.
+
+### Publish symbols
+The Snapshot Debugger requires symbol files on the production server to decode variables and to provide a debugging experience in Visual Studio.
+Version 15.2 (or above) of Visual Studio 2017 publishes symbols for release builds by default when it publishes to App Service. In prior versions, you need to add the following line to your publish profile `.pubxml` file so that symbols are published in release mode:
+
+```xml
+ <ExcludeGeneratedDebugSymbol>False</ExcludeGeneratedDebugSymbol>
+```
+
+For Azure Compute and other types, make sure that the symbol files are in the same folder as the main application .dll (typically, `wwwroot/bin`) or are available on the current path.
+
+> [!NOTE]
+> For more information on the different symbol options that are available, see the [Visual Studio documentation](/visualstudio/ide/reference/advanced-build-settings-dialog-box-csharp?view=vs-2019&preserve-view=true#output). For best results, we recommend using "Full", "Portable" or "Embedded".
+
+### Optimized builds
+In some cases, local variables can't be viewed in release builds because of optimizations that are applied by the JIT compiler.
+However, in Azure App Service, the Snapshot Collector can deoptimize throwing methods that are part of its Collection Plan.
+
+> [!TIP]
+> Install the Application Insights Site Extension in your App Service to get deoptimization support.
+
+## Next steps
+Enable Application Insights Snapshot Debugger for your application:
+
+* [Azure App Service](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Functions](snapshot-debugger-function-app.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Cloud Services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Service Fabric services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Virtual Machines and virtual machine scale sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [On-premises virtual or physical machines](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+
+Beyond Application Insights Snapshot Debugger:
+
+* [Set snappoints in your code](/visualstudio/debugger/debug-live-azure-applications) to get snapshots without waiting for an exception.
+* [Diagnose exceptions in your web apps](../app/asp-net-exceptions.md) explains how to make more exceptions visible to Application Insights.
+* [Smart Detection](../app/proactive-diagnostics.md) automatically discovers performance anomalies.
azure-monitor Workbooks Honey Comb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-honey-comb.md
The image below shows the CPU utilization of virtual machines across two subscr
| where CounterName == 'Available MBytes' | summarize CounterValue = avg(CounterValue) by Computer, _ResourceId | extend ResourceGroup = extract(@'/subscriptions/.+/resourcegroups/(.+)/providers/microsoft.compute/virtualmachines/.+', 1, _ResourceId)
-| extend ResourceGroup = iff(ResourceGroup == '', 'On-premise computers', ResourceGroup), Id = strcat(_ResourceId, '::', Computer)
+| extend ResourceGroup = iff(ResourceGroup == '', 'On-premises computers', ResourceGroup), Id = strcat(_ResourceId, '::', Computer)
``` 5. Run query.
The image below shows the CPU utilization of virtual machines across two subscr
| `Heatmap` | In this type, the cells are colored based on a metric column and a color palette. This provides a simple way to highlight metrics spreads across cells. | | `Thresholds` | In this type, cell colors are set by threshold rules (for example, _CPU > 90% => Red, 60% > CPU > 90% => Yellow, CPU < 60% => Green_) | | `Field Based` | In this type, a column provides specific RGB values to use for the node. Provides the most flexibility but usually requires more work to enable. |
-
+ ## Node format settings Honey comb authors can specify what content goes to the different parts of a node: top, left, center, right, and bottom. Authors are free to use any of the renderers workbooks supports (text, big number, spark lines, icon, etc.).
azure-netapp-files Azacsnap Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-troubleshoot.md
Title: Troubleshoot Azure Application Consistent Snapshot tool for Azure NetApp Files | Microsoft Docs
-description: Provides troubleshooting content for using the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files.
+ Title: Troubleshoot Azure Application Consistent Snapshot tool - Azure NetApp Files
+description: Troubleshoot communication issues, test failures, and other SAP HANA issues when using the Azure Application Consistent Snapshot (AzAcSnap) tool.
documentationcenter: ''
na Previously updated : 05/17/2021 Last updated : 06/13/2022 +
-# Troubleshoot Azure Application Consistent Snapshot tool
+# Troubleshoot the Azure Application Consistent Snapshot (AzAcSnap) tool
-This article provides troubleshooting content for using the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files and Azure Large Instance.
+This article describes how to troubleshoot issues when using the Azure Application Consistent Snapshot (AzAcSnap) tool for Azure NetApp Files and Azure Large Instance.
-The following are common issues that you may encounter while running the commands. Follow the resolution instructions mentioned to fix the issue. If you still encounter an issue, open a Service Request from Azure portal and assign the request into the SAP HANA Large Instance queue for Microsoft Support to respond.
+You might encounter several common issues when running AzAcSnap commands. Follow the instructions to troubleshoot the issues. If you still have issues, open a Service Request for Microsoft Support from the Azure portal and assign the request to the SAP HANA Large Instance queue.
-## Log files
+## Check log files, result files, and syslog
-One of the best sources of information for debugging any errors with AzAcSnap are the log files.
+Some of the best sources of information for investigating AzAcSnap issues are the log files, result files, and the system log.
-### Log file location
+### Log files
-The log files are stored in the directory configured per the `logPath` parameter in the AzAcSnap configuration file. The default configuration filename is `azacsnap.json` and the default value for `logPath` is `"./logs"` which means the log files are written into the `./logs` directory relative to where the `azacsnap` command is run. Making the `logPath` an absolute location (e.g. `/home/azacsnap/logs`) will ensure `azacsnap` outputs the logs into `/home/azacsnap/logs` irrespective of where the `azacsnap` command was run.
+The AzAcSnap log files are stored in the directory configured by the `logPath` parameter in the AzAcSnap configuration file. The default configuration filename is *azacsnap.json*, and the default value for `logPath` is *./logs*, which means the log files are written into the *./logs* directory relative to where the `azacsnap` command runs. If you make the `logPath` an absolute location, such as */home/azacsnap/logs*, `azacsnap` always outputs the logs into */home/azacsnap/logs*, regardless of where you run the `azacsnap` command.
-### Log file naming
+The log filename is based on the application name, `azacsnap`, the command run with `-c`, such as `backup`, `test`, or `details`, and the default configuration filename, such as *azacsnap.json*. With the `-c backup` command, a default log filename would be *azacsnap-backup-azacsnap.log*, written into the directory configured by `logPath`.
-The log filename is based on the application name (e.g. `azacsnap`), the command option (`-c`) used (e.g. `backup`, `test`, `details`, etc.) and the configuration filename (e.g. default = `azacsnap.json`). So if using the `-c backup` command, the log filename by default would be `azacsnap-backup-azacsnap.log` and is written into the directory configured by `logPath`.
+This naming convention allows for multiple configuration files, one per database, to help locate the associated log files. If the configuration filename is *SID.json*, then the log filename when using the `azacsnap -c backup --configfile SID.json` option is *azacsnap-backup-SID.log*.
-This naming convention was established to allow for multiple configuration files, one per database, and ensure ease of locating the associated logfiles. Therefore, if the configuration filename is `SID.json`, then the result filename when using the `azacsnap -c backup --configfile SID.json` options will be `azacsnap-backup-SID.log`.
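+
+For example, with a configuration file named *SID.json* and the default *./logs* location, the associated log files can be listed directly (a minimal sketch; the filenames follow the convention described above):
+
+```bash
+# List the AzAcSnap log files produced for the SID.json configuration file.
+# Assumes the default logPath of "./logs" relative to where azacsnap runs.
+ls -l ./logs/azacsnap-*-SID.log
+```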
+### Result files and syslog
-### Result file and syslog
+For the `-c backup` command, AzAcSnap writes to a *\*.result* file and to the system log, `/var/log/messages`, by using the `logger` command. The *\*.result* filename has the same base name as the log file, and goes into the same location. The *\*.result* file is a simple one line output file, such as the following example:
-For the `-c backup` command option AzAcSnap writes out to a `*.result` file and the system log (`/var/log/messages`) using the `logger` command. The `*.result` filename has the same base name as the [log file](#log-file-naming) and goes into the same [location as the log file](#log-file-location). It is a simple one line output file per the following examples.
-
-Example output from `*.result` file.
```output Database # 1 (PR1) : completed ok ```
-Example output from `/var/log/messages` file.
+Here's example output from `/var/log/messages`:
+ ```output Dec 17 09:01:13 azacsnap-rhel azacsnap: Database # 1 (PR1) : completed ok ```
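+
+To check the outcome of the most recent backup run, you can read the result file or search the system log. This is a simple sketch, assuming the default configuration filename and log location:
+
+```bash
+# Show the one-line result of the last backup run (default configuration filename and ./logs location assumed).
+cat ./logs/azacsnap-backup-azacsnap.result
+
+# Show the most recent azacsnap entries written to the system log.
+grep azacsnap /var/log/messages | tail -n 5
+```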
-## Failed communication with Azure NetApp Files
+## Troubleshoot failed 'test storage' command
-When validating communication with Azure NetApp Files, communication might fail or time-out. Check to ensure firewall rules are not blocking outbound traffic from the system running AzAcSnap to the following addresses and TCP/IP ports:-
+The command `azacsnap -c test --test storage` might not complete successfully.
-- (https://)management.azure.com:443-- (https://)login.microsoftonline.com:443
+### Check network firewalls
-### Testing communication using Cloud Shell
+Communication with Azure NetApp Files might fail or time out. To troubleshoot, make sure firewall rules aren't blocking outbound traffic from the system running AzAcSnap to the following addresses and TCP/IP ports:
-You can test the Service Principal is configured correctly by using Cloud Shell through your Azure Portal. This will test that the configuration is correct bypassing network controls within a VNet or virtual machine.
+- `https://management.azure.com:443`
+- `https://login.microsoftonline.com:443`
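+
+As a quick check, verify outbound HTTPS connectivity to these endpoints from the system running AzAcSnap. The following sketch uses `curl`; any returned HTTP status code (for example, 401 or 404) indicates the network path is open, while a connection timeout suggests a firewall is blocking the traffic:
+
+```bash
+# Check outbound connectivity on TCP port 443 to the Azure endpoints used by AzAcSnap.
+curl --silent --output /dev/null --write-out "management.azure.com: %{http_code}\n" https://management.azure.com
+curl --silent --output /dev/null --write-out "login.microsoftonline.com: %{http_code}\n" https://login.microsoftonline.com
+```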
-**Solution:**
+### Use Cloud Shell to validate configuration files
-1. Open a [Cloud Shell](../cloud-shell/overview.md) session in your Azure Portal.
-1. Make a test directory (e.g. `mkdir azacsnap`)
-1. cd to the azacsnap directory and download the latest version of azacsnap tool.
-
- ```bash
- wget https://aka.ms/azacsnapinstaller
- ```
+You can test whether the service principal is configured correctly by using Cloud Shell through the Azure portal. Using Cloud Shell tests for correct configuration, bypassing network controls within a virtual network or virtual machine (VM).
+
+1. In the Azure portal, open a [Cloud Shell](../cloud-shell/overview.md) session.
+1. Make a test directory, for example `mkdir azacsnap`.
+1. Switch to the *azacsnap* directory, and download the latest version of AzAcSnap.
- ```output
- -<snip>-
- HTTP request sent, awaiting response... 200 OK
- Length: 24402411 (23M) [application/octet-stream]
- Saving to: ΓÇÿazacsnapinstallerΓÇÖ
-
- azacsnapinstaller 100%[=================================================================================>] 23.27M 5.94MB/s in 5.3s
-
- 2021-09-02 23:46:18 (4.40 MB/s) - ΓÇÿazacsnapinstallerΓÇÖ saved [24402411/24402411]
- ```
-
-1. Make the installer executable. (e.g. `chmod +x azacsnapinstaller`)
+ ```bash
+ wget https://aka.ms/azacsnapinstaller
+ ```
+1. Make the installer executable, for example `chmod +x azacsnapinstaller`.
1. Extract the binary for testing.
- ```bash
- ./azacsnapinstaller -X -d .
- ```
-
- ```output
- +--+
- | Azure Application Consistent Snapshot Tool Installer |
- +--+
- |-> Installer version '5.0.2_Build_20210827.19086'
- |-> Extracting commands into ..
- |-> Cleaning up .NET extract dir
- ```
+ ```bash
+ ./azacsnapinstaller -X -d .
+ ```
+ The results look like the following output:
-1. Using the Cloud Shell Upload/Download icon, upload the Service Principal file (e.g. `azureauth.json`) and the AzAcSnap configuration file for testing (e.g. `azacsnap.json`)
-1. Run the Storage test from the Azure Cloud Shell console.
+ ```output
+ +--+
+ | Azure Application Consistent Snapshot Tool Installer |
+ +--+
+ |-> Installer version '5.0.2_Build_20210827.19086'
+ |-> Extracting commands into ..
+ |-> Cleaning up .NET extract dir
+ ```
- > [!NOTE]
- > The test command can take about 90 seconds to complete.
+1. Use the Cloud Shell Upload/Download icon to upload the service principal file, *azureauth.json*, and the AzAcSnap configuration file, such as *azacsnap.json*, for testing.
+1. Run the `storage` test.
- ```bash
- ./azacsnap -c test --test storage
- ```
+ ```bash
+ ./azacsnap -c test --test storage
+ ```
- ```output
- BEGIN : Test process started for 'storage'
- BEGIN : Storage test snapshots on 'data' volumes
- BEGIN : 1 task(s) to Test Snapshots for Storage Volume Type 'data'
- PASSED: Task#1/1 Storage test successful for Volume
- END : Storage tests complete
- END : Test process complete for 'storage'
- ```
+ > [!NOTE]
+ > The test command can take about 90 seconds to complete.
-## Problems with SAP HANA
+### Failed test on Azure Large Instance
-### Running the test command fails
+The following error example is from running `azacsnap` on Azure Large Instance:
-When validating communication with SAP HANA by running a test with `azacsnap -c test --test hana` and it provides the following error:
+```bash
+azacsnap -c test --test storage
+```
```output
-> azacsnap -c test --test hana
-BEGIN : Test process started for 'hana'
-BEGIN : SAP HANA tests
-CRITICAL: Command 'test' failed with error:
-Cannot get SAP HANA version, exiting with error: 127
+The authenticity of host '172.18.18.11 (172.18.18.11)' can't be established.
+ECDSA key fingerprint is SHA256:QxamHRn3ZKbJAKnEimQpVVCknDSO9uB4c9Qd8komDec.
+Are you sure you want to continue connecting (yes/no)?
```
-**Solution:**
+To troubleshoot this error, don't respond `yes`. Make sure that your storage IP address is correct. You can confirm the storage IP address with the Microsoft operations team.
-1. Check the configuration file (for example, `azacsnap.json`) for each HANA instance to ensure the SAP HANA database values are correct.
-1. Try to run the command below to verify if the `hdbsql` command is in the path and it can connect to the SAP HANA Server. The following example shows the correct running of the command and its output.
+The error usually appears when the Azure Large Instance storage user doesn't have access to the underlying storage. To determine whether the storage user has access to storage, run the `ssh` command to validate communication with the storage platform.
- ```bash
- hdbsql -n 172.18.18.50 - i 00 -d SYSTEMDB -U AZACSNAP "\s"
- ```
+```bash
+ssh <StorageBackupname>@<Storage IP address> "volume show -fields volume"
+```
- ```output
- host : 172.18.18.50
- sid : H80
- dbname : SYSTEMDB
- user : AZACSNAP
- kernel version: 2.00.040.00.1553674765
- SQLDBC version: libSQLDBCHDB 2.04.126.1551801496
- autocommit : ON
- locale : en_US.UTF-8
- input encoding: UTF8
- sql port : saphana1:30013
- ```
+The following example shows the expected output:
- In this example, the `hdbsql` command isn't in the users `$PATH`.
+```bash
+ssh clt1h80backup@10.8.0.16 "volume show -fields volume"
+```
- ```bash
- hdbsql -n 172.18.18.50 - i 00 -U AZACSNAP "select version from sys.m_database"
- ```
+```output
+vserver volume
+
+osa33-hana-c01v250-client25-nprod hana_data_h80_mnt00001_t020_vol
+osa33-hana-c01v250-client25-nprod hana_data_h80_mnt00002_t020_vol
+```
- ```output
- If 'hdbsql' is not a typo you can use command-not-found to lookup the package that contains it, like this:
- cnf hdbsql
- ```
+### Failed test with Azure NetApp Files
- In this example, the `hdbsql` command is temporarily added to the user's `$PATH`, but when run shows the connection key hasn't been set up correctly with the `hdbuserstore Set` command (refer to Getting Started guide for details):
+The following error example is from running `azacsnap` with Azure NetApp Files:
- ```bash
- export PATH=$PATH:/hana/shared/H80/exe/linuxx86_64/hdb/
- ```
+```bash
+azacsnap --configfile azacsnap.json.NOT-WORKING -c test --test storage
+```
- ```bash
- hdbsql -n 172.18.18.50 -i 00 -U AZACSNAP "select version from sys.m_database"
- ```
+```output
+BEGIN : Test process started for 'storage'
+BEGIN : Storage test snapshots on 'data' volumes
+BEGIN : 1 task(s) to Test Snapshots for Storage Volume Type 'data'
+ERROR: Could not create StorageANF object [authFile = 'azureauth.json']
+```
- ```output
- * -10104: Invalid value for KEY (AZACSNAP)
- ```
+To troubleshoot this error:
- > [!NOTE]
- > To permanently add to the user's `$PATH`, update the user's `$HOME/.profile` file
+1. Check for the existence of the service principal file, *azureauth.json*, as set in the *azacsnap.json* configuration file.
+1. Check the log file, for example, *logs/azacsnap-test-azacsnap.log*, to see if the service principal file has the correct content. The following log file output shows that the client secret key is invalid.
-### Insufficient privilege
+ ```output
+ [19/Nov/2020:18:39:49 +13:00] DEBUG: [PID:0020080:StorageANF:659] [1] Innerexception: Microsoft.IdentityModel.Clients.ActiveDirectory.AdalServiceException AADSTS7000215: Invalid client secret is provided.
+ ```
-If running `azacsnap` presents an error such as `* 258: insufficient privilege`, check to ensure the appropriate privilege has been asssigned to the "AZACSNAP" database user (assuming this is the user created per the [installation guide](azacsnap-installation.md#enable-communication-with-database)). Verify the user's current privilege with the following command:
+1. Check the log file to see if the service principal has expired. The following log file example shows that the client secret keys are expired.
-```bash
-hdbsql -U AZACSNAP "select GRANTEE,GRANTEE_TYPE,PRIVILEGE,IS_VALID,IS_GRANTABLE from sys.granted_privileges " | grep -i -e GRANTEE -e azacsnap
-```
+ ```output
+ [19/Nov/2020:18:41:10 +13:00] DEBUG: [PID:0020257:StorageANF:659] [1] Innerexception: Microsoft.IdentityModel.Clients.ActiveDirectory.AdalServiceException AADSTS7000222: The provided client secret keys are expired. Visit the Azure Portal to create new keys for your app, or consider using certificate credentials for added security: https://docs.microsoft.com/azure/active-directory/develop/active-directory-certificate-credentials
+ ```
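+
+If the client secret has expired, you can create a new one and update the service principal file. The following sketch uses the Azure CLI; the application (client) ID is a placeholder:
+
+```bash
+# Hypothetical example: generate a new client secret for the service principal used by AzAcSnap,
+# then update azureauth.json with the new secret value from the command output.
+az ad sp credential reset --id 00000000-0000-0000-0000-000000000000
+```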
-```output
-GRANTEE,GRANTEE_TYPE,PRIVILEGE,IS_VALID,IS_GRANTABLE
-"AZACSNAP","USER","BACKUP ADMIN","TRUE","FALSE"
-"AZACSNAP","USER","CATALOG READ","TRUE","FALSE"
-"AZACSNAP","USER","CREATE ANY","TRUE","TRUE"
-```
+## Troubleshoot failed 'test hana' command
-The error might also provide further information to help determine the required SAP HANA privileges, such as the output of `Detailed info for this error can be found with guid '99X9999X99X9999X99X99XX999XXX999' SQLSTATE: HY000`. In this case follow SAP's instructions at [SAP Help Portal - GET_INSUFFICIENT_PRIVILEGE_ERROR_DETAILS](https://help.sap.com/viewer/b3ee5778bc2e4a089d3299b82ec762a7/2.0.05/en-US/9a73c4c017744288b8d6f3b9bc0db043.html) which recommends using the following SQL query to determine the detail on the required privilege.
+The command `azacsnap -c test --test hana` might not complete successfully.
-```sql
-CALL SYS.GET_INSUFFICIENT_PRIVILEGE_ERROR_DETAILS ('99X9999X99X9999X99X99XX999XXX999', ?)
-```
+### Command not found
-```output
-GUID,CREATE_TIME,CONNECTION_ID,SESSION_USER_NAME,CHECKED_USER_NAME,PRIVILEGE,IS_MISSING_ANALYTIC_PRIVILEGE,IS_MISSING_GRANT_OPTION,DATABASE_NAME,SCHEMA_NAME,OBJECT_NAME,OBJECT_TYPE
-"99X9999X99X9999X99X99XX999XXX999","2021-01-01 01:00:00.180000000",120212,"AZACSNAP","AZACSNAP","DATABASE ADMIN or DATABASE BACKUP ADMIN","FALSE","FALSE","","","",""
-```
+When setting up communication with SAP HANA, the `hdbuserstore` program is used to create the secure communication settings. AzAcSnap also requires the `hdbsql` program for all communications with SAP HANA. These programs are usually under */usr/sap/\<SID>/SYS/exe/hdb/* or */usr/sap/hdbclient* and must be in the user's `$PATH`.
-In the example above, adding the 'DATABASE BACKUP ADMIN' privilege to the SYSTEMDB's AZACSNAP user, should resolve the insufficient privilege error.
+- In the following example, the `hdbsql` command isn't in the user's `$PATH`.
-### The `hdbuserstore` location
+ ```bash
+    hdbsql -n 172.18.18.50 -i 00 -U AZACSNAP "select version from sys.m_database"
+ ```
-When setting up communication with SAP HANA, the `hdbuserstore` program is used to create the secure communication settings. The `hdbuserstore` program is usually found under `/usr/sap/<SID>/SYS/exe/hdb/` or `/usr/sap/hdbclient`. Normally the installer adds the correct location to the `azacsnap` user's `$PATH`.
+ ```output
+ If 'hdbsql' is not a typo you can use command-not-found to lookup the package that contains it, like this:
+ cnf hdbsql
+ ```
-## Failed test with storage
+- The following example temporarily adds the `hdbsql` command to the user's `$PATH`, allowing `azacsnap` to run correctly.
-The command `azacsnap -c test --test storage` does not complete successfully.
+ ```bash
+ export PATH=$PATH:/hana/shared/H80/exe/linuxx86_64/hdb/
+ ```
-### Azure Large Instance
+Make sure the installer added the location of these files to the AzAcSnap user's `$PATH`.
-The following example is from running `azacsnap` on SAP HANA on Azure Large Instance:
+> [!NOTE]
+> To permanently add to the user's `$PATH`, update the user's *$HOME/.profile* file.
-```bash
-azacsnap -c test --test storage
-```
+### Invalid value for key
-```output
-The authenticity of host '172.18.18.11 (172.18.18.11)' can't be established.
-ECDSA key fingerprint is SHA256:QxamHRn3ZKbJAKnEimQpVVCknDSO9uB4c9Qd8komDec.
-Are you sure you want to continue connecting (yes/no)?
-```
+This command output shows that the connection key hasn't been set up correctly with the `hdbuserstore Set` command.
-**Solution:** The above error normally shows up when Azure Large Instance storage user has no access to the underlying storage. To validate access to storage with the provided storage user, run the `ssh`
-command to validate communication with the storage platform.
+ ```bash
+ hdbsql -n 172.18.18.50 -i 00 -U AZACSNAP "select version from sys.m_database"
+ ```
-```bash
-ssh <StorageBackupname>@<Storage IP address> "volume show -fields volume"
-```
+ ```output
+ * -10104: Invalid value for KEY (AZACSNAP)
+ ```
-An example with expected output:
+For more information on setup of the `hdbuserstore`, see [Get started with AzAcSnap](azacsnap-get-started.md).
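+
+As an illustration, a connection key for the AZACSNAP user can be created with `hdbuserstore`. This is a hypothetical sketch; use the host, SQL port, user, and password that are valid for your SAP HANA system:
+
+```bash
+# Hypothetical example: store the AZACSNAP connection key used by the -U AZACSNAP option.
+# Replace the address, port, and password with values for your environment.
+hdbuserstore Set AZACSNAP 172.18.18.50:30013 AZACSNAP <password>
+```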
-```bash
-ssh clt1h80backup@10.8.0.16 "volume show -fields volume"
-```
+### Failed test
+
+When validating communication with SAP HANA by running a test with `azacsnap -c test --test hana`, you might get the following error:
```output
-vserver volume
-
-osa33-hana-c01v250-client25-nprod hana_data_h80_mnt00001_t020_vol
-osa33-hana-c01v250-client25-nprod hana_data_h80_mnt00002_t020_vol
+> azacsnap -c test --test hana
+BEGIN : Test process started for 'hana'
+BEGIN : SAP HANA tests
+CRITICAL: Command 'test' failed with error:
+Cannot get SAP HANA version, exiting with error: 127
```
-#### The authenticity of host '172.18.18.11 (172.18.18.11)' can't be established
+To troubleshoot this error:
-```bash
-azacsnap -c test --test storage
-```
+1. Check the configuration file, for example *azacsnap.json*, for each HANA instance, to ensure that the SAP HANA database values are correct.
+1. Run the following command to verify that the `hdbsql` command is in the path and that it can connect to the SAP HANA server.
-```output
-BEGIN : Test process started for 'storage'
-BEGIN : Storage test snapshots on 'data' volumes
-BEGIN : 1 task(s) to Test Snapshots for Storage Volume Type 'data'
-The authenticity of host '10.3.0.18 (10.3.0.18)' can't be established.
-ECDSA key fingerprint is SHA256:cONAr0lpafb7gY4l31AdWTzM3s9LnKDtpMdPA+cxT7Y.
-Are you sure you want to continue connecting (yes/no)?
-```
+ ```bash
+    hdbsql -n 172.18.18.50 -i 00 -d SYSTEMDB -U AZACSNAP "\s"
+ ```
+
+ The following example shows the output when the command runs correctly:
-**Solution:** Do not select Yes. Ensure that your storage IP address is correct. If there is still an
-issue, confirm the storage IP address with Microsoft operations team.
+ ```output
+ host : 172.18.18.50
+ sid : H80
+ dbname : SYSTEMDB
+ user : AZACSNAP
+ kernel version: 2.00.040.00.1553674765
+ SQLDBC version: libSQLDBCHDB 2.04.126.1551801496
+ autocommit : ON
+ locale : en_US.UTF-8
+ input encoding: UTF8
+ sql port : saphana1:30013
+ ```
-### Azure NetApp Files
+### Insufficient privilege error
-The following example is from running `azacsnap` on a VM using Azure NetApp Files:
+If running `azacsnap` presents an error such as `* 258: insufficient privilege`, check that the user has the appropriate AZACSNAP database user privileges set up per the [installation guide](azacsnap-installation.md#enable-communication-with-database). Verify the user's privileges with the following command:
```bash
-azacsnap --configfile azacsnap.json.NOT-WORKING -c test --test storage
+hdbsql -U AZACSNAP "select GRANTEE,GRANTEE_TYPE,PRIVILEGE,IS_VALID,IS_GRANTABLE from sys.granted_privileges " | grep -i -e GRANTEE -e azacsnap
```
+The command should return the following output:
+ ```output
-BEGIN : Test process started for 'storage'
-BEGIN : Storage test snapshots on 'data' volumes
-BEGIN : 1 task(s) to Test Snapshots for Storage Volume Type 'data'
-ERROR: Could not create StorageANF object [authFile = 'azureauth.json']
+GRANTEE,GRANTEE_TYPE,PRIVILEGE,IS_VALID,IS_GRANTABLE
+"AZACSNAP","USER","BACKUP ADMIN","TRUE","FALSE"
+"AZACSNAP","USER","CATALOG READ","TRUE","FALSE"
+"AZACSNAP","USER","CREATE ANY","TRUE","TRUE"
```
-**Solution:**
-
-1. Check for the existence of the Service Principal file, `azureauth.json`, as set in the `azacsnap.json` configuration file.
-1. Check the log file (for example, `logs/azacsnap-test-azacsnap.log`) to see if the Service Principal (`azureauth.json`) has the correct content. Example from log as follows:
+The error might provide further information to help determine the required SAP HANA privileges, such as `Detailed info for this error can be found with guid '99X9999X99X9999X99X99XX999XXX999' SQLSTATE: HY000`. In this case, follow the instructions at [SAP Help Portal - GET_INSUFFICIENT_PRIVILEGE_ERROR_DETAILS](https://help.sap.com/viewer/b3ee5778bc2e4a089d3299b82ec762a7/2.0.05/en-US/9a73c4c017744288b8d6f3b9bc0db043.html), which recommend using the following SQL query to determine the details of the required privilege:
- ```output
- [19/Nov/2020:18:39:49 +13:00] DEBUG: [PID:0020080:StorageANF:659] [1] Innerexception: Microsoft.IdentityModel.Clients.ActiveDirectory.AdalServiceException AADSTS7000215: Invalid client secret is provided.
- ```
+```sql
+CALL SYS.GET_INSUFFICIENT_PRIVILEGE_ERROR_DETAILS ('99X9999X99X9999X99X99XX999XXX999', ?)
+```
-1. Check the log file (for example, `logs/azacsnap-test-azacsnap.log`) to see if the Service Principal (`azureauth.json`) has expired. Example from log as follows:
+```output
+GUID,CREATE_TIME,CONNECTION_ID,SESSION_USER_NAME,CHECKED_USER_NAME,PRIVILEGE,IS_MISSING_ANALYTIC_PRIVILEGE,IS_MISSING_GRANT_OPTION,DATABASE_NAME,SCHEMA_NAME,OBJECT_NAME,OBJECT_TYPE
+"99X9999X99X9999X99X99XX999XXX999","2021-01-01 01:00:00.180000000",120212,"AZACSNAP","AZACSNAP","DATABASE ADMIN or DATABASE BACKUP ADMIN","FALSE","FALSE","","","",""
+```
- ```output
- [19/Nov/2020:18:41:10 +13:00] DEBUG: [PID:0020257:StorageANF:659] [1] Innerexception: Microsoft.IdentityModel.Clients.ActiveDirectory.AdalServiceException AADSTS7000222: The provided client secret keys are expired. Visit the Azure Portal to create new keys for your app, or consider using certificate credentials for added security: https://docs.microsoft.com/azure/active-directory/develop/active-directory-certificate-credentials
- ```
+In the preceding example, adding the `DATABASE BACKUP ADMIN` privilege to the SYSTEMDB's AZACSNAP user should resolve the insufficient privilege error.
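+
+As an illustration, the missing privilege could be granted with `hdbsql`. This is a hypothetical sketch; `SYSTEMADMINKEY` is an assumed `hdbuserstore` key for a SYSTEMDB administrator, so substitute credentials that are valid in your environment:
+
+```bash
+# Hypothetical example: grant the missing system privilege to the AZACSNAP user.
+# SYSTEMADMINKEY is an assumed hdbuserstore key for a SYSTEMDB administrative user.
+hdbsql -U SYSTEMADMINKEY "GRANT DATABASE BACKUP ADMIN TO AZACSNAP"
+```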
## Next steps -- [Tips](azacsnap-tips.md)
+- [Tips and tricks for using AzAcSnap](azacsnap-tips.md)
+- [AzAcSnap command reference](azacsnap-cmd-ref-configure.md)
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
This article provides references to best practices that can help you understand
The following diagram summarizes the categories of solution architectures that Azure NetApp Files offers:
-![Solution architecture categories](../media/azure-netapp-files/solution-architecture-categories.png)
## Linux OSS Apps and Database solutions
azure-percept Overview Ai Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-ai-models.md
With pre-trained models, no coding or training data collection is required. Simp
## Reference solutions
-A people counting reference solution is also available. This reference solution is an open-source AI application providing edge-based people counting with user-defined zone entry/exit events. Video and AI output from the on-premise edge device is egressed to [Azure Data Lake](https://azure.microsoft.com/solutions/data-lake/), with the user interface running as an Azure website. AI inferencing is provided by an open-source AI model for people detection.
+A people counting reference solution is also available. This reference solution is an open-source AI application providing edge-based people counting with user-defined zone entry/exit events. Video and AI output from the on-premises edge device is egressed to [Azure Data Lake](https://azure.microsoft.com/solutions/data-lake/), with the user interface running as an Azure website. AI inferencing is provided by an open-source AI model for people detection.
:::image type="content" source="./media/overview-ai-models/people-detector.gif" alt-text="Spatial analytics pre-built solution gif.":::
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | communicationservices | Yes | Yes | No |
+> | communicationservices | Yes | Yes <br/><br/> Resources with attached phone numbers can't be moved to subscriptions in different data locations or to subscriptions that don't support phone numbers. | No |
## Microsoft.Compute
azure-signalr Concept Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-metrics.md
description: Metrics in Azure SignalR Service.
Previously updated : 04/08/2022 Last updated : 06/03/2022 # Metrics in Azure SignalR Service
-Azure SignalR Service has some built-in metrics and you and sets up [alerts](../azure-monitor/alerts/alerts-overview.md) and [autoscale](./signalr-howto-scale-autoscale.md) base on metrics.
+Metrics in SignalR Service is an implementation of [Azure Monitor Metrics](../azure-monitor/essentials/data-platform-metrics.md). Understanding how Azure Monitor collects and displays metrics is helpful for using metrics in SignalR Service. Azure SignalR Service defines a collection of metrics that can be used to set up [alerts](../azure-monitor/alerts/alerts-overview.md) and [autoscale conditions](./signalr-howto-scale-autoscale.md).
-## Understand metrics
+## SignalR Service metrics
-Metrics provide the running info of the service. The available metrics are:
-
-> [!CAUTION]
-> The aggregation type "Count" is meaningless for all the metrics. Please DO NOT use it.
+Metrics provide insights into the operational state of the service. The available metrics are:
|Metric|Unit|Recommended Aggregation Type|Description|Dimensions| ||||||
-|Connection Close Count|Count|Sum|The count of connections closed by various reasons.|Endpoint, ConnectionCloseCategory|
-|Connection Count|Count|Max / Avg|The amount of connection.|Endpoint|
-|Connection Open Count|Count|Sum|The count of new connections opened.|Endpoint|
-|Connection Quota Utilization|Percent|Max / Avg|The percentage of connection connected relative to connection quota.|No Dimensions|
-|Inbound Traffic|Bytes|Sum|The inbound traffic of service|No Dimensions|
-|Message Count|Count|Sum|The total amount of messages.|No Dimensions|
-|Outbound Traffic|Bytes|Sum|The outbound traffic of service|No Dimensions|
-|System Errors|Percent|Avg|The percentage of system errors|No Dimensions|
-|User Errors|Percent|Avg|The percentage of user errors|No Dimensions|
-|Server Load|Percent|Max / Avg|The percentage of server load|No Dimensions|
+|**Connection Close Count**|Count|Sum|The count of connections closed for various reasons; see ConnectionCloseCategory for details.|Endpoint, ConnectionCloseCategory|
+|**Connection Count**|Count|Max or Avg|The number of connections.|Endpoint|
+|**Connection Open Count**|Count|Sum|The count of new connections opened.|Endpoint|
+|**Connection Quota Utilization**|Percent|Max or Avg|The percentage of connections to the server relative to the available quota.|No Dimensions|
+|**Inbound Traffic**|Bytes|Sum|The volume of inbound traffic to the service.|No Dimensions|
+|**Message Count**|Count|Sum|The total number of messages.|No Dimensions|
+|**Outbound Traffic**|Bytes|Sum|The volume of outbound traffic from the service.|No Dimensions|
+|**System Errors**|Percent|Avg|The percentage of system errors.|No Dimensions|
+|**User Errors**|Percent|Avg|The percentage of user errors.|No Dimensions|
+|**Server Load**|Percent|Max or Avg|The percentage of server load.|No Dimensions|
+
+> [!NOTE]
+> The aggregation type **Count** is the count of sampling data received. Count is defined as a general metrics aggregation type and can't be excluded from the list of available aggregation types. It's not generally useful for SignalR Service but it can sometimes be used to check if the sampling data has been sent to metrics.
+
+### Metrics dimensions
+
+A *dimension* is a name-value pair with extra data to describe the metric value. Some metrics don't have dimensions; others have multiple dimensions.
-### Understand Dimensions
+The following two sections describe the dimensions available in SignalR Service metrics.
-Dimensions of a metric are name/value pairs that carry extra data to describe the metric value.
+#### Endpoint
-The dimensions available in some metrics:
+Describes the type of connection. Includes dimension values: **Client**, **Server**, and **LiveTrace**.
-* Endpoint: Describe the type of connection. Including dimension values: Client, Server, LiveTrace
-* ConnectionCloseCategory: Describe the categories of why connection getting closed. Including dimension values:
- - Normal: Normal closure.
- - Throttled: With (Message count/rate or connection) throttling, check Connection Count and Message Count current usage and your resource limits.
- - PingTimeout: Connection ping timeout.
- - NoAvailableServerConnection: Client connection cannot be established (won't even pass handshake) as no available server connection.
- - InvokeUpstreamFailed: Upstream invoke failed.
- - SlowClient: Too many messages queued up at service side, which needed to be sent.
- - HandshakeError: Terminate connection in handshake phase, could be caused by the remote party closed the WebSocket connection without completing the close handshake. Mostly, it's caused by network issue. Otherwise, please check if the client is able to create websocket connection due to some browser settings.
- - ServerConnectionNotFound: Target hub server not available. Nothing need to be done for improvement, this is by-design and reconnection should be done after this drop.
- - ServerConnectionClosed: Client connection aborted because the corresponding server connection is dropped. When app server uses Azure SignalR Service SDK, in the background, it initiates server connections to the remote Azure SignalR service. Each client connection to the service is associated with one of the server connections to route traffic between client and app server. Once a server connection is closed, all the client connections it serves will be closed with ServerConnectionDropped message.
- - ServiceTransientError: Internal server error
- - BadRequest: This caused by invalid hub name, wrong payload, etc.
- - ClosedByAppServer: App server asks the service to close the client.
- - ServiceReload: This is triggered when a connection is dropped due to an internal service component reload. This event does not indicate a malfunction and is part of normal service operation.
- - ServiceModeSwitched: Connection closed after service mode switched like from serverless mode to default mode
- - Unauthorized: The connection is unauthorized
+#### ConnectionCloseCategory
-Learn more about [multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics)
+Gives the reason for closing the connection. Includes the following dimension values.
-### Understand the minimum grain of message count
+| Value | Description |
+||--|
+| **Normal** | Connection closed normally.|
+|**Throttled**|The connection was throttled by message count/rate or connection count limits. Check your current Connection Count and Message Count usage against your resource limits.|
+|**PingTimeout**|Connection ping timeout.|
+|**NoAvailableServerConnection**|Client connection can't be established (won't even pass handshake) because there's no available server connection.|
+|**InvokeUpstreamFailed**|Upstream invoke failed.|
+|**SlowClient**|Too many unsent messages were queued up on the service side.|
+|**HandshakeError**|The connection terminated in the handshake phase. This error is often caused by the remote party closing the WebSocket connection without completing the close handshake, usually because of a network issue. Otherwise, check browser settings to see whether the client can create a WebSocket connection.|
+|**ServerConnectionNotFound**|The target hub server isn't available. No action is needed; this behavior is by design, and the client should reconnect after the connection drops.|
+|**ServerConnectionClosed**|Client connection closed because the corresponding server connection was dropped. When app server uses Azure SignalR Service SDK, in the background, it initiates server connections to the remote Azure SignalR Service. Each client connection to the service is associated with one of the server connections to route traffic between the client and app server. Once a server connection is closed, all the client connections it serves will be closed with the **ServerConnectionDropped** message.|
+|**ServiceTransientError**|Internal server error.|
+|**BadRequest**|A bad request is caused by an invalid hub name, wrong payload, or a malformed request.|
+|**ClosedByAppServer**|App server asked the service to close the client.|
+|**ServiceReload**|Service reload is triggered when a connection is dropped due to an internal service component reload. This event doesn't indicate a malfunction and is part of normal service operation.|
+|**ServiceModeSwitched**|Connection closed after service mode switched, such as from Serverless mode to Default mode.|
+|**Unauthorized**|The connection is unauthorized.|
-The minimum grain of message count showed in metric are 1, which means 2 KB outbound data traffic. If user sending very small amount of messages such as several bytes in a sampling time period, the message count will be 0.
+For more information, see [multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics) in Azure Monitor.
-The way to check out small amount of messages is using metrics *Outbound Traffic*, which is count by bytes.
+### Message Count granularity
-### Understand System Errors and User Errors
+The minimum Message Count granularity is two KB of outbound data traffic. Every two KB is one unit for Message Count. If a client is sending small or infrequent messages totaling less than one unit in a sampling time period, the message count will be zero (0). The count is zero even though messages were sent. The way to check for a small number/size of messages is by using the metric **Outbound Traffic**, which is a count of bytes sent.
-The Errors are the percentage of failure operations. Operations are consist of connecting, sending message and so on. The difference between System Error and User Error is that the former is the failure caused by our internal service error and the latter is caused by users. In normal case, the System Errors should be very low and near to zero.
+### System errors and user errors
+
+The **User Errors** and **System Errors** metrics are the percentage of attempted operations (connecting, sending a message, and so on) that failed. A system error is a failure in the internal system logic. A user error is generally an application error, often related to networking. Normally, the percentage of system errors should be low, near zero.
> [!IMPORTANT]
-> In some cases, the User Error will be always very high, especially in serverless case. In some browser, when user close the web page, the SignalR client doesn't close gracefully. The service will finally close it because of timeout. The timeout closure will be counted into User Error.
+> In some situations, the user error rate can be very high, especially in Serverless mode. In some browsers, when a user closes the web page, the SignalR client doesn't shut down gracefully. A connection may remain open but unresponsive until SignalR Service finally closes it because of a timeout. The timeout closure is counted in the **User Errors** metric.
### Metrics suitable for autoscaling
-Connection Quota Utilization and Server load are percentage metrics which show the usage **under current unit** configuration. So they could be used to set autoscaling rules. For example, you could set a rule to scale up if the server load is greater than 70%.
+>[!NOTE]
+> Autoscaling is a Premium Tier feature only.
+
+**Connection Quota Utilization** and **Server Load** show the percentage of utilization or load compared to the currently allocated unit count. These metrics are commonly used in autoscaling rules.
+
+For example, if the current allocation is one unit and there are 750 connections to the service, the Connection Quota Utilization is 750/1000 = 0.75. Server Load is calculated similarly, using values for compute capacity.
-Learn more about [autoscale](./signalr-howto-scale-autoscale.md)
+To learn more about autoscaling, see [Automatically scale units of an Azure SignalR Service](./signalr-howto-scale-autoscale.md).
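+
+As an illustration, an autoscale rule based on these metrics can be created with the Azure CLI. This is a minimal sketch; the resource names are placeholders, and the metric name `ConnectionQuotaUtilization` is an assumption to verify against your resource's metric definitions:
+
+```bash
+# Hypothetical sketch: scale out by one unit when average Connection Quota Utilization
+# stays above 80% for five minutes. Resource names are placeholders.
+az monitor autoscale create \
+  --resource-group myResourceGroup \
+  --resource mySignalRService \
+  --resource-type Microsoft.SignalRService/signalR \
+  --name signalr-autoscale \
+  --min-count 1 --max-count 10 --count 1
+
+az monitor autoscale rule create \
+  --resource-group myResourceGroup \
+  --autoscale-name signalr-autoscale \
+  --condition "ConnectionQuotaUtilization > 80 avg 5m" \
+  --scale out 1
+```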
## Related resources -- [Aggregation types in Azure Monitor](../azure-monitor/essentials/metrics-supported.md#microsoftsignalrservicesignalr )
+- [Automatically scale units of an Azure SignalR Service](signalr-howto-scale-autoscale.md)
+- [Azure Monitor Metrics](../azure-monitor/essentials/data-platform-metrics.md)
+- [Understanding metrics aggregation](../azure-monitor/essentials/metrics-aggregation-explained.md)
+- [Use diagnostic logs to monitor SignalR Service](signalr-howto-diagnostic-logs.md)
azure-signalr Signalr Howto Scale Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-scale-autoscale.md
description: Learn how to autoscale Azure SignalR Service.
- Previously updated : 02/11/2022 Last updated : 06/06/2022 # Automatically scale units of an Azure SignalR Service
-Autoscale allows you to have the right unit count to handle the load on your application. It allows you to add resources to handle increases in load and also save money by removing resources that are sitting idle. See [Overview of autoscale in Microsoft Azure](../azure-monitor/autoscale/autoscale-overview.md) to learn more about the Autoscale feature of Azure Monitor.
> [!IMPORTANT]
-> This article applies to only the **Premium** tier of Azure SignalR Service.
+> Autoscaling is only available in the Azure SignalR Service Premium tier.
-By using the Autoscale feature for Azure SignalR Service, you can specify a minimum and maximum number of units and add or remove units automatically based on a set of rules.
+Azure SignalR Service Premium tier supports an *autoscale* feature, which is an implementation of [Azure Monitor autoscale](../azure-monitor/autoscale/autoscale-overview.md). Autoscale allows you to automatically scale the unit count for your SignalR Service to match the actual load on the service. Autoscale can help you optimize performance and cost for your application.
-For example, you can implement the following scaling scenarios using the Autoscale feature.
+Azure SignalR adds its own [service metrics](concept-metrics.md). However, most of the user interface is shared and common to other [Azure services that support autoscaling](../azure-monitor/autoscale/autoscale-overview.md#supported-services-for-autoscale). If you're new to the subject of Azure Monitor Metrics, review [Azure Monitor Metrics aggregation and display explained](../azure-monitor/essentials/metrics-aggregation-explained.md) before digging into SignalR Service Metrics.
-- Increase units when the Connection Quota Utilization above 70%.
-- Decrease units when the Connection Quota Utilization below 20%.
-- Use more units during business hours and fewer during off hours.
+## Understanding autoscale in SignalR Service
-This article shows you how you can automatically scale units in the Azure portal.
+Autoscale allows you to set conditions that will dynamically change the units allocated to SignalR Service while the service is running. Autoscale conditions are based on metrics, such as **Server Load**. Autoscale can also be configured to run on a schedule, such as every day between certain hours.
+For example, you can implement the following scaling scenarios using autoscale.
-## Autoscale setting page
-First, follow these steps to navigate to the **Scale out** page for your Azure SignalR Service.
+- Increase units when the **Connection Quota Utilization** is above 70%.
+- Decrease units when the **Server Load** is below 20%.
+- Create a schedule to add more units during peak hours and reduce units during off hours.
-1. In your browser, open the [Azure portal](https://portal.azure.com).
+Multiple factors affect the performance of SignalR Service. No one metric provides a complete view of system performance. For example, if you're sending a large number of messages you might need to scale out even though the connection quota is relatively low. The combination of both **Connection Quota Utilization** and **Server Load** gives an indication of overall system load. The following guidelines apply.
-2. In your SignalR Service page, from the left menu, select **Scale out**.
+- Scale out if the **Connection Quota Utilization** is over 80-90%. Scaling out before the connection quota is exhausted ensures that you'll have sufficient buffer to accept new connections before the scale-out takes effect.
+- Scale out if the **Server Load** is over 80-90%. Scaling early ensures that the service has enough capacity to maintain performance during the scale-out operation.
-3. Make sure the resource is in Premium Tier and you will see a **Custom autoscale** setting.
+The autoscale operation usually takes effect 3-5 minutes after it's triggered. It's important not to change the units too often. A good rule of thumb is to allow 30 minutes from the previous autoscale before performing another autoscale operation. In some cases, you might need to experiment to find the optimal autoscale interval.
+## Custom autoscale settings
-## Custom autoscale - Default condition
-You can configure automatic scaling of units by using conditions. This scale condition is executed when none of the other scale conditions match. You can set the default condition in one of the following ways:
+Open the autoscale settings page:
-- Scale based on a metric-- Scale to specific units
+1. Go to the [Azure portal](https://portal.azure.com).
+1. Open the **SignalR** service page.
+1. From the menu on the left, under **Settings** choose **Scale out**.
+1. Select the **Configure** tab. If you have a Premium tier SignalR instance, you'll see two options for **Choose how to scale your resource**:
+ - **Manual scale**, which lets you manually change the number of units.
+ - **Custom autoscale**, which lets you create autoscale conditions based on metrics and/or a time schedule.
-You can't set a schedule to autoscale on a specific days or date range for a default condition. This scale condition is executed when none of the other scale conditions with schedules match.
+1. Choose **Custom autoscale**. Use this page to manage the autoscale conditions for your Azure SignalR service.
+
+### Default scale condition
+
+When you open custom autoscale settings for the first time, you'll see the **Default** scale condition already created for you. This scale condition is executed when none of the other scale conditions match the criteria set for them. You can't delete the **Default** condition, but you can rename it, change the rules, and change the action taken by autoscale.
+
+You can't set the default condition to autoscale on specific days or a date range. The default condition only supports scaling to a unit range. To scale according to a schedule, you'll need to add a new scale condition.
+
+Autoscale doesn't take effect until you save the default condition for the first time after selecting **Custom autoscale**.
+
+## Add or change a scale condition
+
+There are two options for how to scale your Azure SignalR resource:
+
+- **Scale based on a metric** - Scale within unit limits based on a dynamic metric. One or more scale rules are defined to set the criteria used to evaluate the metric.
+- **Scale to specific units** - Scale to a specific number of units based on a date range or recurring schedule.
### Scale based on a metric
-The following procedure shows you how to add a condition to automatically increase units (scale out) when the Connection Quota Utilization is greater than 70% and decrease units (scale in) when the Connection Quota Utilization is less than 20%. Increments or decrements are done between available units.
-1. On the **Scale out** page, select **Custom autoscale** for the **Choose how to scale your resource** option.
-1. Select **Scale based on a metric** for **Scale mode**.
-1. Select **+ Add a rule**.
+The following procedure shows you how to add a condition to increase units (scale out) when the Connection Quota Utilization is greater than 70% and decrease units (scale in) when the Connection Quota Utilization is less than 20%. Increments or decrements are done between available units.
- :::image type="content" source="./media/signalr-howto-scale-autoscale/default-autoscale.png" alt-text="Default - scale based on a metric":::
+1. On the **Scale out** page, select **Custom autoscale** for the **Choose how to scale your resource** option.
+1. Select **Scale based on a metric** for **Scale mode**.
+1. Select **+ Add a rule**.
+ :::image type="content" source="./media/signalr-howto-scale-autoscale/default-autoscale.png" alt-text="Screenshot of custom rule based on a metric.":::
1. On the **Scale rule** page, follow these steps:
- 1. Select a metric from the **Metric name** drop-down list. In this example, it's **Connection Quota Utilization**.
- 1. Select an operator and threshold values. In this example, they're **Greater than** and **70** for **Metric threshold to trigger scale action**.
- 1. Select an **operation** in the **Action** section. In this example, it's set to **Increase**.
+ 1. Select a metric from the **Metric name** drop-down list. In this example, it's **Connection Quota Utilization**.
+ 1. Select an operator and threshold values. In this example, they're **Greater than** and **70** for **Metric threshold to trigger scale action**.
+ 1. Select an **operation** in the **Action** section. In this example, it's set to **Increase**.
1. Then, select **Add**
-
- :::image type="content" source="./media/signalr-howto-scale-autoscale/default-scale-out.png" alt-text="Default - scale out if Connection Quota Utilization is greater than 70%":::
+ :::image type="content" source="./media/signalr-howto-scale-autoscale/default-scale-out.png" alt-text="Screenshot of default autoscale rule screen.":::
1. Select **+ Add a rule** again, and follow these steps on the **Scale rule** page:
- 1. Select a metric from the **Metric name** drop-down list. In this example, it's **Connection Quota Utilization**.
- 1. Select an operator and threshold values. In this example, they're **Less than** and **20** for **Metric threshold to trigger scale action**.
- 1. Select an **operation** in the **Action** section. In this example, it's set to **Decrease**.
- 1. Then, select **Add**
+ 1. Select a metric from the **Metric name** drop-down list. In this example, it's **Connection Quota Utilization**.
+ 1. Select an operator and threshold values. In this example, they're **Less than** and **20** for **Metric threshold to trigger scale action**.
+ 1. Select an **operation** in the **Action** section. In this example, it's set to **Decrease**.
+ 1. Then, select **Add**
+ :::image type="content" source="./media/signalr-howto-scale-autoscale/default-scale-in.png" alt-text="Screenshot Connection Quota Utilization scale rule.":::
- :::image type="content" source="./media/signalr-howto-scale-autoscale/default-scale-in.png" alt-text="Default - scale in if Connection Quota Utilization is less than 20%":::
+1. Set the **minimum**, **maximum**, and **default** number of units.
+1. Select **Save** on the toolbar to save the autoscale setting.
-1. Set the **minimum** and **maximum** and **default** number of units.
+### Scale to specific units
-1. Select **Save** on the toolbar to save the autoscale setting.
-
-### Scale to specific number of units
-Follow these steps to configure the rule to scale to a specific units. Again, the default condition is applied when none of the other scale conditions match.
+Follow these steps to configure the rule to scale to a specific number of units.
-1. On the **Scale out** page, select **Custom autoscale** for the **Choose how to scale your resource** option.
-1. Select **Scale to a specific units** for **Scale mode**.
-1. For **Units**, select the number of default units.
+1. On the **Scale out** page, select **Custom autoscale** for the **Choose how to scale your resource** option.
+1. Select **Scale to a specific units** for **Scale mode**.
+1. For **Units**, select the number of default units.
+ :::image type="content" source="./media/signalr-howto-scale-autoscale/default-specific-units.png" alt-text="Screenshot of scale rule criteria.":::
- :::image type="content" source="./media/signalr-howto-scale-autoscale/default-specific-units.png" alt-text="Default - scale to specific units":::
+## Add more conditions
-## Custom autoscale - Additional conditions
-The previous section shows you how to add a default condition for the autoscale setting. This section shows you how to add more conditions to the autoscale setting. For these additional non-default conditions, you can set a schedule based on specific days of a week or a date range.
+The previous section showed you how to add a default condition for the autoscale setting. This section shows you how to add more conditions to the autoscale setting.
-### Scale based on a metric
-1. On the **Scale out** page, select **Custom autoscale** for the **Choose how to scale your resource** option.
-1. Select **Add a scale condition** under the **Default** block.
-
- :::image type="content" source="./media/signalr-howto-scale-autoscale/additional-add-condition.png" alt-text="Custom - add a scale condition link":::
-1. Confirm that the **Scale based on a metric** option is selected.
-1. Select **+ Add a rule** to add a rule to increase units when the **Connection Quota Utilization** goes above 70%. Follow steps from the [default condition](#custom-autoscaledefault-condition) section.
-5. Set the **minimum** and **maximum** and **default** number of units.
-6. You can also set a **schedule** on a custom condition (but not on the default condition). You can either specify start and end dates for the condition (or) select specific days (Monday, Tuesday, and so on.) of a week.
- 1. If you select **Specify start/end dates**, select the **Timezone**, **Start date and time** and **End date and time** (as shown in the following image) for the condition to be in effect.
- 1. If you select **Repeat specific days**, select the days of the week, timezone, start time, and end time when the condition should apply.
+1. On the **Scale out** page, select **Custom autoscale** for the **Choose how to scale your resource** option.
+1. Select **Add a scale condition** under the **Default** block.
+ :::image type="content" source="./media/signalr-howto-scale-autoscale/additional-add-condition.png" alt-text="Screenshot of custom scale rule screen.":::
+1. Confirm that the **Scale based on a metric** option is selected.
+1. Select **+ Add a rule** to add a rule to increase units when the **Connection Quota Utilization** goes above 70%. Follow the steps from the [default condition](#default-scale-condition) section.
+1. Set the **minimum** and **maximum** and **default** number of units.
+1. You can also set a **schedule** on a custom condition (but not on the default condition). You can either specify start and end dates for the condition, or select specific days of the week (Monday, Tuesday, and so on).
+ 1. If you select **Specify start/end dates**, select the **Timezone**, **Start date and time** and **End date and time** (as shown in the following image) for the condition to be in effect.
+ 1. If you select **Repeat specific days**, select the days of the week, timezone, start time, and end time when the condition should apply.
+
+## Next steps
+
+For more information about managing autoscale from the Azure CLI, see [**az monitor autoscale**](/cli/azure/monitor/autoscale?view=azure-cli-latest&preserve-view=true).
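The same configuration can be sketched as CLI commands. This is not a definitive setup: the resource names and IDs are placeholders, and the metric identifier `ConnectionQuotaUtilization` is an assumption to verify against the SignalR metrics reference.

```azurecli
# Sketch only: names, IDs, and the metric identifier are assumptions.
az monitor autoscale create \
  --resource-group exampleRG \
  --resource "/subscriptions/<sub-id>/resourceGroups/exampleRG/providers/Microsoft.SignalRService/signalR/<signalr-name>" \
  --name signalr-autoscale \
  --min-count 1 --max-count 10 --count 1

# Scale out by one unit when average Connection Quota Utilization exceeds 70%.
az monitor autoscale rule create \
  --resource-group exampleRG \
  --autoscale-name signalr-autoscale \
  --condition "ConnectionQuotaUtilization > 70 avg 5m" \
  --scale out 1

# Scale in by one unit when it drops below 20%.
az monitor autoscale rule create \
  --resource-group exampleRG \
  --autoscale-name signalr-autoscale \
  --condition "ConnectionQuotaUtilization < 20 avg 5m" \
  --scale in 1
```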
azure-video-indexer Deploy With Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-arm-template.md
Title: Deploy Azure Video Indexer with ARM template
-description: In this tutorial you will create an Azure Video Indexer account by using Azure Resource Manager (ARM) template.
+description: Learn how to create an Azure Video Indexer account by using an Azure Resource Manager (ARM) template.
Last updated 05/23/2022
## Overview
-In this tutorial you will create an Azure Video Indexer account by using Azure Resource Manager (ARM) template (preview).
+In this tutorial, you will create an Azure Video Indexer account by using an Azure Resource Manager (ARM) template (preview).
The resource will be deployed to your subscription and will create the Azure Video Indexer resource based on parameters defined in the avam.template file.

> [!NOTE]
> This sample is *not* for connecting an existing Azure Video Indexer classic account to an ARM-based Azure Video Indexer account.
> For full documentation on Azure Video Indexer API, visit the [Developer portal](https://aka.ms/avam-dev-portal) page.
-> The current API Version is "2021-10-27-preview". Check this Repo from time to time to get updates on new API Versions.
+> For the latest API version for Microsoft.VideoIndexer, see the [template reference](/azure/templates/microsoft.videoindexer/accounts?tabs=bicep).
## Prerequisites
The resource will be deployed to your subscription and will create the Azure Vid
* Create a new Resource group in the same location as your Azure Video Indexer account, using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet.

  ```powershell
  New-AzResourceGroup -Name myResourceGroup -Location eastus
  ```
The resource will be deployed to your subscription and will create the Azure Vid
```

> [!NOTE]
-> If you would like to work with bicep format, inspect the [bicep file](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/ARM-Quick-Start/avam.template.bicep) on this repo.
+> If you would like to work with bicep format, see [Deploy by using Bicep](./deploy-with-bicep.md).
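With a resource group in place, the template can also be deployed from the Azure CLI. The following sketch is only an illustration: it assumes the template is saved locally as avam.template.json and borrows the parameter names from the Bicep sample; confirm both against the Parameters section before running.

```azurecli
# Sketch only: the local file name and parameter names are assumptions.
az deployment group create \
  --resource-group myResourceGroup \
  --template-file avam.template.json \
  --parameters accountName=<account-name> \
               managedIdentityResourceId=<managed-identity> \
               mediaServiceAccountResourceId=<media-service-account-resource-id>
```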
## Parameters
azure-video-indexer Deploy With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-bicep.md
+
+ Title: Deploy Azure Video Indexer by using Bicep
+description: Learn how to create an Azure Video Indexer account by using a Bicep file.
+ Last updated : 06/06/2022+++
+# Tutorial: Deploy Azure Video Indexer by using Bicep
+
+In this tutorial, you create an Azure Video Indexer account by using [Bicep](../azure-resource-manager/bicep/overview.md).
+
+> [!NOTE]
+> This sample is *not* for connecting an existing Azure Video Indexer classic account to an ARM-based Azure Video Indexer account.
+> For full documentation on Azure Video Indexer API, visit the [Developer portal](https://aka.ms/avam-dev-portal) page.
+> For the latest API version for Microsoft.VideoIndexer, see the [template reference](/azure/templates/microsoft.videoindexer/accounts?tabs=bicep).
+
+## Prerequisites
+
+* An Azure Media Services (AMS) account. You can create one for free through the [Create AMS Account](/azure/media-services/latest/account-create-how-to).
+
+## Review the Bicep file
+
+The Bicep file used in this tutorial is:
++
+One Azure resource is defined in the bicep file:
+
+* [Microsoft.videoIndexer/accounts](/azure/templates/microsoft.videoindexer/accounts?tabs=bicep)
+
+Check [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates) for additional up-to-date Bicep samples.
+
+## Deploy the sample
+
+1. Save the Bicep file as main.bicep to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters accountName=<account-name> managedIdentityResourceId=<managed-identity> mediaServiceAccountResourceId=<media-service-account-resource-id>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -accountName "<account-name>" -managedIdentityResourceId "<managed-identity>" -mediaServiceAccountResourceId "<media-service-account-resource-id>"
+ ```
+
+
+
+ The location must be the same location as the existing Azure media service. You need to provide values for the parameters:
+
+ * Replace **\<account-name\>** with the name of the new Azure Video Indexer account.
+ * Replace **\<managed-identity\>** with the resource ID of the managed identity used to grant access to the Azure Media Services (AMS) account.
+ * Replace **\<media-service-account-resource-id\>** with the resource ID of the existing Azure Media Services account.
+
+## Reference documentation
+
+If you're new to Azure Video Indexer, see:
+
+* [Azure Video Indexer Documentation](./index.yml)
+* [Azure Video Indexer Developer Portal](https://api-portal.videoindexer.ai/)
+* After completing this tutorial, head to other Azure Video Indexer samples, described on [README.md](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/README.md)
+
+If you're new to Bicep deployment, see:
+
+* [Azure Resource Manager documentation](../azure-resource-manager/index.yml)
+* [Deploy Resources with Bicep and Azure PowerShell](../azure-resource-manager/bicep/deploy-powershell.md)
+* [Deploy Resources with Bicep and Azure CLI](../azure-resource-manager/bicep/deploy-cli.md)
+
+## Next steps
+
+[Connect an existing classic paid Azure Video Indexer account to ARM-based account](connect-classic-account-to-arm.md)
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
Azure VMware Solution will apply important updates starting in March 2021. You'l
All new Azure VMware Solution private clouds in regions (East US2, Canada Central, North Europe, and Japan East), are now deployed in with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
-Any existing private clouds in the above mentioned regions will also be upgraded to these versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html).
+Any existing private clouds in the above-mentioned regions will also be upgraded to these versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-vcenter-server-70-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html).
## May 23, 2022
azure-vmware Vrealize Operations For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/vrealize-operations-for-azure-vmware-solution.md
Title: Configure vRealize Operations for Azure VMware Solution
-description: Learn how to set up vRealize Operations for your Azure VMware Solution private cloud.
+description: Learn how to set up vRealize Operations for your Azure VMware Solution private cloud.
Last updated 04/11/2022
Last updated 04/11/2022
# Configure vRealize Operations for Azure VMware Solution
-vRealize Operations is an operations management platform that allows VMware infrastructure administrators to monitor system resources. These system resources could be application-level or infrastructure level (both physical and virtual) objects. Most VMware administrators have used vRealize Operations to monitor and manage the VMware private cloud components ΓÇô vCenter Server, ESXi, NSX-T Data Center, vSAN, and VMware HCX. Each provisioned Azure VMware Solution private cloud includes a dedicated vCenter Server, NSX-T Data Center, vSAN, and HCX deployment.
+vRealize Operations is an operations management platform that allows VMware infrastructure administrators to monitor system resources. These system resources could be application-level or infrastructure level (both physical and virtual) objects. Most VMware administrators have used vRealize Operations to monitor and manage the VMware private cloud components ΓÇô vCenter Server, ESXi, NSX-T Data Center, vSAN, and VMware HCX. Each provisioned Azure VMware Solution private cloud includes a dedicated vCenter Server, NSX-T Data Center, vSAN, and HCX deployment.
Thoroughly review [Before you begin](#before-you-begin) and [Prerequisites](#prerequisites) first. Then, we'll walk you through the two typical deployment topologies:
Thoroughly review [Before you begin](#before-you-begin) and [Prerequisites](#pre
> * [vRealize Operations running on Azure VMware Solution deployment](#vrealize-operations-running-on-azure-vmware-solution-deployment) ## Before you begin
-* Review the [vRealize Operations Manager product documentation](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-7FFC61A0-7562-465C-A0DC-46D092533984.html) to learn more about deploying vRealize Operations.
+* Review the [vRealize Operations Manager product documentation](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-7FFC61A0-7562-465C-A0DC-46D092533984.html) to learn more about deploying vRealize Operations.
* Review the basic Azure VMware Solution Software-Defined Datacenter (SDDC) [tutorial series](tutorial-network-checklist.md).
-* Optionally, review the [vRealize Operations Remote Controller](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-263F9219-E801-4383-8A59-E84F3D01ED6B.html) product documentation for the on-premises vRealize Operations managing Azure VMware Solution deployment option.
+* Optionally, review the [vRealize Operations Remote Controller](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-263F9219-E801-4383-8A59-E84F3D01ED6B.html) product documentation for the on-premises vRealize Operations managing Azure VMware Solution deployment option.
## Prerequisites
Thoroughly review [Before you begin](#before-you-begin) and [Prerequisites](#pre
## On-premises vRealize Operations managing Azure VMware Solution deployment
-Most customers have an existing on-premise deployment of vRealize Operations to manage one or more on-premises vCenter Server domains. When they provision an Azure VMware Solution private cloud, they connect their on-premises environment with their private cloud using an Azure ExpressRoute or a Layer 3 VPN solution.
+Most customers have an existing on-premises deployment of vRealize Operations to manage one or more on-premises vCenter Server domains. When they provision an Azure VMware Solution private cloud, they connect their on-premises environment with their private cloud using an Azure ExpressRoute or a Layer 3 VPN solution.
:::image type="content" source="media/vrealize-operations-manager/vrealize-operations-deployment-option-1.png" alt-text="Diagram showing the on-premises vRealize Operations managing Azure VMware Solution deployment." border="false":::
-To extend the vRealize Operations capabilities to the Azure VMware Solution private cloud, you create an adapter [instance for the private cloud resources](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.config.doc/GUID-640AD750-301E-4D36-8293-1BFEB67E2600.html). It collects data from the Azure VMware Solution private cloud and brings it into on-premises vRealize Operations. The on-premises vRealize Operations Manager instance can directly connect to the vCenter Server and NSX-T Manager on Azure VMware Solution. Optionally, you can deploy a vRealize Operations Remote Collector on the Azure VMware Solution private cloud. The collector compresses and encrypts the data collected from the private cloud before it's sent over the ExpressRoute or VPN network to the vRealize Operations Manager running on-premise.
+To extend the vRealize Operations capabilities to the Azure VMware Solution private cloud, you create an adapter [instance for the private cloud resources](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.config.doc/GUID-640AD750-301E-4D36-8293-1BFEB67E2600.html). It collects data from the Azure VMware Solution private cloud and brings it into on-premises vRealize Operations. The on-premises vRealize Operations Manager instance can directly connect to the vCenter Server and NSX-T Manager on Azure VMware Solution. Optionally, you can deploy a vRealize Operations Remote Collector on the Azure VMware Solution private cloud. The collector compresses and encrypts the data collected from the private cloud before it's sent over the ExpressRoute or VPN network to the vRealize Operations Manager running on-premises.
> [!TIP]
-> Refer to the [VMware documentation](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-7FFC61A0-7562-465C-A0DC-46D092533984.html) for step-by-step guide for installing vRealize Operations Manager.
+> Refer to the [VMware documentation](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-7FFC61A0-7562-465C-A0DC-46D092533984.html) for a step-by-step guide to installing vRealize Operations Manager.
## vRealize Operations running on Azure VMware Solution deployment
-Another option is to deploy an instance of vRealize Operations Manager on a vSphere cluster in the private cloud.
+Another option is to deploy an instance of vRealize Operations Manager on a vSphere cluster in the private cloud.
>[!IMPORTANT] >This option isn't currently supported by VMware. :::image type="content" source="media/vrealize-operations-manager/vrealize-operations-deployment-option-2.png" alt-text="Diagram showing the vRealize Operations running on Azure VMware Solution." border="false":::
-Once the instance has been deployed, you can configure vRealize Operations to collect data from vCenter Server, ESXi, NSX-T Data Center, vSAN, and HCX.
+Once the instance has been deployed, you can configure vRealize Operations to collect data from vCenter Server, ESXi, NSX-T Data Center, vSAN, and HCX.
Once the instance has been deployed, you can configure vRealize Operations to co
- The **cloudadmin@vsphere.local** user in Azure VMware Solution has [limited privileges](concepts-identity.md). Virtual machines (VMs) on Azure VMware Solution doesn't support in-guest memory collection using VMware tools. Active and consumed memory utilization continues to work in this case. - Workload optimization for host-based business intent doesn't work because Azure VMware Solutions manage cluster configurations, including DRS settings. - Workload optimization for the cross-cluster placement within the SDDC using the cluster-based business intent is fully supported with vRealize Operations Manager 8.0 and onwards. However, workload optimization isn't aware of resource pools and places the VMs at the cluster level. A user can manually correct it in the Azure VMware Solution vCenter Server interface.-- You can't sign in to vRealize Operations Manager using your Azure VMware Solution vCenter Server credentials.
+- You can't sign in to vRealize Operations Manager using your Azure VMware Solution vCenter Server credentials.
- Azure VMware Solution doesn't support the vRealize Operations Manager plugin. When you connect the Azure VMware Solution vCenter to vRealize Operations Manager using a vCenter Server Cloud Account, you'll see a warning:
backup Backup Azure Arm Userestapi Createorupdatepolicy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-userestapi-createorupdatepolicy.md
Title: Create backup policies using REST API description: In this article, you'll learn how to create and manage backup policies (schedule and retention) using REST API. Previously updated : 08/21/2018 Last updated : 06/13/2022 ms.assetid: 5ffc4115-0ae5-4b85-a18c-8a942f6d4870+++ # Create Azure Recovery Services backup policies using REST API
For the complete list of definitions in the request body, refer to the [backup p
### Example request body
+#### For Azure VM backup
+ The following request body defines a backup policy for Azure VM backups.
-The policy says:
+This policy:
-- Take a weekly backup every Monday, Wednesday, Thursday at 10:00 AM Pacific Standard Time.-- Retain the backups taken on every Monday, Wednesday, Thursday for one week.-- Retain the backups taken on every first Wednesday and third Thursday of a month for two months (overrides the previous retention conditions, if any).-- Retain the backups taken on fourth Monday and fourth Thursday in February and November for four years (overrides the previous retention conditions, if any).
+- Takes a weekly backup every Monday, Wednesday, Thursday at 10:00 AM Pacific Standard Time.
+- Retains the backups taken on every Monday, Wednesday, Thursday for one week.
+- Retains the backups taken on every first Wednesday and third Thursday of a month for two months (overrides the previous retention conditions, if any).
+- Retains the backups taken on fourth Monday and fourth Thursday in February and November for four years (overrides the previous retention conditions, if any).
```json {
The policy says:
> [!IMPORTANT] > The time formats for schedule and retention support only DateTime. They don't support Time format alone.
+#### For SQL in Azure VM backup
+
+The following is an example request body for SQL in Azure VM backup.
+
+This policy:
+
+- Takes a full backup every day at 13:30 UTC and a log backup every hour.
+- Retains both the daily full backups and the log backups for 30 days.
+
+```json
+"properties": {
+ "backupManagementType": "AzureWorkload",
+ "workLoadType": "SQLDataBase",
+ "settings": {
+ "timeZone": "UTC",
+ "issqlcompression": false,
+ "isCompression": false
+ },
+ "subProtectionPolicy": [
+ {
+ "policyType": "Full",
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunFrequency": "Daily",
+ "scheduleRunTimes": [
+ "2022-02-14T13:30:00Z"
+ ],
+ "scheduleWeeklyFrequency": 0
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "LongTermRetentionPolicy",
+ "dailySchedule": {
+ "retentionTimes": [
+ "2022-02-14T13:30:00Z"
+ ],
+ "retentionDuration": {
+ "count": 30,
+ "durationType": "Days"
+ }
+ }
+ }
+ },
+ {
+ "policyType": "Log",
+ "schedulePolicy": {
+ "schedulePolicyType": "LogSchedulePolicy",
+ "scheduleFrequencyInMins": 60
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "SimpleRetentionPolicy",
+ "retentionDuration": {
+ "count": 30,
+ "durationType": "Days"
+ }
+ }
+ }
+ ],
+ "protectedItemsCount": 0
+ }
+```
+
+The following is an example of a policy that takes a differential backup every day and a full backup once a week.
+
+```json
+"properties": {
+ "backupManagementType": "AzureWorkload",
+ "workLoadType": "SQLDataBase",
+ "settings": {
+ "timeZone": "UTC",
+ "issqlcompression": false,
+ "isCompression": false
+ },
+ "subProtectionPolicy": [
+ {
+ "policyType": "Full",
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunFrequency": "Weekly",
+ "scheduleRunDays": [
+ "Sunday"
+ ],
+ "scheduleRunTimes": [
+ "2022-06-13T19:30:00Z"
+ ],
+ "scheduleWeeklyFrequency": 0
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "LongTermRetentionPolicy",
+ "weeklySchedule": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "retentionTimes": [
+ "2022-06-13T19:30:00Z"
+ ],
+ "retentionDuration": {
+ "count": 104,
+ "durationType": "Weeks"
+ }
+ },
+ "monthlySchedule": {
+ "retentionScheduleFormatType": "Weekly",
+ "retentionScheduleWeekly": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "weeksOfTheMonth": [
+ "First"
+ ]
+ },
+ "retentionTimes": [
+ "2022-06-13T19:30:00Z"
+ ],
+ "retentionDuration": {
+ "count": 60,
+ "durationType": "Months"
+ }
+ },
+ "yearlySchedule": {
+ "retentionScheduleFormatType": "Weekly",
+ "monthsOfYear": [
+ "January"
+ ],
+ "retentionScheduleWeekly": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "weeksOfTheMonth": [
+ "First"
+ ]
+ },
+ "retentionTimes": [
+ "2022-06-13T19:30:00Z"
+ ],
+ "retentionDuration": {
+ "count": 10,
+ "durationType": "Years"
+ }
+ }
+ }
+ },
+ {
+ "policyType": "Differential",
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunFrequency": "Weekly",
+ "scheduleRunDays": [
+ "Monday",
+ "Tuesday",
+ "Wednesday",
+ "Thursday",
+ "Friday",
+ "Saturday"
+ ],
+ "scheduleRunTimes": [
+ "2022-06-13T02:00:00Z"
+ ],
+ "scheduleWeeklyFrequency": 0
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "SimpleRetentionPolicy",
+ "retentionDuration": {
+ "count": 30,
+ "durationType": "Days"
+ }
+ }
+ },
+ {
+ "policyType": "Log",
+ "schedulePolicy": {
+ "schedulePolicyType": "LogSchedulePolicy",
+ "scheduleFrequencyInMins": 120
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "SimpleRetentionPolicy",
+ "retentionDuration": {
+ "count": 15,
+ "durationType": "Days"
+ }
+ }
+ }
+ ],
+ "protectedItemsCount": 0
+ }
+```
+
+#### For SAP HANA in Azure VM backup
+
+The following is an example request body for SAP HANA in Azure VM backup.
+
+This policy:
+
+- Takes a full backup every day at 19:30 UTC and a log backup every 2 hours.
+- Retains the daily backups for 180 days.
+- Retains the weekly backups for 104 weeks.
+- Retains the monthly backups for 60 months.
+- Retains the yearly backups for 10 years.
+- Retains the log backups for 15 days.
+
+```json
+{
+ "properties": {
+ "backupManagementType": "AzureIaasVM",
+ "timeZone": "Pacific Standard Time",
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunFrequency": "Weekly",
+ "scheduleRunTimes": [
+ "2018-01-24T10:00:00Z"
+ ],
+ "scheduleRunDays": [
+ "Monday",
+ "Wednesday",
+ "Thursday"
+ ]
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "LongTermRetentionPolicy",
+ "weeklySchedule": {
+ "daysOfTheWeek": [
+ "Monday",
+ "Wednesday",
+ "Thursday"
+ ],
+ "retentionTimes": [
+ "2018-01-24T10:00:00Z"
+ ],
+ "retentionDuration": {
+ "count": 1,
+ "durationType": "Weeks"
+ }
+ },
+ "monthlySchedule": {
+ "retentionScheduleFormatType": "Weekly",
+ "retentionScheduleWeekly": {
+ "daysOfTheWeek": [
+ "Wednesday",
+ "Thursday"
+ ],
+ "weeksOfTheMonth": [
+ "First",
+ "Third"
+ ]
+ },
+ "retentionTimes": [
+ "2018-01-24T10:00:00Z"
+ ],
+ "retentionDuration": {
+ "count": 2,
+ "durationType": "Months"
+ }
+ },
+ "yearlySchedule": {
+ "retentionScheduleFormatType": "Weekly",
+ "monthsOfYear": [
+ "February",
+ "November"
+ ],
+ "retentionScheduleWeekly": {
+ "daysOfTheWeek": [
+ "Monday",
+ "Thursday"
+ ],
+ "weeksOfTheMonth": [
+ "Fourth"
+ ]
+ },
+ "retentionTimes": [
+ "2018-01-24T10:00:00Z"
+ ],
+ "retentionDuration": {
+ "count": 4,
+ "durationType": "Years"
+ }
+ }
+ }
+ }
+}
+```
+
+The following is an example of a policy that takes a full backup once a week and an incremental backup once a day.
+
+```json
+
+"properties": {
+ "backupManagementType": "AzureWorkload",
+ "workLoadType": "SAPHanaDatabase",
+ "settings": {
+ "timeZone": "UTC",
+ "issqlcompression": false,
+ "isCompression": false
+ },
+ "subProtectionPolicy": [
+ {
+ "policyType": "Full",
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunFrequency": "Weekly",
+ "scheduleRunDays": [
+ "Sunday"
+ ],
+ "scheduleRunTimes": [
+ "2022-06-13T19:30:00Z"
+ ],
+ "scheduleWeeklyFrequency": 0
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "LongTermRetentionPolicy",
+ "weeklySchedule": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "retentionTimes": [
+ "2022-06-13T19:30:00Z"
+ ],
+ "retentionDuration": {
+ "count": 104,
+ "durationType": "Weeks"
+ }
+ },
+ "monthlySchedule": {
+ "retentionScheduleFormatType": "Weekly",
+ "retentionScheduleWeekly": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "weeksOfTheMonth": [
+ "First"
+ ]
+ },
+ "retentionTimes": [
+ "2022-06-13T19:30:00Z"
+ ],
+ "retentionDuration": {
+ "count": 60,
+ "durationType": "Months"
+ }
+ },
+ "yearlySchedule": {
+ "retentionScheduleFormatType": "Weekly",
+ "monthsOfYear": [
+ "January"
+ ],
+ "retentionScheduleWeekly": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "weeksOfTheMonth": [
+ "First"
+ ]
+ },
+ "retentionTimes": [
+ "2022-06-13T19:30:00Z"
+ ],
+ "retentionDuration": {
+ "count": 10,
+ "durationType": "Years"
+ }
+ }
+ }
+ },
+ {
+ "policyType": "Incremental",
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunFrequency": "Weekly",
+ "scheduleRunDays": [
+ "Monday",
+ "Tuesday",
+ "Wednesday",
+ "Thursday",
+ "Friday",
+ "Saturday"
+ ],
+ "scheduleRunTimes": [
+ "2022-06-13T02:00:00Z"
+ ],
+ "scheduleWeeklyFrequency": 0
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "SimpleRetentionPolicy",
+ "retentionDuration": {
+ "count": 30,
+ "durationType": "Days"
+ }
+ }
+ },
+ {
+ "policyType": "Log",
+ "schedulePolicy": {
+ "schedulePolicyType": "LogSchedulePolicy",
+ "scheduleFrequencyInMins": 120
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "SimpleRetentionPolicy",
+ "retentionDuration": {
+ "count": 15,
+ "durationType": "Days"
+ }
+ }
+ }
+ ],
+ "protectedItemsCount": 0
+}
+
+```
++
+#### For Azure File share backup
+
+The following is an example request body for Azure File share backup.
+
+This policy:
+
+- Takes a backup every day at 15:30 UTC.
+- Retains the daily backups for 30 days.
+- Retains the backups taken every Sunday for 12 weeks.
+
+```json
+"properties": {
+ "backupManagementType": "AzureStorage",
+ "workloadType": "AzureFileShare",
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunFrequency": "Daily",
+ "scheduleRunTimes": [
+ "2022-06-13T15:30:00Z"
+ ],
+ "scheduleWeeklyFrequency": 0
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "LongTermRetentionPolicy",
+ "dailySchedule": {
+ "retentionTimes": [
+ "2022-06-13T15:30:00Z"
+ ],
+ "retentionDuration": {
+ "count": 30,
+ "durationType": "Days"
+ }
+ },
+ "weeklySchedule": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "retentionTimes": [
+ "2022-06-13T15:30:00Z"
+ ],
+ "retentionDuration": {
+ "count": 12,
+ "durationType": "Weeks"
+ }
+ }
+ },
+ "timeZone": "UTC",
+ "protectedItemsCount": 0
+ }
+```
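Each of these request bodies is submitted with a PUT against the vault's backup policy endpoint. A minimal sketch using the Azure CLI follows; the subscription, resource group, vault, and policy names are placeholders, and the api-version is an assumption to verify against the current REST API reference.

```azurecli
# Sketch only: all names are placeholders and the api-version is an assumption.
az rest --method put \
  --uri "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.RecoveryServices/vaults/<vault-name>/backupPolicies/<policy-name>?api-version=2021-12-01" \
  --body @policy.json
```

Here, policy.json contains the full request body, for example one of the `"properties"` objects shown above enclosed in braces.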
## Responses

The backup policy creation/update is an [asynchronous operation](../azure-resource-manager/management/async-operations.md). It means this operation creates another operation that needs to be tracked separately.
-It returns two responses: 202 (Accepted) when another operation is created, and then 200 (OK) when that operation completes.
+It returns two responses: 202 (Accepted) when another operation is created, followed by 200 (OK) when that operation completes.
|Name |Type |Description |
||||
If a policy is already being used to protect an item, any update in the policy w
For more information on the Azure Backup REST APIs, see the following documents:

- [Azure Recovery Services provider REST API](/rest/api/recoveryservices/)
-- [Get started with Azure REST API](/rest/api/azure/)
+- [Get started with Azure REST API](/rest/api/azure/)
baremetal-infrastructure Concepts Baremetal Infrastructure Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/concepts-baremetal-infrastructure-overview.md
Last updated 09/27/2021
Microsoft Azure offers a cloud infrastructure with a wide range of integrated cloud services to meet your business needs. In some cases, though, you may need to run services on bare metal servers without a virtualization layer. You may need root access and control over the operating system (OS). To meet this need, Azure offers BareMetal Infrastructure for several high-value, mission-critical applications. BareMetal Infrastructure is made up of dedicated BareMetal instances (compute instances). It features:-- High-performance storage appropriate to the application (NFS, ISCSI, and Fiber Channel). Storage can also be shared across BareMetal instances to enable features like scale-out clusters or high availability pairs with STONITH. -- A set of function-specific virtual LANs (VLANs) in an isolated environment.
-
+- High-performance storage appropriate to the application (NFS, ISCSI, and Fiber Channel). Storage can also be shared across BareMetal instances to enable features like scale-out clusters or high availability pairs with STONITH.
+- A set of function-specific virtual LANs (VLANs) in an isolated environment.
+ This environment also has special VLANs you can access if you're running virtual machines (VMs) on one or more Azure Virtual Networks (VNets) in your Azure subscription. The entire environment is represented as a resource group in your Azure subscription. BareMetal Infrastructure is offered in over 30 SKUs from 2-socket to 24-socket servers and memory ranging from 1.5 TBs up to 24 TBs. A large set of SKUs is also available with Optane memory. Azure offers the largest range of bare metal instances in a hyperscale cloud.
-## Why BareMetal Infrastructure?
+## Why BareMetal Infrastructure?
Some workloads in the enterprise consist of technologies that just aren't designed to run in a typical virtualized cloud setting. They require special architecture, certified hardware, or extraordinarily large sizes. Although those technologies have the most sophisticated data protection and business continuity features, those features aren't built for the virtualized cloud. They're more sensitive to latencies and noisy neighbors and require more control over change management and maintenance activity. BareMetal Infrastructure is built, certified, and tested for a select set of such applications. Azure was the first to offer such solutions, and has since led with the largest portfolio and most sophisticated systems.
-### BareMetal benefits
+### BareMetal benefits
-BareMetal Infrastructure is intended for critical workloads that require certification to run your enterprise applications. The BareMetal instances are dedicated only to you, and you'll have full access (root access) to the operating system (OS). You manage OS and application installation according to your needs. For security, the instances are provisioned within your Azure Virtual Network (VNet) with no internet connectivity. Only services running on your virtual machines (VMs), and other Azure services in same Tier 2 network, can communicate with your BareMetal instances.
+BareMetal Infrastructure is intended for critical workloads that require certification to run your enterprise applications. The BareMetal instances are dedicated only to you, and you'll have full access (root access) to the operating system (OS). You manage OS and application installation according to your needs. For security, the instances are provisioned within your Azure Virtual Network (VNet) with no internet connectivity. Only services running on your virtual machines (VMs), and other Azure services in same Tier 2 network, can communicate with your BareMetal instances.
BareMetal Infrastructure offers these benefits:
BareMetal Infrastructure offers these benefits:
- Non-hypervised BareMetal instance, single tenant ownership - Low latency between Azure hosted application VMs to BareMetal instances (0.35 ms) - All Flash SSD and NVMe
- - Up to 1 PB/tenant
- - IOPS up to 1.2 million/tenant
+ - Up to 1 PB/tenant
+ - IOPS up to 1.2 million/tenant
- 40/100-GB network bandwidth - Accessible via NFS, ISCSI, and FC - Redundant power, power supplies, NICs, TORs, ports, WANs, storage, and management
BareMetal Infrastructure offers these benefits:
BareMetal Infrastructure offers multiple SKUs certified for specialized workloads. Use the workload-specific SKUs to meet your needs. -- Large instances ΓÇô Ranging from two-socket to four-socket systems. -- Very Large instances ΓÇô Ranging from 4-socket to 20-socket systems.
+- Large instances ΓÇô Ranging from two-socket to four-socket systems.
+- Very Large instances ΓÇô Ranging from 4-socket to 20-socket systems.
BareMetal Infrastructure for specialized workloads is available in the following Azure regions: - West Europe
BareMetal Infrastructure for specialized workloads is available in the following
>[!NOTE] >**Zones support** refers to availability zones within a region where BareMetal instances can be deployed across zones for high resiliency and availability. This capability enables support for multi-site active-active scaling.
-## Managing BareMetal instances in Azure
+## Managing BareMetal instances in Azure
-Depending on your needs, the application topologies of BareMetal Infrastructure can be complex. You may deploy multiple instances in one or more locations. The instances can have shared or dedicated storage, and specialized LAN and WAN connections. So for BareMetal Infrastructure, Azure offers a consultation by a CSA/GBB in the field to work with you.
+Depending on your needs, the application topologies of BareMetal Infrastructure can be complex. You may deploy multiple instances in one or more locations. The instances can have shared or dedicated storage, and specialized LAN and WAN connections. So for BareMetal Infrastructure, Azure offers a consultation by a CSA/GBB in the field to work with you.
By the time your BareMetal Infrastructure is provisioned, the OS, networks, storage volumes, placements in zones and regions, and WAN connections between locations have already been configured. You're set to register your OS licenses (BYOL), configure the OS, and install the application layer.
-You'll see all the BareMetal resources, and their state and attributes, in the Azure portal. You can also operate the instances and open service requests and support tickets from there.
+You'll see all the BareMetal resources, and their state and attributes, in the Azure portal. You can also operate the instances and open service requests and support tickets from there.
## Operational model
-BareMetal Infrastructure is ISO 27001, ISO 27017, SOC 1, and SOC 2 compliant. It also uses a bring-your-own-license (BYOL) model: OS, specialized workload, and third-party applications.
+BareMetal Infrastructure is ISO 27001, ISO 27017, SOC 1, and SOC 2 compliant. It also uses a bring-your-own-license (BYOL) model: OS, specialized workload, and third-party applications.
As soon as you receive root access and full control, you assume responsibility for: - Designing and implementing backup and recovery solutions, high availability, and disaster recovery. - Licensing, security, and support for the OS and third-party software. Microsoft is responsible for:-- Providing the hardware for specialized workloads.
+- Providing the hardware for specialized workloads.
- Provisioning the OS. :::image type="content" source="media/concepts-baremetal-infrastructure-overview/baremetal-support-model.png" alt-text="Diagram of BareMetal Infrastructure support model." border="false":::
Within the multi-tenant infrastructure of the BareMetal stamp, customers are dep
## Operating system
-During the provisioning of the BareMetal instance, you can select the OS you want to install on the machines.
+During the provisioning of the BareMetal instance, you can select the OS you want to install on the machines.
>[!NOTE] >Remember, BareMetal Infrastructure is a BYOL model.
The available Linux OS versions are:
## Storage
-BareMetal Infrastructure provides highly redundant NFS storage and Fiber Channel storage. The infrastructure offers deep integration for enterprise workloads like SAP, SQL, and more. It also provides application-consistent data protection and data-management capabilities. The self-service management tools offer space-efficient snapshot, cloning, and granular replication capabilities along with single pane of glass monitoring. The infrastructure enables zero RPO and RTO capabilities for data availability and business continuity needs.
+BareMetal Infrastructure provides highly redundant NFS storage and Fiber Channel storage. The infrastructure offers deep integration for enterprise workloads like SAP, SQL, and more. It also provides application-consistent data protection and data-management capabilities. The self-service management tools offer space-efficient snapshot, cloning, and granular replication capabilities along with single pane of glass monitoring. The infrastructure enables zero RPO and RTO capabilities for data availability and business continuity needs.
The storage infrastructure offers: - Up to 4 x 100-GB uplinks. - Up to 32-GB Fiber channel uplinks. - All flash SSD and NVMe drive. - Ultra-low latency and high throughput.-- Scales up to 4 PB of raw storage.
+- Scales up to 4 PB of raw storage.
- Up to 11 million IOPS.
-These Data access protocols are supported:
-- iSCSI -- NFS (v3 or v4) -- Fiber Channel -- NVMe over FC
+These Data access protocols are supported:
+- iSCSI
+- NFS (v3 or v4)
+- Fiber Channel
+- NVMe over FC
## Networking
The architecture of Azure network services is a key component for a successful d
- The ExpressRoute circuit that connects on-premises to Azure should have a minimum bandwidth of 1 Gbps or higher. - Extended Active Directory and DNS in Azure, or completely running in Azure.
-ExpressRoute lets you extend your on-premises network into the Microsoft cloud over a private connection with a connectivity provider's help. You can use **ExpressRoute Local** for cost-effective data transfer between your on-premises location and the Azure region you want. To extend connectivity across geopolitical boundaries, you can enable **ExpressRoute Premium**.
+ExpressRoute lets you extend your on-premises network into the Microsoft cloud over a private connection with a connectivity provider's help. You can use **ExpressRoute Local** for cost-effective data transfer between your on-premises location and the Azure region you want. To extend connectivity across geopolitical boundaries, you can enable **ExpressRoute Premium**.
BareMetal instances are provisioned within your Azure VNet server IP address range. :::image type="content" source="media/concepts-baremetal-infrastructure-overview/baremetal-infrastructure-diagram.png" alt-text="Architectural diagram of Azure BareMetal Infrastructure diagram." lightbox="media/concepts-baremetal-infrastructure-overview/baremetal-infrastructure-diagram.png" border="false"::: The architecture shown is divided into three sections:-- **Left:** Shows the customer on-premise infrastructure that runs different applications, connecting through the partner or local edge router like Equinix. For more information, see [Connectivity providers and locations: Azure ExpressRoute](../expressroute/expressroute-locations.md).
+- **Left:** Shows the customer on-premises infrastructure that runs different applications, connecting through the partner or local edge router like Equinix. For more information, see [Connectivity providers and locations: Azure ExpressRoute](../expressroute/expressroute-locations.md).
- **Center:** Shows [ExpressRoute](../expressroute/expressroute-introduction.md) provisioned using your Azure subscription offering connectivity to Azure edge network. - **Right:** Shows Azure IaaS, and in this case, use of VMs to host your applications, which are provisioned within your Azure virtual network.-- **Bottom:** Shows using your ExpressRoute Gateway enabled with [ExpressRoute FastPath](../expressroute/about-fastpath.md) for BareMetal connectivity offering low latency.
+- **Bottom:** Shows using your ExpressRoute Gateway enabled with [ExpressRoute FastPath](../expressroute/about-fastpath.md) for BareMetal connectivity offering low latency.
>[!TIP] >To support this, your ExpressRoute Gateway should be UltraPerformance. For more information, see [About ExpressRoute virtual network gateways](../expressroute/expressroute-about-virtual-network-gateways.md).
cloud-services-extended-support Available Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/available-sizes.md
This article describes the available virtual machine sizes for Cloud Services (e
||| |[Av2](../virtual-machines/av2-series.md) | 100 | |[D](../virtual-machines/sizes-previous-gen.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json#d-series) | 160 |
-|[Dv2](../virtual-machines/dv2-dsv2-series.md) | 160 - 190* |
+|[Dv2](../virtual-machines/dv2-dsv2-series.md) | 210 - 250* |
|[Dv3](../virtual-machines/dv3-dsv3-series.md) | 160 - 190* | |[Dav4](../virtual-machines/dav4-dasv4-series.md) | 230 - 260 | |[Eav4](../virtual-machines/eav4-easv4-series.md) | 230 - 260 |
cognitive-services Computer Vision How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md
Title: Install Read OCR Docker containers from Computer Vision
description: Use the Read OCR Docker containers from Computer Vision to extract text from images and documents, on-premises. -+ Previously updated : 10/14/2021- Last updated : 06/13/2022+ keywords: on-premises, OCR, Docker, container
keywords: on-premises, OCR, Docker, container
[!INCLUDE [container hosting on the Microsoft Container Registry](../containers/includes/gated-container-hosting.md)]
-Containers enable you to run the Computer Vision APIs in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run Computer Vision containers.
+Containers enable you to run the Computer Vision APIs in your own environment. Containers are great for specific security and data governance requirements. In this article, you'll learn how to download, install, and run the Read (OCR) container.
-The *Read* OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API how-to guide](how-to/call-read-api.md).
+The Read container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API how-to guide](how-to/call-read-api.md).
## What's new The `3.2-model-2022-04-30` GA version of the Read container is available with support for [164 languages and other enhancements](./whats-new.md#may-2022). If you are an existing customer, please follow the [download instructions](#docker-pull-for-the-read-ocr-container) to get started. ## Read 3.2 container
-The Read 3.2 OCR container latest GA model provides:
+The Read 3.2 OCR container is the latest GA model and provides:
* New models for enhanced accuracy. * Support for multiple languages within the same document. * Support for a total of 164 languages. See the full list of [OCR-supported languages](./language-support.md#optical-character-recognition-ocr).
You must meet the following prerequisites before using the containers:
|Required|Purpose| |--|--|
-|Docker Engine| You need the Docker Engine installed on a [host computer](#the-host-computer). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/install/#server). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).<br><br> Docker must be configured to allow the containers to connect with and send billing data to Azure. <br><br> **On Windows**, Docker must also be configured to support Linux containers.<br><br>|
+|Docker Engine| You need the Docker Engine installed on a [host computer](#host-computer-requirements). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/install/#server). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).<br><br> Docker must be configured to allow the containers to connect with and send billing data to Azure. <br><br> **On Windows**, Docker must also be configured to support Linux containers.<br><br>|
|Familiarity with Docker | You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` commands.| |Computer Vision resource |In order to use the container, you must have:<br><br>An Azure **Computer Vision** resource and the associated API key the endpoint URI. Both values are available on the Overview and Keys pages for the resource and are required to start the container.<br><br>**{API_KEY}**: One of the two available resource keys on the **Keys** page<br><br>**{ENDPOINT_URI}**: The endpoint as provided on the **Overview** page|
Fill out and submit the [request form](https://aka.ms/csgate) to request approva
[!INCLUDE [Gathering required container parameters](../containers/includes/container-gathering-required-parameters.md)]
-### The host computer
+### Host computer requirements
[!INCLUDE [Host Computer requirements](../../../includes/cognitive-services-containers-host-computer.md)]
docker pull mcr.microsoft.com/azure-cognitive-services/vision/read:2.0-preview
## How to use the container
-Once the container is on the [host computer](#the-host-computer), use the following process to work with the container.
+Once the container is on the [host computer](#host-computer-requirements), use the following process to work with the container. A minimal `docker run` sketch follows the list below.
1. [Run the container](#run-the-container-with-docker-run), with the required billing settings. More [examples](computer-vision-resource-container-config.md) of the `docker run` command are available. 1. [Query the container's prediction endpoint](#query-the-containers-prediction-endpoint).
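For illustration only, here is a minimal sketch of step 1. The `<ENDPOINT_URI>` and `<API_KEY>` placeholders stand for the values from your Computer Vision resource, the memory/CPU allocations are illustrative rather than the documented requirements, and you should confirm the exact image tag for the GA version named in this article before pulling.

```bash
# Sketch only: run the Read 3.2 GA container locally with the required billing settings.
# <ENDPOINT_URI> and <API_KEY> are placeholders for your Computer Vision resource values;
# the --memory/--cpus values shown here are illustrative, not the documented requirements.
docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
  mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30 \
  Eula=accept \
  Billing=<ENDPOINT_URI> \
  ApiKey=<API_KEY>
```

Once running, the container typically listens on port 5000, and its prediction endpoint can be queried in place of the cloud endpoint.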
cognitive-services Computer Vision Resource Container Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/computer-vision-resource-container-config.md
Use bind mounts to read and write data to and from the container. You can specif
The Computer Vision containers don't use input or output mounts to store training or service data.
-The exact syntax of the host mount location varies depending on the host operating system. Additionally, the [host computer](computer-vision-how-to-install-containers.md#the-host-computer)'s mount location may not be accessible due to a conflict between permissions used by the Docker service account and the host mount location permissions.
+The exact syntax of the host mount location varies depending on the host operating system. Additionally, the [host computer](computer-vision-how-to-install-containers.md#host-computer-requirements)'s mount location may not be accessible due to a conflict between permissions used by the Docker service account and the host mount location permissions.
|Optional| Name | Data type | Description | |-||--|-|
cognitive-services Concept Describing Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-describing-images.md
Previously updated : 01/05/2022 Last updated : 06/13/2022
At this time, English is the only supported language for image description.
## Image description example
-The following JSON response illustrates what Computer Vision returns when describing the example image based on its visual features.
+The following JSON response illustrates what the Analyze API returns when describing the example image based on its visual features.
![A black and white picture of buildings in Manhattan](./Images/bw_buildings.png)
The following JSON response illustrates what Computer Vision returns when descri
The image description feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Description` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"description"` section.
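As a rough illustration of that REST call, the following hedged sketch requests only the `Description` feature; the resource name, key, and image URL are placeholders, not values from this article.

```bash
# Sketch only: call Analyze Image v3.2 asking for the Description visual feature.
# <resource-name>, <key>, and <image-url> are placeholders you must supply.
curl -X POST "https://<resource-name>.cognitiveservices.azure.com/vision/v3.2/analyze?visualFeatures=Description" \
  -H "Ocp-Apim-Subscription-Key: <key>" \
  -H "Content-Type: application/json" \
  -d '{"url": "<image-url>"}'
```

The `description` object in the JSON response contains the generated captions and their confidence scores.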
-* [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
+* [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
## Next steps
cognitive-services Concept Detecting Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-detecting-faces.md
Previously updated : 01/05/2022 Last updated : 06/13/2022
-# Face detection with Computer Vision
+# Face detection with Image Analysis
-Computer Vision can detect human faces within an image and generate rectangle coordinates for each detected face.
+Image Analysis can detect human faces within an image and generate rectangle coordinates for each detected face.
> [!NOTE]
-> This feature is also offered by the Azure [Face](./index-identity.yml) service. Use this alternative for more detailed face analysis, including face identification and head pose detection.
+> This feature is also offered by the dedicated [Face](./overview-identity.md) service. Use this alternative for more detailed face analysis, including face identification and head pose detection.
## Face detection examples
-The following example demonstrates the JSON response returned by Computer Vision for an image containing a single human face.
+The following example demonstrates the JSON response returned by the Analyze API for an image containing a single human face.
![Vision Analyze Woman Roof Face](./Images/woman_roof_face.png)
cognitive-services Concept Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-face-detection.md
Previously updated : 10/27/2021 Last updated : 06/13/2022
cognitive-services Concept Face Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-face-recognition.md
Previously updated : 10/27/2021 Last updated : 06/13/2022
cognitive-services Concept Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-object-detection.md
Previously updated : 10/27/2021 Last updated : 06/13/2022
The Detect API applies tags based on the objects or living things identified in
## Object detection example
-The following JSON response illustrates what Computer Vision returns when detecting objects in the example image.
+The following JSON response illustrates what the Analyze API returns when detecting objects in the example image.
![A woman using a Microsoft Surface device in a kitchen](./Images/windows-kitchen.jpg)
cognitive-services Concept Tagging Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-tagging-images.md
Previously updated : 01/05/2022 Last updated : 06/13/2022 # Apply content tags to images
-Computer Vision can return content tags for thousands of recognizable objects, living beings, scenery, and actions that appear in images. Tags are not organized as a taxonomy and do not have inheritance hierarchies. A collection of content tags forms the foundation for an image [description](./concept-describing-images.md) displayed as human readable language formatted in complete sentences. When tags are ambiguous or not common knowledge, the API response provides hints to clarify the meaning of the tag in context of a known setting.
+Image Analysis can return content tags for thousands of recognizable objects, living beings, scenery, and actions that appear in images. Tags are not organized as a taxonomy and do not have inheritance hierarchies. A collection of content tags forms the foundation for an image [description](./concept-describing-images.md) displayed as human readable language formatted in complete sentences. When tags are ambiguous or not common knowledge, the API response provides hints to clarify the meaning of the tag in context of a known setting.
-After you upload an image or specify an image URL, the Computer Vision algorithm can output tags based on the objects, living beings, and actions identified in the image. Tagging is not limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on.
+After you upload an image or specify an image URL, the Analyze API can output tags based on the objects, living beings, and actions identified in the image. Tagging is not limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on.
## Image tagging example
The following JSON response illustrates what Computer Vision returns when taggin
The tagging feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Tags` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"tags"` section.
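A similar hedged sketch for tagging, this time piping the response through `jq` to pull out just the tag names and confidences; it assumes `jq` is installed, and the resource name, key, and image URL are placeholders.

```bash
# Sketch only: request the Tags visual feature and list tag names with confidences.
# Assumes jq is installed; <resource-name>, <key>, and <image-url> are placeholders.
curl -s -X POST "https://<resource-name>.cognitiveservices.azure.com/vision/v3.2/analyze?visualFeatures=Tags" \
  -H "Ocp-Apim-Subscription-Key: <key>" \
  -H "Content-Type: application/json" \
  -d '{"url": "<image-url>"}' | jq -r '.tags[] | "\(.name)\t\(.confidence)"'
```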
-* [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
+* [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
## Next steps
cognitive-services Call Read Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-read-api.md
Previously updated : 02/05/2022 Last updated : 06/13/2022
cognitive-services Identity Detect Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/identity-detect-faces.md
Previously updated : 08/04/2021 Last updated : 06/13/2022 ms.devlang: csharp
cognitive-services Intro To Spatial Analysis Public Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/intro-to-spatial-analysis-public-preview.md
Previously updated : 10/06/2021 Last updated : 06/13/2022 # What is Spatial Analysis?
-You can use Computer Vision Spatial Analysis to ingest streaming video from cameras, extract insights, and generate events to be used by other systems. The service detects the presence and movements of people in video. It can do things like count the number of people entering a space or measure compliance with face mask and social distancing guidelines. By processing video streams from physical spaces, you are able to learn how people use them and maximize the space's value to your organization.
+You can use Computer Vision Spatial Analysis to ingest streaming video from cameras, extract insights, and generate events to be used by other systems. The service detects the presence and movements of people in video. It can do things like count the number of people entering a space or measure compliance with face mask and social distancing guidelines. By processing video streams from physical spaces, you're able to learn how people use them and maximize the space's value to your organization.
<!--This documentation contains the following types of articles: * The [quickstarts](./quickstarts-sdk/analyze-image-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
This feature monitors how long people stay in an area or when they enter through
![Spatial Analysis measures dwelltime in checkout queue](https://user-images.githubusercontent.com/11428131/137016574-0d180d9b-fb9a-42a9-94b7-fbc0dbc18560.gif) ### Social distancing and facemask detection
-This feature analyzes how well people follow social distancing requirements in a space. Using the PersonDistance operation, the system automatically calibrates itself as people walk around in the space. Then it identifies when people violate a specific distance threshold (6 ft. or 10 ft.).
+This feature analyzes how well people follow social distancing requirements in a space. The system uses the PersonDistance operation to automatically calibrate itself as people walk around in the space. Then it identifies when people violate a specific distance threshold (6 ft. or 10 ft.).
![Spatial Analysis visualizes social distance violation events showing lines between people showing the distance](https://user-images.githubusercontent.com/11428131/139924062-b5e10c0f-3cf8-4ff1-bb58-478571c022d7.gif)
cognitive-services Overview Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-identity.md
Previously updated : 02/28/2022 Last updated : 06/13/2022 keywords: facial recognition, facial recognition software, facial analysis, face matching, face recognition app, face search by image, facial recognition search
This documentation contains the following types of articles:
## Example use cases
-**Identity verification**: Verify someone's identity against a government-issued ID card like a passport or driver's license or other enrollment image. You can use this verification to grant access to digital or physical services or recover an account. Specific access scenarios include opening a new account, verifying a worker, or administering an online assessment. Identity verification can be done once when a person is onboarded, and repeated when they access a digital or physical service.
+**Identity verification**: Verify someone's identity against a government-issued ID card like a passport or driver's license or other enrollment image. You can use this verification to grant access to digital or physical services or to recover an account. Specific access scenarios include opening a new account, verifying a worker, or administering an online assessment. Identity verification can be done once when a person is onboarded, and repeated when they access a digital or physical service.
**Touchless access control**: Compared to today's methods like cards or tickets, opt-in face identification enables an enhanced access control experience while reducing the hygiene and security risks from card sharing, loss, or theft. Facial recognition assists the check-in process with a human in the loop for check-ins in airports, stadiums, theme parks, buildings, reception kiosks at offices, hospitals, gyms, clubs, or schools.
cognitive-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-image-analysis.md
Previously updated : 06/21/2021 Last updated : 06/13/2022 keywords: computer vision, computer vision applications, computer vision service
Generate a description of an entire image in human-readable language, using comp
### Detect faces
-Detect faces in an image and provide information about each detected face. Computer Vision returns the coordinates, rectangle, gender, and age for each detected face.<br/>Computer Vision provides a subset of the [Face](./index-identity.yml) service functionality. You can use the Face service for more detailed analysis, such as facial identification and pose detection. [Detect faces](concept-detecting-faces.md)
+Detect faces in an image and provide information about each detected face. Computer Vision returns the coordinates, rectangle, gender, and age for each detected face. [Detect faces](concept-detecting-faces.md)
+
+You can also use the dedicated [Face API](./index-identity.yml) for these purposes. It provides more detailed analysis, such as facial identification and pose detection.
### Detect image types
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md
Previously updated : 02/05/2022 Last updated : 06/13/2022
The Computer Vision [Read API](https://westus.dev.cognitive.microsoft.com/docs/s
The **Read** call takes images and documents as its input. They have the following requirements (a minimal request sketch follows the list): * Supported file formats: JPEG, PNG, BMP, PDF, and TIFF
-* For PDF and TIFF files, up to 2000 pages (only first two pages for the free tier) are processed.
-* The file size must be less than 500 MB (4 MB for the free tier) and dimensions at least 50 x 50 pixels and at most 10000 x 10000 pixels.
-* The minimum height of the text to be extracted is 12 pixels for a 1024X768 image. This corresponds to about 8 font point text at 150 DPI.
+* For PDF and TIFF files, up to 2000 pages (only the first two pages for the free tier) are processed.
+* The file size of images must be less than 500 MB (4 MB for the free tier) and dimensions at least 50 x 50 pixels and at most 10000 x 10000 pixels. PDF files do not have a size limit.
+* The minimum height of the text to be extracted is 12 pixels for a 1024 x 768 image. This corresponds to about 8 font point text at 150 DPI.
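To make the two-step flow of the Read call concrete (submit the document, then poll for the result), here is a minimal hedged sketch; the resource name, key, and document URL are placeholders, and the operation ID comes from the `Operation-Location` header of the first response.

```bash
# Sketch only: submit a document to Read v3.2, then poll for the result.
# <resource-name>, <key>, and <document-url> are placeholders you must supply.
curl -i -X POST "https://<resource-name>.cognitiveservices.azure.com/vision/v3.2/read/analyze" \
  -H "Ocp-Apim-Subscription-Key: <key>" \
  -H "Content-Type: application/json" \
  -d '{"url": "<document-url>"}'

# The response's Operation-Location header contains the result URL; poll it until "status" is "succeeded".
curl "https://<resource-name>.cognitiveservices.azure.com/vision/v3.2/read/analyzeResults/<operation-id>" \
  -H "Ocp-Apim-Subscription-Key: <key>"
```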
## Supported languages The Read API latest generally available (GA) model supports 164 languages for print text and 9 languages for handwritten text.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview.md
Previously updated : 02/28/2022 Last updated : 06/13/2022 keywords: computer vision, computer vision applications, computer vision service
cognitive-services Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/client-library.md
Previously updated : 03/02/2022 Last updated : 06/13/2022 ms.devlang: csharp, golang, java, javascript, python
keywords: computer vision, computer vision service
# Quickstart: Use the Optical character recognition (OCR) client library or REST API
-Get started with the Computer Vision Read REST API or client libraries. The Read service provides you with AI algorithms for extracting text from images and returning it as structured strings. Follow these steps to install a package to your application and try out the sample code for basic tasks.
+Get started with the Computer Vision Read REST API or client libraries. The Read API provides you with AI algorithms for extracting text from images and returning it as structured strings. Follow these steps to install a package to your application and try out the sample code for basic tasks.
::: zone pivot="programming-language-csharp"
cognitive-services Identity Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/identity-client-library.md
zone_pivot_groups: programming-languages-set-face
Previously updated : 09/27/2021 Last updated : 06/13/2022 ms.devlang: csharp, golang, javascript, python
cognitive-services Image Analysis Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/image-analysis-client-library.md
Previously updated : 07/30/2021 Last updated : 06/13/2022 ms.devlang: csharp, golang, java, javascript, python
cognitive-services Spatial Analysis Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-container.md
Previously updated : 10/14/2021 Last updated : 06/13/2022
The Spatial Analysis container enables you to analyze real-time streaming video
* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services) * [!INCLUDE [contributor-requirement](../includes/quickstarts/contributor-requirement.md)]
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource </a> for the Standard S1 tier in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
- * You will need the key and endpoint from the resource you create to run the Spatial Analysis container. You'll use your key and endpoint later.
+* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource </a> for the Standard S1 tier in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+ * You'll need the key and endpoint from the resource you create to run the Spatial Analysis container. You'll use your key and endpoint later.
### Spatial Analysis container requirements
-To run the Spatial Analysis container, you need a compute device with an NVIDIA CUDA Compute Capable GPU 6.0 or higher (for example, [NVIDIA Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/), 1080Ti, or 2080Ti). We recommend that you use [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) with GPU acceleration, however the container runs on any other desktop machine that meets the minimum requirements. We will refer to this device as the host computer.
+To run the Spatial Analysis container, you need a compute device with an NVIDIA CUDA Compute Capable GPU 6.0 or higher (for example, [NVIDIA Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/), 1080Ti, or 2080Ti). We recommend that you use [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) with GPU acceleration; however, the container runs on any other desktop machine that meets the minimum requirements. We'll refer to this device as the host computer.
#### [Azure Stack Edge device](#tab/azure-stack-edge)
Azure Stack Edge is a Hardware-as-a-Service solution and an AI-enabled edge comp
#### Minimum hardware requirements
-* 4 GB system RAM
+* 4 GB of system RAM
* 4 GB of GPU RAM * 8 core CPU
-* 1 NVIDIA CUDA Compute Capable GPU 6.0 or higher (for example, [NVIDIA Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/), 1080Ti, or 2080Ti)
+* One NVIDIA CUDA Compute Capable GPU 6.0 or higher (for example, [NVIDIA Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/), 1080Ti, or 2080Ti)
* 20 GB of HDD space #### Recommended hardware
-* 32 GB system RAM
+* 32 GB of system RAM
* 16 GB of GPU RAM * 8 core CPU
-* 2 NVIDIA CUDA Compute Capable GPUs 6.0 or higher (for example, [NVIDIA Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/), 1080Ti, or 2080Ti)
+* Two NVIDIA CUDA Compute Capable GPUs 6.0 or higher (for example, [NVIDIA Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/), 1080Ti, or 2080Ti)
* 50 GB of SSD space
-In this article, you will download and install the following software packages. The host computer must be able to run the following (see below for instructions):
+In this article, you'll download and install the following software packages. The host computer must be able to run the following (see below for instructions):
* [NVIDIA graphics drivers](https://docs.nvidia.com/datacenter/tesla/tesla-installation-notes/https://docsupdatetracker.net/index.html) and [NVIDIA CUDA Toolkit](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/https://docsupdatetracker.net/index.html). The minimum GPU driver version is 460 with CUDA 11.1. * Configurations for [NVIDIA MPS](https://docs.nvidia.com/deploy/pdf/CUDA_Multi_Process_Service_Overview.pdf) (Multi-Process Service).
In this article, you will download and install the following software packages.
* [Azure IoT Edge](../../iot-edge/how-to-provision-single-device-linux-symmetric.md) runtime. #### [Azure VM with GPU](#tab/virtual-machine)
-In our example, we will utilize an [NC series VM](../../virtual-machines/nc-series.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) that has one K80 GPU.
+In our example, we'll utilize an [NC series VM](../../virtual-machines/nc-series.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) that has one K80 GPU.
| Requirement | Description | |--|--|
-| Camera | The Spatial Analysis container is not tied to a specific camera brand. The camera device needs to: support Real-Time Streaming Protocol(RTSP) and H.264 encoding, be accessible to the host computer, and be capable of streaming at 15FPS and 1080p resolution. |
+| Camera | The Spatial Analysis container isn't tied to a specific camera brand. The camera device needs to support Real-Time Streaming Protocol (RTSP) and H.264 encoding, be accessible to the host computer, and be capable of streaming at 15 FPS and 1080p resolution. |
| Linux OS | [Ubuntu Desktop 18.04 LTS](http://releases.ubuntu.com/18.04/) must be installed on the host computer. | ## Set up the host computer
-It is recommended that you use an Azure Stack Edge device for your host computer. Click **Desktop Machine** if you're configuring a different device, or **Virtual Machine** if you're utilizing a VM.
+We recommend that you use an Azure Stack Edge device for your host computer. Select **Desktop Machine** if you're configuring a different device, or **Virtual Machine** if you're utilizing a VM.
#### [Azure Stack Edge device](#tab/azure-stack-edge)
Spatial Analysis uses the compute features of the Azure Stack Edge to run an AI
* You have a Windows client system running PowerShell 5.0 or later, to access the device. * To deploy a Kubernetes cluster, you need to configure your Azure Stack Edge device via the **Local UI** on the [Azure portal](https://portal.azure.com/): 1. Enable the compute feature on your Azure Stack Edge device. To enable compute, go to the **Compute** page in the web interface for your device.
- 2. Select a network interface that you want to enable for compute, then click **Enable**. This will create a virtual switch on your device, on that network interface.
+ 2. Select a network interface that you want to enable for compute, then select **Enable**. This will create a virtual switch on your device, on that network interface.
3. Leave the Kubernetes test node IP addresses and the Kubernetes external services IP addresses blank.
- 4. Click **Apply**. This operation may take about two minutes.
+ 4. Select **Apply**. This operation may take about two minutes.
![Configure compute](media/spatial-analysis/configure-compute.png)
-### Set up an Edge compute role and create an IoT Hub resource
+### Set up Azure Stack Edge role and create an IoT Hub resource
-In the [Azure portal](https://portal.azure.com/), navigate to your Azure Stack Edge resource. On the **Overview** page or navigation list, click the Edge compute **Get started** button. In the **Configure Edge compute** tile, click **Configure**.
+In the [Azure portal](https://portal.azure.com/), navigate to your Azure Stack Edge resource. On the **Overview** page or navigation list, select the Edge compute **Get started** button. In the **Configure Edge compute** tile, select **Configure**.
![Link](media/spatial-analysis/configure-edge-compute-tile.png) On the **Configure Edge compute** page, choose an existing IoT Hub, or choose to create a new one. By default, a Standard (S1) pricing tier is used to create an IoT Hub resource. To use a free tier IoT Hub resource, create one and then select it. The IoT Hub resource uses the same subscription and resource group that is used by the Azure Stack Edge resource.
-Click **Create**. The IoT Hub resource creation may take a couple of minutes. After the IoT Hub resource is created, the **Configure Edge compute** tile will update to show the new configuration. To confirm that the Edge compute role has been configured, select **View config** on the **Configure compute** tile.
+Select **Create**. The IoT Hub resource creation may take a couple of minutes. After the IoT Hub resource is created, the **Configure Edge compute** tile will update to show the new configuration. To confirm that the Edge compute role has been configured, select **View config** on the **Configure compute** tile.
When the Edge compute role is set up on the Edge device, it creates two devices: an IoT device and an IoT Edge device. Both devices can be viewed in the IoT Hub resource. The Azure IoT Edge Runtime will already be running on the IoT Edge device.
sudo az iot hub create --name "<iothub-group-name>" --sku S1 --resource-group "<
sudo az iot hub device-identity create --hub-name "<iothub-name>" --device-id "<device-name>" --edge-enabled ```
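After the edge-enabled device identity exists, the IoT Edge runtime on the host computer is provisioned with that device's connection string. As a hedged sketch (assuming the Azure CLI `azure-iot` extension is installed; the names are placeholders carried over from the commands above), you can retrieve it like this:

```bash
# Sketch only: fetch the connection string used to provision IoT Edge on the host.
# Assumes the azure-iot CLI extension; <iothub-name> and <device-name> are placeholders.
az extension add --name azure-iot
az iot hub device-identity connection-string show \
  --hub-name "<iothub-name>" --device-id "<device-name>" --output tsv
```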
-You will need to install [Azure IoT Edge](../../iot-edge/how-to-provision-single-device-linux-symmetric.md) version 1.0.9. Follow these steps to download the correct version:
+You'll need to install [Azure IoT Edge](../../iot-edge/how-to-provision-single-device-linux-symmetric.md) version 1.0.9. Follow these steps to download the correct version:
Ubuntu Server 18.04: ```bash
Then, select either **NC6** or **NC6_Promo**.
:::image type="content" source="media/spatial-analysis/promotional-selection.png" alt-text="promotional selection" lightbox="media/spatial-analysis/promotional-selection.png":::
-Next, Create the VM. Once created, navigate to the VM resource in the Azure portal and select `Extensions` from the left pane. Click on "Add" to bring up the extensions window with all available extensions. Search for and select `NVIDIA GPU Driver Extension`, click create, and complete the wizard.
+Next, create the VM. Once created, navigate to the VM resource in the Azure portal and select `Extensions` from the left pane. Select **Add** to bring up the extensions window with all available extensions. Search for and select `NVIDIA GPU Driver Extension`, select **Create**, and complete the wizard.
Once the extension is successfully applied, navigate to the VM main page in the Azure portal and select `Connect`. The VM can be accessed either through SSH or RDP. RDP is helpful because it enables viewing of the visualizer window (explained later). Configure the RDP access by following [these steps](../../virtual-machines/linux/use-remote-desktop.md) and opening a remote desktop connection to the VM.
sudo az iot hub create --name "<iothub-name>" --sku S1 --resource-group "<resour
sudo az iot hub device-identity create --hub-name "<iothub-name>" --device-id "<device-name>" --edge-enabled ```
-You will need to install [Azure IoT Edge](../../iot-edge/how-to-provision-single-device-linux-symmetric.md) version 1.0.9. Follow these steps to download the correct version:
+You'll need to install [Azure IoT Edge](../../iot-edge/how-to-provision-single-device-linux-symmetric.md) version 1.0.9. Follow these steps to download the correct version:
Ubuntu Server 18.04: ```bash
Once the deployment is complete and the container is running, the **host compute
## Configure the operations performed by Spatial Analysis
-You will need to use [Spatial Analysis operations](spatial-analysis-operations.md) to configure the container to use connected cameras, configure the operations, and more. For each camera device you configure, the operations for Spatial Analysis will generate an output stream of JSON messages, sent to your instance of Azure IoT Hub.
+You'll need to use [Spatial Analysis operations](spatial-analysis-operations.md) to configure the container to use connected cameras, configure the operations, and more. For each camera device you configure, the operations for Spatial Analysis will generate an output stream of JSON messages, sent to your instance of Azure IoT Hub.
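While you're testing, one quick way to see those JSON messages is to watch the IoT Hub built-in endpoint from a shell. This is only a hedged sketch: it assumes the Azure CLI with the `azure-iot` extension, and `<iothub-name>` and `<device-name>` are placeholders.

```bash
# Sketch only: watch device-to-cloud messages arriving at the IoT Hub built-in endpoint.
# Requires the azure-iot CLI extension; <iothub-name> and <device-name> are placeholders.
az iot hub monitor-events --hub-name "<iothub-name>" --device-id "<device-name>" --timeout 60
```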
## Use the output generated by the container If you want to start consuming the output generated by the container, see the following articles:
-* Use the Azure Event Hub SDK for your chosen programming language to connect to the Azure IoT Hub endpoint and receive the events. See [Read device-to-cloud messages from the built-in endpoint](../../iot-hub/iot-hub-devguide-messages-read-builtin.md) for more information.
+* Use the Azure Event Hubs SDK for your chosen programming language to connect to the Azure IoT Hub endpoint and receive the events. For more information, see [Read device-to-cloud messages from the built-in endpoint](../../iot-hub/iot-hub-devguide-messages-read-builtin.md).
* Set up Message Routing on your Azure IoT Hub to send the events to other endpoints or save the events to Azure Blob Storage, etc. See [IoT Hub Message Routing](../../iot-hub/iot-hub-devguide-messages-d2c.md) for more information. ## Running Spatial Analysis with a recorded video file
You can use Spatial Analysis with both recorded and live video. To use Spatial An
Navigate to the **Container** section, and either create a new container or use an existing one. Then upload the video file to the container. Expand the file settings for the uploaded file, and select **Generate SAS**. Be sure to set the **Expiry Date** long enough to cover the testing period. Set **Allowed Protocols** to *HTTP* (*HTTPS* is not supported).
-Click on **Generate SAS Token and URL** and copy the Blob SAS URL. Replace the starting `https` with `http` and test the URL in a browser that supports video playback.
+Select **Generate SAS Token and URL** and copy the Blob SAS URL. Replace the starting `https` with `http` and test the URL in a browser that supports video playback.
Replace `VIDEO_URL` in the deployment manifest for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) with the URL you created, for all of the graphs. Set `VIDEO_IS_LIVE` to `false`, and redeploy the Spatial Analysis container with the updated manifest. See the example below.
The Spatial Analysis module will start consuming video file and will continuousl
## Troubleshooting
-If you encounter issues when starting or running the container, see [telemetry and troubleshooting](spatial-analysis-logging.md) for steps for common issues. This article also contains information on generating and collecting logs and collecting system health.
+If you encounter issues when starting or running the container, see [Telemetry and troubleshooting](spatial-analysis-logging.md) for steps for common issues. This article also contains information on generating and collecting logs and collecting system health.
[!INCLUDE [Diagnostic container](../containers/includes/diagnostics-container.md)]
If you encounter issues when starting or running the container, see [telemetry a
The Spatial Analysis container sends billing information to Azure, using a Computer Vision resource on your Azure account. The use of Spatial Analysis in public preview is currently free.
-Azure Cognitive Services containers aren't licensed to run without being connected to the metering / billing endpoint. You must enable the containers to communicate billing information with the billing endpoint at all times. Cognitive Services containers don't send customer data, such as the video or image that's being analyzed, to Microsoft.
+Azure Cognitive Services containers aren't licensed to run without being connected to the metering / billing endpoint. You must always enable the containers to communicate billing information with the billing endpoint. Cognitive Services containers don't send customer data, such as the video or image that's being analyzed, to Microsoft.
## Summary
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/whats-new.md
The Computer Vision API v3.2 is now generally available with the following updat
* Improved image tagging model: analyzes visual content and generates relevant tags based on objects, actions, and content displayed in the image. This model is available through the [Tag Image API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f200). See the Image Analysis [how-to guide](./how-to/call-analyze-image.md) and [overview](./overview-image-analysis.md) to learn more. * Updated content moderation model: detects presence of adult content and provides flags to filter images containing adult, racy, and gory visual content. This model is available through the [Analyze API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b). See the Image Analysis [how-to guide](./how-to/call-analyze-image.md) and [overview](./overview-image-analysis.md) to learn more. * [OCR (Read) available for 73 languages](./language-support.md#optical-character-recognition-ocr) including Simplified and Traditional Chinese, Japanese, Korean, and Latin languages.
-* [OCR (Read)](./overview-ocr.md) also available as a [Distroless container](./computer-vision-how-to-install-containers.md?tabs=version-3-2) for on-premise deployment.
+* [OCR (Read)](./overview-ocr.md) also available as a [Distroless container](./computer-vision-how-to-install-containers.md?tabs=version-3-2) for on-premises deployment.
> [!div class="nextstepaction"] > [See Computer Vision v3.2 GA](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) ### PersonDirectory data structure
-* In order to perform face recognition operations such as Identify and Find Similar, Face API customers need to create an assorted list of **Person** objects. The new **PersonDirectory** is a data structure that contains unique IDs, optional name strings, and optional user metadata strings for each **Person** identity added to the directory. Currently, the Face API offers the **LargePersonGroup** structure which has similar functionality but is limited to 1 million identities. The **PersonDirectory** structure can scale up to 75 million identities.
+* In order to perform face recognition operations such as Identify and Find Similar, Face API customers need to create an assorted list of **Person** objects. The new **PersonDirectory** is a data structure that contains unique IDs, optional name strings, and optional user metadata strings for each **Person** identity added to the directory. Currently, the Face API offers the **LargePersonGroup** structure which has similar functionality but is limited to 1 million identities. The **PersonDirectory** structure can scale up to 75 million identities.
* Another major difference between **PersonDirectory** and previous data structures is that you'll no longer need to make any Train calls after adding faces to a **Person** object&mdash;the update process happens automatically. For more details see [Use the PersonDirectory structure](how-to/use-persondirectory.md). ## March 2021
The Computer Vision Read API v3.2 public preview, available as cloud service and
* Natural reading order for the text line output (Latin languages only) * Handwriting style classification for text lines along with a confidence score (Latin languages only). * Extract text only for selected pages for a multi-page document.
-* Available as a [Distroless container](./computer-vision-how-to-install-containers.md?tabs=version-3-2) for on-premise deployment.
+* Available as a [Distroless container](./computer-vision-how-to-install-containers.md?tabs=version-3-2) for on-premises deployment.
See the [Read API how-to guide](how-to/call-read-api.md) to learn more.
A new version of the [Spatial Analysis container](spatial-analysis-container.md)
## November 2020 ### Sample Face enrollment app
-* The team published a sample Face enrollment app to demonstrate best practices for establishing meaningful consent and creating high-accuracy face recognition systems through high-quality enrollments. The open-source sample can be found in the [Build an enrollment app](Tutorials/build-enrollment-app.md) guide and on [GitHub](https://github.com/Azure-Samples/cognitive-services-FaceAPIEnrollmentSample), ready for developers to deploy or customize.
+* The team published a sample Face enrollment app to demonstrate best practices for establishing meaningful consent and creating high-accuracy face recognition systems through high-quality enrollments. The open-source sample can be found in the [Build an enrollment app](Tutorials/build-enrollment-app.md) guide and on [GitHub](https://github.com/Azure-Samples/cognitive-services-FaceAPIEnrollmentSample), ready for developers to deploy or customize.
## October 2020
Follow an [Extract text quickstart](https://github.com/Azure-Samples/cognitive-s
## April 2019 ### Improved attribute accuracy
-* Improved overall accuracy of the `age` and `headPose` attributes. The `headPose` attribute is also updated with the `pitch` value enabled now. Use these attributes by specifying them in the `returnFaceAttributes` parameter of [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
+* Improved overall accuracy of the `age` and `headPose` attributes. The `headPose` attribute is also updated with the `pitch` value enabled now. Use these attributes by specifying them in the `returnFaceAttributes` parameter of [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236).
### Improved processing speeds * Improved speeds of [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42) operations.
cognitive-services Get Started Build Detector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/get-started-build-detector.md
Previously updated : 01/12/2022 Last updated : 06/13/2022 keywords: image recognition, image recognition app, custom vision
cognitive-services Getting Started Build A Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/getting-started-build-a-classifier.md
Previously updated : 02/02/2022 Last updated : 06/13/2022 keywords: image recognition, image recognition app, custom vision
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/overview.md
Previously updated : 02/28/2022 Last updated : 06/13/2022 keywords: image recognition, image identifier, image recognition app, custom vision
cognitive-services Image Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/quickstarts/image-classification.md
Previously updated : 09/28/2021 Last updated : 06/13/2022 ms.devlang: csharp, golang, java, javascript, python keywords: custom vision, image recognition, image recognition app, image analysis, image recognition software
cognitive-services Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/quickstarts/object-detection.md
Previously updated : 09/28/2021 Last updated : 06/13/2022 ms.devlang: csharp, golang, java, javascript, python keywords: custom vision
cognitive-services Select Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/select-domain.md
Title: "Select a domain for a Custom Vision project - Computer Vision"
description: This article will show you how to select a domain for your project in the Custom Vision Service. -+ Previously updated : 01/05/2022- Last updated : 06/13/2022+ # Select a domain for a Custom Vision project
cognitive-services Use Prediction Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/use-prediction-api.md
Previously updated : 10/27/2021 Last updated : 06/13/2022 ms.devlang: csharp
After you've trained your model, you can test images programmatically by submitting them to the prediction API endpoint. In this guide, you'll learn how to call the prediction API to score an image. You'll learn the different ways you can configure the behavior of this API to meet your needs. - > [!NOTE] > This document demonstrates use of the .NET client library for C# to submit an image to the Prediction API. For more information and examples, see the [Prediction API reference](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c15).
cognitive-services Get Speech Recognition Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-speech-recognition-results.md
Previously updated : 03/31/2022 Last updated : 06/13/2022 ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python zone_pivot_groups: programming-languages-speech-sdk-cli
cognitive-services Get Started Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-speech-to-text.md
Previously updated : 02/11/2022 Last updated : 06/13/2022 ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
cognitive-services Get Started Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-speech-translation.md
Previously updated : 06/07/2022 Last updated : 06/13/2022 zone_pivot_groups: programming-languages-speech-services keywords: speech translation
cognitive-services Get Started Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-text-to-speech.md
Previously updated : 01/08/2022 Last updated : 06/13/2022 ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
cognitive-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md
Previously updated : 05/31/2022 Last updated : 06/13/2022 zone_pivot_groups: programming-languages-speech-sdk
cognitive-services How To Use Audio Input Streams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-audio-input-streams.md
Previously updated : 07/05/2019 Last updated : 06/13/2022 ms.devlang: csharp
cognitive-services Language Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-identification.md
Previously updated : 02/13/2022 Last updated : 06/13/2022 zone_pivot_groups: programming-languages-speech-services-nomore-variant
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
Previously updated : 02/02/2022 Last updated : 06/13/2022
cognitive-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/regions.md
Previously updated : 01/16/2022 Last updated : 06/13/2022
cognitive-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-to-text.md
Previously updated : 01/16/2022 Last updated : 06/13/2022 keywords: speech to text, speech to text software
cognitive-services Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-translation.md
Previously updated : 01/16/2022 Last updated : 06/13/2022 keywords: speech translation
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/text-to-speech.md
Previously updated : 01/16/2022 Last updated : 06/13/2022 keywords: text to speech
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/entity-linking/quickstart.md
Previously updated : 06/07/2022 Last updated : 06/13/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/quickstart.md
Previously updated : 06/07/2022 Last updated : 06/13/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/quickstart.md
Previously updated : 06/07/2022 Last updated : 06/13/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/quickstart.md
Previously updated : 06/06/2022 Last updated : 06/13/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/quickstart.md
Previously updated : 06/06/2022 Last updated : 06/13/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Migrate Qnamaker To Question Answering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/migrate-qnamaker-to-question-answering.md
+
+ Title: Migrate from QnA Maker to Question Answering
+description: Details on features, requirements, and examples for migrating from QnA Maker to Question Answering
+++ Last updated : 6/9/2022++
+# Migrate from QnA Maker to Question Answering
+
+**Purpose of this document:** This article provides information to help you successfully migrate applications that use QnA Maker to Question Answering. It aims to give you clarity on the following:
+
+ - Comparison of features across QnA Maker and Question Answering
+ - Pricing
+ - Simplified Provisioning and Development Experience
+ - Migration phases
+ - Common migration scenarios
+ - Migration steps
+
+**Intended Audience:** Existing QnA Maker customers
+
+> [!IMPORTANT]
+> Question Answering, a feature of Azure Cognitive Service for Language, was introduced in November 2021 with several new capabilities including enhanced relevance using a deep learning ranker, precise answers, and end-to-end region support. Each question answering project is equivalent to a knowledge base in QnA Maker. Resource-level settings such as role-based access control (RBAC) are not migrated to the new resource, and must be reconfigured for the language resource after migration:
+>
+> - Automatic RBAC to Language project (not resource)
+> - Automatic enabling of analytics.
+
+You will also need to [re-enable analytics](analytics.md) for the language resource.
+
+## Comparison of features
+
+In addition to a new set of features, Question Answering provides many technical improvements to common features.
+
+|Feature|QnA Maker|Question Answering|Details|
+|--|--|--|--|
+|State-of-the-art transformer-based models|➖|✔️|Turing-based models that enable QnA search at web scale.|
+|Pre-built capability|➖|✔️|This capability lets you use the power of question answering without having to ingest content or manage resources.|
+|Precise answering|➖|✔️|Question Answering supports precise answering with the help of state-of-the-art models.|
+|Smart URL Refresh|➖|✔️|Question Answering provides a means to refresh ingested content from public sources with a single click.|
+|Q&A over knowledge base (hierarchical extraction)|✔️|✔️| |
+|Active learning|✔️|✔️|Question Answering has an improved active learning model.|
+|Alternate Questions|✔️|✔️|The improved models in question answering reduce the need to add alternate questions.|
+|Synonyms|✔️|✔️| |
+|Metadata|✔️|✔️| |
+|Question Generation (private preview)|➖|✔️|This new feature will allow generation of questions over text.|
+|Support for unstructured documents|➖|✔️|Users can now ingest unstructured documents as input sources and query the content for responses.|
+|.NET SDK|✔️|✔️| |
+|API|✔️|✔️| |
+|Unified Authoring experience|➖|✔️|A single authoring experience across all Azure Cognitive Services for Language.|
+|Multi-region support|➖|✔️| |
+
+## Pricing
+
+When you're planning a migration to Question Answering, consider the following:
+
+- Knowledge base/project content or size has no impact on pricing
+
+- "Text Records" in Question Answering refer to the queries submitted by the user to the runtime; this concept is common to all features within the Language service
+
+Here you can find the pricing details for [Question Answering](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/) and [QnA Maker](https://azure.microsoft.com/pricing/details/cognitive-services/qna-maker/).
+
+The [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) can provide even more detail.
+
+## Simplified Provisioning and Development Experience
+
+With the Language service, QnA Maker customers now benefit from a single service that provides Text Analytics, LUIS, and Question Answering as features of the language resource. The Language service provides:
+
+- One Language resource to access all the above capabilities
+- A single authoring experience across capabilities
+- A unified set of APIs across all the capabilities
+- A cohesive, simpler, and more powerful product
+
+Learn how to get started in [Language Studio](../../language-studio.md)
+
+## Migration Phases
+
+If you or your organization have applications in development or production that use QnA Maker, you should update them to use Question Answering as soon as possible. See the following links for available APIs, SDKs, Bot SDKs and code samples.
+
+Following are the broad migration phases to consider:
+
+![A chart showing the phases of a successful migration](../media/migrate-qnamaker-to-question-answering/migration-phases.png)
+
+The following additional links can help you:
+- [Authoring portal](https://language.cognitive.azure.com/home)
+- [API](authoring.md)
+- [SDK](https://docs.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker)
+- Bot SDK: For bots to use custom question answering, use the [Bot.Builder.AI.QnA](https://www.nuget.org/packages/Microsoft.Bot.Builder.AI.QnA/) SDK. We recommend that customers continue to use this for their bot integrations. Here are some sample usages in the bot's code: [Sample 1](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/48.customQABot-all-features) [Sample 2](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/12.customQABot)
+
+## Common migration scenarios
+
+This article compares two hypothetical scenarios for migrating from QnA Maker to Question Answering. These scenarios can help you determine the right set of migration steps for your situation.
+
+> [!NOTE]
+> These scenarios are intended to be representative of real customer migrations; however, individual customer scenarios will differ. Also, this article doesn't include pricing details. See the [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/) page for details.
+
+> [!IMPORTANT]
+> Each question answering project is equivalent to a knowledge base in QnA Maker. Resource level settings such as Role-based access control (RBAC) are not migrated to the new resource. These resource level settings would have to be reconfigured for the language resource post migration. You will also need to [re-enable analytics](analytics.md) for the language resource.
+
+### Migration scenario 1: No custom authoring portal
+
+In the first migration scenario, the customer uses qnamaker.ai as the authoring portal and wants to migrate their QnA Maker knowledge bases to custom question answering.
+
+[Migrate your project from QnA Maker to Question Answering](migrate-qnamaker.md)
+
+Once migrated to Question Answering:
+
+- The resource level settings need to be reconfigured for the language resource
+- Customer validations should start on the migrated knowledge bases, covering:
+ - Size validation
+ - Number of QnA pairs in all knowledge bases, to match pre and post migration
+ - Answers for sample questions pre and post migration
+ - Response time for questions answered in v1 vs v2
+ - Retention of prompts
+- Customers need to establish new thresholds for their knowledge bases in custom question answering, because the confidence score mapping is different from QnA Maker.
+- Customers can use the batch testing tool post migration to test the newly created project in custom question answering.
+
+Old QnA Maker resources need to be manually deleted.
+
+Here are some [detailed steps](migrate-qnamaker.md) for migration scenario 1.
+
+### Migration scenario 2
+
+In this migration scenario, the customer may have created their own authoring front end by using the QnA Maker authoring APIs or QnA Maker SDKs.
+
+They should perform the following steps to migrate their SDKs:
+
+This [SDK Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Language.QuestionAnswering_1.1.0-beta.1/sdk/cognitivelanguage/Azure.AI.Language.QuestionAnswering/MigrationGuide.md) is intended to assist in the migration to the new Question Answering client library, [Azure.AI.Language.QuestionAnswering](https://www.nuget.org/packages/Azure.AI.Language.QuestionAnswering), from the old one, [Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker). It will focus on side-by-side comparisons for similar operations between the two packages.
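+
+To give a flavor of the new client library, here's a minimal C# sketch (not taken from the migration guide) of querying a deployed question answering project with `Azure.AI.Language.QuestionAnswering`; the endpoint, key, project name, and deployment name are placeholders.
+
+```csharp
+using System;
+using Azure;
+using Azure.AI.Language.QuestionAnswering;
+
+// Placeholders: replace with your Language resource endpoint/key and project details.
+Uri endpoint = new Uri("https://<language-resource>.cognitiveservices.azure.com");
+AzureKeyCredential credential = new AzureKeyCredential("<language-resource-key>");
+
+QuestionAnsweringClient client = new QuestionAnsweringClient(endpoint, credential);
+QuestionAnsweringProject project = new QuestionAnsweringProject("<project-name>", "<deployment-name>");
+
+// Ask a question against the deployed project and print the returned answers.
+Response<AnswersResult> response = client.GetAnswers("How do I reset my password?", project);
+foreach (KnowledgeBaseAnswer answer in response.Value.Answers)
+{
+    Console.WriteLine($"({answer.Confidence:P2}) {answer.Answer}");
+}
+```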
+
+They should also perform the steps required to migrate their knowledge bases to the new project within the Language resource.
+
+Once migrated to Question Answering:
+- The resource level settings need to be reconfigured for the language resource
+- Customer validations should start on the migrated knowledge bases, covering:
+ - Size validation
+ - Number of QnA pairs in all knowledge bases, to match pre and post migration
+ - Confidence score mapping
+ - Answers for sample questions pre and post migration
+ - Response time for questions answered in v1 vs v2
+ - Retention of prompts
+ - Batch testing pre and post migration
+- Old QnA Maker resources need to be manually deleted.
+
+Additionally, customers who need to migrate and upgrade their bots can use the upgraded bot code, which is published as a NuGet package.
+
+Here are some code samples: [Sample 1](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/48.customQABot-all-features) and [Sample 2](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/12.customQABot)
+
+Here are [detailed steps on migration scenario 2](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Language.QuestionAnswering_1.1.0-beta.1/sdk/cognitivelanguage/Azure.AI.Language.QuestionAnswering/MigrationGuide.md).
+
+Learn more about the [pre-built API](../../../QnAMaker/How-To/using-prebuilt-api.md).
+
+Learn more about the [Question Answering Get Answers REST API](https://docs.microsoft.com/rest/api/cognitiveservices/questionanswering/question-answering/get-answers).
+
+## Migration steps
+
+Some of these steps are needed only for certain customer architectures. Review the migration phases above to determine which steps apply to your migration.
+
+![A chart showing the steps of a successful migration](../media/migrate-qnamaker-to-question-answering/migration-steps.png)
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/quickstart.md
Previously updated : 06/06/2022 Last updated : 06/13/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/quickstart.md
Previously updated : 06/06/2022 Last updated : 06/13/2022 ms.devlang: csharp, java, javascript, python
confidential-computing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/overview.md
Confidential computing allows you to isolate your sensitive data while it's bein
> [!VIDEO https://www.youtube.com/embed/rT6zMOoLEqI]
-We know that securing your cloud data is important. We hear your concerns. Here's just a few questions that our customers may have when moving sensitive workloads to the cloud:
+We know that securing your cloud data is important. We hear your concerns. Here are just a few questions that our customers may have when moving sensitive workloads to the cloud:
- How do I make sure Microsoft can't access data that isn't encrypted? - How do I prevent security threats from privileged admins inside my company?
We know that securing your cloud data is important. We hear your concerns. Here'
Azure helps you minimize your attack surface to gain stronger data protection. Azure already offers many tools to safeguard [**data at rest**](../security/fundamentals/encryption-atrest.md) through models such as client-side encryption and server-side encryption. Additionally, Azure offers mechanisms to encrypt [**data in transit**](../security/fundamentals/data-encryption-best-practices.md#protect-data-in-transit) through secure protocols like TLS and HTTPS. This page introduces a third leg of data encryption - the encryption of **data in use**.
-## Introduction to confidential computing
+## Introduction to confidential computing
Confidential computing is an industry term defined by the [Confidential Computing Consortium](https://confidentialcomputing.io/) (CCC) - a foundation dedicated to defining and accelerating the adoption of confidential computing. The CCC defines confidential computing as: The protection of data in use by performing computations in a hardware-based Trusted Execution Environment (TEE).
-A TEE is an environment that enforces execution of only authorized code. Any data in the TEE can't be read or tampered with by any code outside that environment.
+A TEE is an environment that enforces execution of only authorized code. Any data in the TEE can't be read or tampered with by any code outside that environment.
### Lessen the need for trust Running workloads on the cloud requires trust. You give this trust to various providers enabling different components of your application.
Running workloads on the cloud requires trust. You give this trust to various pr
**App software vendors**: Trust software by deploying on-prem, using open-source, or by building in-house application software.
-**Hardware vendors**: Trust hardware by using on-premise hardware or in-house hardware.
+**Hardware vendors**: Trust hardware by using on-premises hardware or in-house hardware.
-**Infrastructure providers**: Trust cloud providers or manage your own on-premise data centers.
+**Infrastructure providers**: Trust cloud providers or manage your own on-premises data centers.
Azure confidential computing makes it easier to trust the cloud provider, by reducing the need for trust across various aspects of the compute cloud infrastructure. Azure confidential computing minimizes trust for the host OS kernel, the hypervisor, the VM admin, and the host admin.
confidential-ledger Authenticate Ledger Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/authenticate-ledger-nodes.md
Azure confidential ledger nodes can be authenticated by code samples and by user
## Code samples
-When initializing, code samples get the node certificate by querying Identity Service. After retrieving the node certificate, a code sample will query the Ledger network to get a quote, which is then validated using the Host Verify binaries. If the verification succeeds, the code sample proceeds to Ledger operations.
+When initializing, code samples get the node certificate by querying Identity Service. After retrieving the node certificate, a code sample will query the ledger network to get a quote, which is then validated using the Host Verify binaries. If the verification succeeds, the code sample proceeds to ledger operations.
## Users
-Users can validate the authenticity of Azure confidential ledger nodes to confirm they are indeed interfacing with their LedgerΓÇÖs enclave. You can build trust in Azure confidential ledger nodes in a few ways, which can be stacked on one another to increase the overall level of confidence. As such, Steps 1-2 help build confidence in that Azure confidential ledger enclave as part of the initial TLS handshake and authentication within functional workflows. Beyond that, a persistent client connection is maintained between the user's client and the confidential ledger.
+Users can validate the authenticity of Azure confidential ledger nodes to confirm they are indeed interfacing with their ledger's enclave. You can build trust in Azure confidential ledger nodes in a few ways, which can be stacked on one another to increase the overall level of confidence. As such, Steps 1-2 help build confidence in the Azure confidential ledger enclave as part of the initial TLS handshake and authentication within functional workflows. Beyond that, a persistent client connection is maintained between the user's client and the confidential ledger.
-- **Validating a confidential ledger node**: This is accomplished by querying the identity service hosted by Microsoft, which provides a network cert and thus helps verify that the Ledger node is presenting a cert endorsed/signed by the network cert for that specific instance. Similar to PKI-based HTTPS, a serverΓÇÖs cert is signed by a well-known Certificate Authority (CA) or intermediate CA. In the case of Azure confidential ledger, the CA cert is returned by an Identity service in the form of a network cert. This is an important confidence building measure for users of confidential ledger. If this node cert isnΓÇÖt signed by the returned network cert, the client connection should fail (as implemented in the sample code).
+- **Validating a confidential ledger node**: This is accomplished by querying the identity service hosted by Microsoft, which provides a network cert and thus helps verify that the ledger node is presenting a cert endorsed/signed by the network cert for that specific instance. Similar to PKI-based HTTPS, a server's cert is signed by a well-known Certificate Authority (CA) or intermediate CA. In the case of Azure confidential ledger, the CA cert is returned by an Identity service in the form of a network cert. This is an important confidence-building measure for users of confidential ledger. If this node cert isn't signed by the returned network cert, the client connection should fail (as implemented in the sample code).
- **Validating a confidential ledger enclave**: A confidential ledger runs in an Intel® SGX enclave that’s represented by a Quote, a data blob generated inside that enclave. It can be used by any other entity to verify that the quote has been produced from an application running with Intel® SGX protections. The quote is structured in a way that enables easy verification. It contains claims that help identify various properties of the enclave and the application that it’s running. This is an important confidence building mechanism for users of the confidential ledger. This can be accomplished by calling a functional workflow API to get an enclave quote. The client connection should fail if the quote is invalid. The retrieved quote can then be validated with the open_enclaves Host_Verify tool. More details about this can be found [here](https://github.com/openenclave/openenclave/tree/master/samples/host_verify). ## Next steps
confidential-ledger Create Client Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/create-client-certificate.md
# Creating a Client Certificate
-The Azure confidential ledger APIs require client certificate-based authentication. Only those certificates added to an allowlist during Ledger Creation or Ledger Update can be used to call the confidential ledger Functional APIs.
+The Azure confidential ledger APIs require client certificate-based authentication. Only those certificates added to an allowlist during ledger creation or a ledger update can be used to call the confidential ledger Functional APIs.
-You will need a certificate in PEM format. You can create more than one certificate and add or delete them using Ledger Update API.
+You will need a certificate in PEM format. You can create more than one certificate, and you can add or delete them by using the ledger Update API.
## OpenSSL
confidential-ledger Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/faq.md
- Title: Frequently asked questions for Azure confidential ledger
-description: Frequently asked questions for Azure confidential ledger
---- Previously updated : 04/15/2021-----
-# Frequently asked questions for Azure confidential ledger
-
-## How can I tell if the ACC Ledger service would be useful to my organization?
-
-Azure confidential ledger is ideal for organizations with records valuable enough for a motivated attacker to try to compromise the underlying logging/storage system, including "insider" scenarios where a rogue employee might attempt to forge, modify, or remove previous records.
-
-## What makes ACC Ledger much more secure?
-
-As its name suggests, the Ledger utilizes [Azure Confidential Computing platform](../confidential-computing/index.yml). One Ledger spans across three or more identical instances, each of which run in a dedicated, fully attested hardware-backed enclave. The Ledger's integrity is maintained through a consensus-based blockchain.
-
-## When writing to the ACC Ledger, do I need to store write receipts?
-
-Not necessarily. Some solutions today require users to maintain write receipts for future log validation. This requires users to manage those receipts in a secure storage facility, which adds an extra burden. The Ledger eliminates this challenge through a Merkle tree-based approach, where write receipts include a full tree path to a signed root-of-trust. Users can verify transactions without storing or managing any Ledger data.
-
-## How do I verify Ledger's authenticity?
-
-You can verify that the Ledger server nodes that your client is communicating with are authentic. For details, see [Authenticating confidential ledger Nodes](authenticate-ledger-nodes.md).
---
-## Next steps
--- [Overview of Microsoft Azure confidential ledger](overview.md)
confidential-ledger Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/overview.md
# Microsoft Azure confidential ledger (preview)
-Microsoft Azure confidential ledger (ACL), is a new and highly secure service for managing sensitive data records. Based on a permissioned blockchain model, Azure confidential ledger offers unique data integrity advantages. These include immutability, making the ledger append-only, and tamper proofing, to ensure all records are kept intact.
+Microsoft Azure confidential ledger (ACL) is a new and highly secure service for managing sensitive data records. It runs exclusively on hardware-backed secure enclaves, a heavily monitored and isolated runtime environment which keeps potential attacks at bay. Furthermore, Azure confidential ledger runs on a minimalistic Trusted Computing Base (TCB), which ensures that no one, not even Microsoft, is "above" the ledger.
-The confidential ledger runs exclusively on hardware-backed secure enclaves, a heavily monitored and isolated runtime environment which keeps potential attacks at bay. Furthermore, no one is "above" the Ledger, not even Microsoft. By designing ourselves out of the solution, Azure confidential ledger runs on a minimalistic Trusted Computing Base (TCB) which prevents access to Ledger service developers, datacenter technicians and cloud administrators.
+As its name suggests, Azure confidential ledger utilizes the [Azure Confidential Computing platform](../confidential-computing/index.yml) and the [Confidential Consortium Framework](https://www.microsoft.com/research/project/confidential-consortium-framework) to provide a high integrity solution that is tamper-protected and evident. One ledger spans across three or more identical instances, each of which run in a dedicated, fully attested hardware-backed enclave. The ledger's integrity is maintained through a consensus-based blockchain.
-Azure confidential ledger appeals to use cases where critical metadata records must not be modified, including in perpetuity for regulatory compliance and archival purposes. Here are a few examples of things you can store on your Ledger:
+Azure confidential ledger offers unique data integrity advantages, including immutability, tamper-proofing, and append-only operations. These features, which ensure that all records are kept intact, are ideal when critical metadata records must not be modified, such as for regulatory compliance and archival purposes.
+
+Here are a few examples of things you can store on your ledger:
- Records relating to your business transactions (for example, money transfers or confidential document edits). - Updates to trusted assets (for example, core applications or contracts).
For more information, you can watch the [Azure confidential ledger demo](https:/
## Key Features
-The confidential ledger is exposed through REST APIs which can be integrated into new or existing applications. The confidential ledger can be managed by administrators utilizing Administrative APIs (Control Plane). It can also be called directly by application code through Functional APIs (Data Plane). The Administrative APIs support basic operations such as create, update, get and, delete. The Functional APIs allow direct interaction with your instantiated Ledger and include operations such as put and get data.
+The confidential ledger is exposed through REST APIs which can be integrated into new or existing applications. The confidential ledger can be managed by administrators utilizing Administrative APIs (Control Plane). It can also be called directly by application code through Functional APIs (Data Plane). The Administrative APIs support basic operations such as create, update, get and, delete. The Functional APIs allow direct interaction with your instantiated ledger and include operations such as put and get data.
## Ledger security
-This section defines the security protections for the Ledger. The Ledger APIs use client certificate-based authentication. Currently, the Ledger supports certificate-based authentication process with owner roles. We will be adding support for Azure Active Directory (AAD) based authentication and also role-based access (for example, owner, reader, and contributor).
+This section defines the security protections for the ledger. The ledger APIs use client certificate-based authentication. Currently, the ledger supports certificate-based authentication process with owner roles. We will be adding support for Azure Active Directory (AAD) based authentication and also role-based access (for example, owner, reader, and contributor).
-The data to the Ledger is sent through TLS 1.2 connection and the TLS 1.2 connection terminates inside the hardware backed security enclaves (Intel® SGX enclaves). This ensures that no one can intercept the connection between a customer's client and the confidential ledger server nodes.
+The data to the ledger is sent through TLS 1.2 connection and the TLS 1.2 connection terminates inside the hardware backed security enclaves (Intel® SGX enclaves). This ensures that no one can intercept the connection between a customer's client and the confidential ledger server nodes.
### Ledger storage
The confidential ledger can be managed by administrators utilizing Administrativ
The Functional APIs allow direct interaction with your instantiated confidential ledger and include operations such as put and get data.
-## Preview Limitations
+## Constraints
-- Once a confidential ledger is created, you cannot change the Ledger type.
+- Once a confidential ledger is created, you cannot change the ledger type.
- Azure confidential ledger does not support standard Azure Disaster Recovery at this time. However, Azure confidential ledger offers built-in redundancy within the Azure region, as the confidential ledger runs on multiple independent nodes. - Azure confidential ledger deletion leads to a "hard delete", so your data will not be recoverable after deletion. - Azure confidential ledger names must be globally unique. Ledgers with the same name, irrespective of their type, are not allowed.
The Functional APIs allow direct interaction with your instantiated confidential
|--|--| | ACL | Azure confidential ledger | | Ledger | An immutable append record of transactions (also known as a Blockchain) |
-| Commit | A confirmation that a transaction has been locally committed to a node. A local commit by itself does not guarantee that a transaction is part of the Ledger. |
-| Global commit | A confirmation that transaction was globally committed and is part of the Ledger. |
-| Receipt | Proof that the transaction was processed by the Ledger. |
+| Commit | A confirmation that a transaction has been locally committed to a node. A local commit by itself does not guarantee that a transaction is part of the ledger. |
+| Global commit | A confirmation that transaction was globally committed and is part of the ledger. |
+| Receipt | Proof that the transaction was processed by the ledger. |
## Next steps
confidential-ledger Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-powershell.md
+
+ Title: Quickstart - Microsoft Azure confidential ledger with Azure PowerShell
+description: Learn to use the Microsoft Azure confidential ledger through Azure PowerShell
++ Last updated : 06/08/2022+++++
+# Quickstart: Create a confidential ledger using Azure PowerShell
+
+Azure confidential ledger is a cloud service that provides a high integrity store for sensitive data logs and records that must be kept intact. In this quickstart you will use [Azure PowerShell](/powershell/azure/) to create a confidential ledger, view and update its properties, and delete it. For more information on Azure confidential ledger, and for examples of what can be stored in a confidential ledger, see [About Microsoft Azure confidential ledger](overview.md).
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
++
+In this quickstart, you create a confidential ledger with [Azure PowerShell](/powershell/azure/). If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+
+## Create a resource group
++
+## Get your principal ID and tenant ID
+
+To create a confidential ledger, you'll need your Azure Active Directory principal ID (also called your object ID). To obtain your principal ID, use the Azure PowerShell [Get-AzADUser](/powershell/module/az.resources/get-azaduser) cmdlet, with the `-SignedIn` flag:
+
+```azurepowershell
+Get-AzADUser -SignedIn
+```
+
+Your result will be listed under "Id", in the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
+
+## Create a confidential ledger
+
+Use the Azure Powershell [New-AzConfidentialLedger](/powershell/module/az.confidentialledger/new-azconfidentialledger) command to create a confidential ledger in your new resource group.
+
+```azurepowershell
+New-AzConfidentialLedger -Name "myLedger" -ResourceGroupName "myResourceGroup" -Location "EastUS" -LedgerType "Public" -AadBasedSecurityPrincipal @{ LedgerRoleName="Administrator"; PrincipalId="34621747-6fc8-4771-a2eb-72f31c461f2e"; }
+
+```
+
+A successful operation will return the properties of the newly created ledger. Take note of the **ledgerUri**. In the example above, this URI is "https://myledger.confidential-ledger.azure.com".
+
+You'll need this URI to transact with the confidential ledger from the data plane.
+
+## View and update your confidential ledger properties
+
+You can view the properties associated with your newly created confidential ledger using the Azure PowerShell [Get-AzConfidentialLedger](/powershell/module/az.confidentialledger/get-azconfidentialledger) cmdlet.
+
+```azurepowershell
+Get-AzConfidentialLedger -Name "myLedger" -ResourceGroupName "myResourceGroup"
+```
+
+To update the properties of a confidential ledger, use the Azure PowerShell [Update-AzConfidentialLedger](/powershell/module/az.confidentialledger/update-azconfidentialledger) cmdlet. For instance, to update your ledger to change your role to "Reader", run:
+
+```azurepowershell
+Update-AzConfidentialLedger -Name "myLedger" -ResourceGroupName "myResourceGroup" -Location "EastUS" -LedgerType "Public" -AadBasedSecurityPrincipal @{ LedgerRoleName="Reader"; PrincipalId="34621747-6fc8-4771-a2eb-72f31c461f2e"; }
+```
+
+If you again run [Get-AzConfidentialLedger](/powershell/module/az.confidentialledger/get-azconfidentialledger), you'll see that the role has been updated.
+
+```json
+"ledgerRoleName": "Reader",
+```
+
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you created a confidential ledger by using Azure PowerShell. To learn more about Azure confidential ledger and how to integrate it with your applications, continue on to the articles below.
+
+- [Overview of Microsoft Azure confidential ledger](overview.md)
confidential-ledger Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-python.md
Microsoft Azure confidential ledger is a new and highly secure service for manag
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-[API reference documentation](/python/api/overview/azure/keyvault-secrets-readme) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/confidentialledger) | [Package (Python Package Index) Management Library](https://pypi.org/project/azure-mgmt-confidentialledger/)| [Package (Python Package Index) Client Library](https://pypi.org/project/azure-confidentialledger/)
+[API reference documentation](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-confidentialledger/latest/azure.confidentialledger.html) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/confidentialledger) | [Package (Python Package Index) Management Library](https://pypi.org/project/azure-mgmt-confidentialledger/)| [Package (Python Package Index) Client Library](https://pypi.org/project/azure-confidentialledger/)
## Prerequisites
connectors Connectors Create Api Oracledatabase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-oracledatabase.md
tags: connectors
Using the Oracle Database connector, you create organizational workflows that use data in your existing database. This connector can connect to an on-premises Oracle Database, or an Azure virtual machine with Oracle Database installed. With this connector, you can: * Build your workflow by adding a new customer to a customers database, or updating an order in an orders database.
-* Use actions to get a row of data, insert a new row, and even delete. For example, when a record is created in Dynamics CRM Online (a trigger), then insert a row in an Oracle Database (an action).
+* Use actions to get a row of data, insert a new row, and even delete. For example, when a record is created in Dynamics CRM Online (a trigger), then insert a row in an Oracle Database (an action).
This connector doesn't support the following items:
This article shows you how to use the Oracle Database connector in a logic app.
## Prerequisites
-* Supported Oracle versions:
+* Supported Oracle versions:
* Oracle 9 and later * Oracle Data Access Client (ODAC) 11.2 and later
-* Install the on-premises data gateway. [Connect to on-premises data from logic apps](../logic-apps/logic-apps-gateway-connection.md) lists the steps. The gateway is required to connect to an on-premises Oracle Database, or an Azure VM with Oracle DB installed.
+* Install the on-premises data gateway. [Connect to on-premises data from logic apps](../logic-apps/logic-apps-gateway-connection.md) lists the steps. The gateway is required to connect to an on-premises Oracle Database, or an Azure VM with Oracle DB installed.
> [!NOTE] > The on-premises data gateway acts as a bridge, and provides a secure data transfer between on-premises data (data that is not in the cloud) and your logic apps. The same gateway can be used with multiple services, and multiple data sources. So, you may only need to install the gateway once.
-* Install the Oracle Client on the machine where you installed the on-premises data gateway. Make sure that you install the 64-bit Oracle Data Provider for .NET from Oracle, and select the Windows installer version because the `xcopy` version doesn't work with the on-premises data gateway:
+* Install the Oracle Client on the machine where you installed the on-premises data gateway. Make sure that you install the 64-bit Oracle Data Provider for .NET from Oracle, and select the Windows installer version because the `xcopy` version doesn't work with the on-premises data gateway:
[64-bit ODAC 12c Release 4 (12.1.0.2.4) for Windows x64](https://www.oracle.com/technetwork/database/windows/downloads/index-090165.html)
This article shows you how to use the Oracle Database connector in a logic app.
## Add the connector > [!IMPORTANT]
-> This connector does not have any triggers. It has only actions. So when you create your logic app, add another trigger to start your logic app, such as **Schedule - Recurrence**, or **Request / Response - Response**.
+> This connector does not have any triggers. It has only actions. So when you create your logic app, add another trigger to start your logic app, such as **Schedule - Recurrence**, or **Request / Response - Response**.
1. In the [Azure portal](https://portal.azure.com), create a blank logic app.
-2. At the start of your logic app, select the **Request / Response - Request** trigger:
+2. At the start of your logic app, select the **Request / Response - Request** trigger:
![A dialog box has a box to search all triggers. There is also a single trigger shown, "Request / Response-Request", with a selection button.](./media/connectors-create-api-oracledatabase/request-trigger.png)
-3. Select **Save**. When you save, a request URL is automatically generated.
+3. Select **Save**. When you save, a request URL is automatically generated.
-4. Select **New step**, and select **Add an action**. Type in `oracle` to see the available actions:
+4. Select **New step**, and select **Add an action**. Type in `oracle` to see the available actions:
![A search box contains "oracle". The search produces one hit labeled "Oracle Database". There is a tabbed page, one tab showing "TRIGGERS (0)", another showing "ACTIONS (6)". Six actions are listed. The first of these is "Get row Preview".](./media/connectors-create-api-oracledatabase/oracledb-actions.png) > [!TIP]
- > This is also the quickest way to see the triggers and actions available for any connector. Type in part of the connector name, such as `oracle`. The designer lists any triggers and any actions.
+ > This is also the quickest way to see the triggers and actions available for any connector. Type in part of the connector name, such as `oracle`. The designer lists any triggers and any actions.
5. Select one of the actions, such as **Oracle Database - Get row**. Select **Connect via on-premises data gateway**. Enter the Oracle server name, authentication method, username, password, and select the gateway:
- ![The dialog box is titled "Oracle Database - Get row". There is a box, checked, labeled "Connect via on-premise data gateway". Below that are the five other text boxes.](./media/connectors-create-api-oracledatabase/create-oracle-connection.png)
+ ![The dialog box is titled "Oracle Database - Get row". There is a box, checked, labeled "Connect via on-premises data gateway". Below that are the five other text boxes.](./media/connectors-create-api-oracledatabase/create-oracle-connection.png)
6. Once connected, select a table from the list, and enter the row ID to your table. You need to know the identifier to the table. If you don't know, contact your Oracle DB administrator, and get the output from `select * from yourTableName`. This gives you the identifiable information you need to proceed.
- In the following example, job data is being returned from a Human Resources database:
+ In the following example, job data is being returned from a Human Resources database:
![The dialog box titled "Get row (Preview)" has two text boxes: "Table name", which contains "H R JOBS" and has a drop-down list, and "Row i d", which contains "S A _ REP".](./media/connectors-create-api-oracledatabase/table-rowid.png)
This article shows you how to use the Oracle Database connector in a logic app.
#### **Error**: The provider being used is deprecated: 'System.Data.OracleClient requires Oracle client software version 8.1.7 or greater.'. See [https://go.microsoft.com/fwlink/p/?LinkID=272376](/power-bi/connect-data/desktop-connect-oracle-database) to install the official provider.
-**Cause**: The Oracle client SDK is not installed on the machine where the on-premises data gateway is running. 
+**Cause**: The Oracle client SDK is not installed on the machine where the on-premises data gateway is running. 
**Resolution**: Download and install the Oracle client SDK on the same computer as the on-premises data gateway. #### **Error**: Table '[Tablename]' does not define any key columns
-**Cause**: The table does not have any primary key. 
+**Cause**: The table does not have any primary key. 
**Resolution**: The Oracle Database connector requires that a table with a primary key column be used.
-
+ ## Connector-specific details
-View any triggers and actions defined in the swagger, and also see any limits in the [connector details](/connectors/oracle/).
+View any triggers and actions defined in the swagger, and also see any limits in the [connector details](/connectors/oracle/).
## Get some help
-The [Microsoft Q&A question page for Azure Logic Apps](/answers/topics/azure-logic-apps.html) is a great place to ask questions, answer questions, and see what other Logic Apps users are doing.
+The [Microsoft Q&A question page for Azure Logic Apps](/answers/topics/azure-logic-apps.html) is a great place to ask questions, answer questions, and see what other Logic Apps users are doing.
-You can help improve Logic Apps and connectors by voting and submitting your ideas at [https://aka.ms/logicapps-wish](https://aka.ms/logicapps-wish).
+You can help improve Logic Apps and connectors by voting and submitting your ideas at [https://aka.ms/logicapps-wish](https://aka.ms/logicapps-wish).
## Next steps
container-instances Container Instances Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-custom-dns.md
See the Azure quickstart template [Create an Azure container group with VNet](ht
[az-container-delete]: /cli/azure/container#az-container-delete [az-network-vnet-delete]: /cli/azure/network/vnet#az-network-vnet-delete [az-group-delete]: /cli/azure/group#az-group-create
-[cloud-shell-bash]: /cloud-shell/overview.md
+[cloud-shell-bash]: /azure/cloud-shell/overview
cosmos-db Postgres Migrate Cosmos Db Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/postgres-migrate-cosmos-db-kafka.md
Data in PostgreSQL table will be pushed to Apache Kafka using the [Debezium Post
## Base setup
-### Set up PostgreSQL database if you haven't already.
+### Set up PostgreSQL database if you haven't already.
-This could be an existing on-premise database or you could [download and install one](https://www.postgresql.org/download/) on your local machine. It's also possible to use a [Docker container](https://hub.docker.com/_/postgres).
+This could be an existing on-premises database or you could [download and install one](https://www.postgresql.org/download/) on your local machine. It's also possible to use a [Docker container](https://hub.docker.com/_/postgres).
[!INCLUDE [pull-image-include](../../../includes/pull-image-include.md)] To start a container:
CREATE TABLE retail.orders_by_customer (order_id int, customer_id int, purchase_
CREATE TABLE retail.orders_by_city (order_id int, customer_id int, purchase_amount int, city text, purchase_time timestamp, PRIMARY KEY (city,order_id)) WITH cosmosdb_cell_level_timestamp=true AND cosmosdb_cell_level_timestamp_tombstones=true AND cosmosdb_cell_level_timetolive=true; ```
-### Setup Apache Kafka
+### Set up Apache Kafka
This article uses a local cluster, but you can choose any other option. [Download Kafka](https://kafka.apache.org/downloads), unzip it, start the Zookeeper and Kafka cluster.
You can continue to insert more data into PostgreSQL and confirm that the record
* [Integrate Apache Kafka and Azure Cosmos DB Cassandra API using Kafka Connect](kafka-connect.md) * [Integrate Apache Kafka Connect on Azure Event Hubs (Preview) with Debezium for Change Data Capture](../../event-hubs/event-hubs-kafka-connect-debezium.md) * [Migrate data from Oracle to Azure Cosmos DB Cassandra API using Arcion](oracle-migrate-cosmos-db-arcion.md)
-* [Provision throughput on containers and databases](../set-throughput.md)
+* [Provision throughput on containers and databases](../set-throughput.md)
* [Partition key best practices](../partitioning-overview.md#choose-partitionkey) * [Estimate RU/s using the Azure Cosmos DB capacity planner](../estimate-ru-with-capacity-planner.md) articles
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md
Azure portal support for role management is not available yet.
### Which SDKs in Azure Cosmos DB SQL API support RBAC?
-The [.NET V3](sql-api-sdk-dotnet-standard.md), [Java V4](sql-api-sdk-java-v4.md) and [JavaScript V3](sql-api-sdk-node.md) SDKs are currently supported.
+The [.NET V3](sql-api-sdk-dotnet-standard.md), [Java V4](sql-api-sdk-java-v4.md), [JavaScript V3](sql-api-sdk-node.md) and [Python V4.3+](sql-api-sdk-python.md) SDKs are currently supported.
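+
+As a quick illustration (not part of the original FAQ answer), here's a minimal C# sketch of how the .NET V3 SDK can authenticate with an Azure AD credential instead of a primary key once RBAC is configured; the account endpoint, database, and container names are placeholders.
+
+```csharp
+using Azure.Identity;
+using Microsoft.Azure.Cosmos;
+
+// Requires an Azure AD identity that has been granted a Cosmos DB SQL API
+// data-plane role definition on the account, database, or container.
+var credential = new DefaultAzureCredential();
+using var client = new CosmosClient(
+    "https://<your-account>.documents.azure.com:443/",
+    credential);
+
+// Data-plane operations are then authorized through RBAC.
+Container container = client.GetContainer("<database>", "<container>");
+```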
### Is the Azure AD token automatically refreshed by the Azure Cosmos DB SDKs when it expires?
cosmos-db Monitor Cosmos Db Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-cosmos-db-reference.md
All the metrics corresponding to Azure Cosmos DB are stored in the namespace **C
|Metric (Metric Display Name)|Unit (Aggregation Type)|Description|Dimensions| Time granularities| Legacy metric mapping | Usage | ||||| | | | | MongoRequestCharge (Mongo Request Charge) | Count (Total) |Mongo Request Units Consumed| DatabaseName, CollectionName, Region, CommandName, ErrorCode| All |Mongo Query Request Charge, Mongo Update Request Charge, Mongo Delete Request Charge, Mongo Insert Request Charge, Mongo Count Request Charge| Used to monitor Mongo resource RUs in a minute.|
-| TotalRequestUnits (Total Request Units)| Count (Total) | Request Units consumed| DatabaseName, CollectionName, Region, StatusCode |All| TotalRequestUnits| Used to monitor Total RU usage at a minute granularity. To get average RU consumed per second, use Total aggregation at minute and divide by 60.|
+| TotalRequestUnits (Total Request Units)| Count (Total) | Request Units consumed| DatabaseName, CollectionName, Region, StatusCode |All| TotalRequestUnits| Used to monitor Total RU usage at a minute granularity. To get average RU consumed per second, use Sum aggregation at minute interval/level and divide by 60.|
| ProvisionedThroughput (Provisioned Throughput)| Count (Maximum) |Provisioned throughput at container granularity| DatabaseName, ContainerName| 5M| | Used to monitor provisioned throughput per container.| ### Storage metrics
The following table lists the properties of resource logs in Azure Cosmos DB. Th
| **statusCode** | **statusCode_s** | The response status of the operation. | | **requestResourceId** | **ResourceId** | The resourceId that pertains to the request. Depending on the operation performed, this value may point to `databaseRid`, `collectionRid`, or `documentRid`.| | **clientIpAddress** | **clientIpAddress_s** | The client's IP address. |
-| **requestCharge** | **requestCharge_s** | The number of RU/s that are used by the operation |
+| **requestCharge** | **requestCharge_s** | The number of RUs that are used by the operation |
| **collectionRid** | **collectionId_s** | The unique ID for the collection.| | **duration** | **duration_d** | The duration of the operation, in milliseconds. | | **requestLength** | **requestLength_s** | The length of the request, in bytes. |
cosmos-db Migrate Hbase To Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-hbase-to-cosmos-db.md
The key differences between the data structure of Azure Cosmos DB and HBase are
* HBase uses timestamp to version multiple instances of a given cell. You can query different versions of a cell using timestamp.
-* Azure Cosmos DB ships with the [Change feed feature](../change-feed.md) which tracks persistent record of changes to a container in the order they occur. It then outputs the sorted list of documents that were changed in the order in which they were modified.
+* Azure Cosmos DB ships with the [Change feed feature](../change-feed.md), which maintains a persistent record of changes to a container in the order they occur. It then outputs the sorted list of documents that were changed in the order in which they were modified.
**Data format**
sqlline.py ZOOKEEPER/hbase-unsecure
#### Get the table details ```console
-!describe <Table Name>
+!describe <Table Name>
``` #### Get the index details
sqlline.py ZOOKEEPER/hbase-unsecure
### Get the primary key details ```console
-!primarykeys <Table Name>
+!primarykeys <Table Name>
``` ## Migrate your data
Data Factory's Copy activity supports HBase as a data source. See the [Copy data
You can specify Cosmos DB (SQL API) as the destination for your data. See the [Copy and transform data in Azure Cosmos DB (SQL API) by using Azure Data Factory](../../data-factory/connector-azure-cosmos-db.md) article for more details. ### Migrate using Apache Spark - Apache HBase Connector & Cosmos DB Spark connector
For Azure Cosmos DB Spark connector, refer to the [Quick Start Guide](create-sql
|"personalPhone":{"cf":"Personal", "col":"Phone", "type":"string"} |} |}""".stripMargin
-
+ ```
-1. Next, define a method to get the data from the HBase Contacts table as a DataFrame.
+1. Next, define a method to get the data from the HBase Contacts table as a DataFrame.
```scala def withCatalog(cat: String): DataFrame = {
For Azure Cosmos DB Spark connector, refer to the [Quick Start Guide](create-sql
.format("org.apache.spark.sql.execution.datasources.hbase") .load() }
-
+ ``` 1. Create a DataFrame using the defined method.
The mappings for code migration are shown here, but the HBase RowKeys and Azure
**HBase** ```java
-Configuration config = HBaseConfiguration.create();
-config.set("hbase.zookeeper.quorum","zookeepernode0,zookeepernode1,zookeepernode2");
-config.set("hbase.zookeeper.property.clientPort", "2181");
-config.set("hbase.cluster.distributed", "true");
+Configuration config = HBaseConfiguration.create();
+config.set("hbase.zookeeper.quorum","zookeepernode0,zookeepernode1,zookeepernode2");
+config.set("hbase.zookeeper.property.clientPort", "2181");
+config.set("hbase.cluster.distributed", "true");
Connection connection = ConnectionFactory.createConnection(config) ``` **Phoenix** ```java
-//Use JDBC to get a connection to an HBase cluster
+//Use JDBC to get a connection to an HBase cluster
Connection conn = DriverManager.getConnection("jdbc:phoenix:server1,server2:3333",props); ``` **Azure Cosmos DB** ```java
-// Create sync client
-client = new CosmosClientBuilder()
- .endpoint(AccountSettings.HOST)
- .key(AccountSettings.MASTER_KEY)
- .consistencyLevel(ConsistencyLevel.{ConsistencyLevel})
- .contentResponseOnWriteEnabled(true)
+// Create sync client
+client = new CosmosClientBuilder()
+ .endpoint(AccountSettings.HOST)
+ .key(AccountSettings.MASTER_KEY)
+ .consistencyLevel(ConsistencyLevel.{ConsistencyLevel})
+ .contentResponseOnWriteEnabled(true)
.buildClient(); ```
client = new CosmosClientBuilder()
**HBase** ```java
-// create an admin object using the config
-HBaseAdmin admin = new HBaseAdmin(config);
-// create the table...
-HTableDescriptor tableDescriptor = new HTableDescriptor(TableName.valueOf("FamilyTable"));
-// ... with single column families
-tableDescriptor.addFamily(new HColumnDescriptor("ColFam"));
+// create an admin object using the config
+HBaseAdmin admin = new HBaseAdmin(config);
+// create the table...
+HTableDescriptor tableDescriptor = new HTableDescriptor(TableName.valueOf("FamilyTable"));
+// ... with single column families
+tableDescriptor.addFamily(new HColumnDescriptor("ColFam"));
admin.createTable(tableDescriptor); ```
CREATE IF NOT EXISTS FamilyTable ("id" BIGINT not null primary key, "ColFam"."la
**Azure Cosmos DB** ```java
-// Create database if not exists
-CosmosDatabaseResponse databaseResponse = client.createDatabaseIfNotExists(databaseName);
-database = client.getDatabase(databaseResponse.getProperties().getId());
+// Create database if not exists
+CosmosDatabaseResponse databaseResponse = client.createDatabaseIfNotExists(databaseName);
+database = client.getDatabase(databaseResponse.getProperties().getId());
-// Create container if not exists
-CosmosContainerProperties containerProperties = new CosmosContainerProperties("FamilyContainer", "/lastName");
+// Create container if not exists
+CosmosContainerProperties containerProperties = new CosmosContainerProperties("FamilyContainer", "/lastName");
-// Provision throughput
-ThroughputProperties throughputProperties = ThroughputProperties.createManualThroughput(400);
+// Provision throughput
+ThroughputProperties throughputProperties = ThroughputProperties.createManualThroughput(400);
-// Create container with 400 RU/s
-CosmosContainerResponse databaseResponse = database.createContainerIfNotExists(containerProperties, throughputProperties);
+// Create container with 400 RU/s
+CosmosContainerResponse databaseResponse = database.createContainerIfNotExists(containerProperties, throughputProperties);
container = database.getContainer(databaseResponse.getProperties().getId()); ```
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quickstart-dotnet.md
# Quickstart: Azure Cosmos DB SQL API client library for .NET+ [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)] > [!div class="op_single_selector"]
+>
> * [.NET](quickstart-dotnet.md)
+>
Get started with the Azure Cosmos DB client library for .NET to create databases, containers, and items within your account. Follow these steps to install the package and try out example code for basic tasks. > [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/azure-cosmos-db-dotnet-quickstart) are available on GitHub as a .NET project.
-[API reference documentation](/dotnet/api/microsoft.azure.cosmos) | [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos)
+[API reference documentation](/dotnet/api/microsoft.azure.cosmos) | [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) | [Samples](samples-dotnet.md)
## Prerequisites
Get started with the Azure Cosmos DB client library for .NET to create databases
## Setting up
-This section walks you through creating an Azure Cosmos account and setting up a project that uses Azure Cosmos DB SQL API client library for .NET to manage resources.
+This section walks you through creating an Azure Cosmos account and setting up a project that uses Azure Cosmos DB SQL API client library for .NET to manage resources.
### Create an Azure Cosmos DB account
This quickstart will create a single Azure Cosmos DB account using the SQL API.
1. If you haven't already, sign in to Azure PowerShell using the [``Connect-AzAccount``](/powershell/module/az.accounts/connect-azaccount) cmdlet.
-1. Use the [``New-AzResourceGroup``](/powershell/module/az.resources/new-azresourcegroup) cmdlet to create a new resource group in your subscription.
+1. Use the [``New-AzResourceGroup``](/powershell/module/az.resources/new-azresourcegroup) cmdlet to create a new resource group in your subscription.
```azurepowershell-interactive $parameters = @{
This quickstart will create a single Azure Cosmos DB account using the SQL API.
New-AzResourceGroup @parameters ```
-1. Use the [``New-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet to create a new Azure Cosmos DB SQL API account with default settings.
+1. Use the [``New-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet to create a new Azure Cosmos DB SQL API account with default settings.
```azurepowershell-interactive $parameters = @{
This quickstart will create a single Azure Cosmos DB account using the SQL API.
1. Review the settings you provide, and then select **Create**. It takes a few minutes to create the account. Wait for the portal page to display **Your deployment is complete** before moving on.
-1. Select **Go to resource** to go to the Azure Cosmos DB account page.
+1. Select **Go to resource** to go to the Azure Cosmos DB account page.
:::image type="content" source="media/create-account-portal/cosmos-deployment-complete.png" lightbox="media/create-account-portal/cosmos-deployment-complete.png" alt-text="Screenshot of deployment page for Azure Cosmos D B SQL A P I resource.":::
This quickstart will create a single Azure Cosmos DB account using the SQL API.
:::image type="content" source="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" lightbox="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" alt-text="Screenshot of Keys page with various credentials for an Azure Cosmos D B SQL A P I account.":::
+#### [Resource Manager template](#tab/azure-resource-manager)
+
+> [!NOTE]
+> Azure Resource Manager templates are written in two syntaxes, JSON and Bicep. This sample uses the [Bicep](../../azure-resource-manager/bicep/overview.md) syntax. To learn more about the two syntaxes, see [comparing JSON and Bicep for templates](../../azure-resource-manager/bicep/compare-template-syntax.md).
+
+1. Create shell variables for *accountName*, *resourceGroupName*, and *location*.
+
+ ```azurecli-interactive
+ # Variable for resource group name
+ resourceGroupName="msdocs-cosmos"
+
+ # Variable for location
+ location="westus"
+
+ # Variable for account name with a randomly generated suffix
+ let suffix=$RANDOM*$RANDOM
+ accountName="msdocs-$suffix"
+ ```
+
+1. If you haven't already, sign in to the Azure CLI using the [``az login``](/cli/azure/reference-index#az-login) command.
+
+1. Use the [``az group create``](/cli/azure/group#az-group-create) command to create a new resource group in your subscription.
+
+ ```azurecli-interactive
+ az group create \
+ --name $resourceGroupName \
+ --location $location
+ ```
+
+1. Create a new ``.bicep`` file with the deployment template in the Bicep syntax.
+
+ :::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.documentdb/cosmosdb-sql-minimal/main.bicep":::
+
+1. Deploy the Azure Resource Manager (ARM) template with [``az deployment group create``](/cli/azure/deployment/group#az-deployment-group-create)
+specifying the filename using the **template-file** parameter and the name ``initial-bicep-deploy`` using the **name** parameter.
+
+ ```azurecli-interactive
+ az deployment group create \
+ --resource-group $resourceGroupName \
+ --name initial-bicep-deploy \
+ --template-file main.bicep \
+ --parameters accountName=$accountName
+ ```
+
+ > [!NOTE]
+ > In this example, we assume that the name of the Bicep file is **main.bicep**.
+
+1. Validate the deployment by showing metadata from the newly created account using [``az cosmosdb show``](/cli/azure/cosmosdb#az-cosmosdb-show).
+
+ ```azurecli-interactive
+ az cosmosdb show \
+ --resource-group $resourceGroupName \
+ --name $accountName
+ ```
+ ### Create a new .NET app
For more information about the hierarchy of different resources, see [working wi
You'll use the following .NET classes to interact with these resources: -- [``CosmosClient``](/dotnet/api/microsoft.azure.cosmos.cosmosclient) - This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.-- [``Database``](/dotnet/api/microsoft.azure.cosmos.database) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.-- [``Container``](/dotnet/api/microsoft.azure.cosmos.container) - This class is a reference to a container that also may not exist in the service yet. The container is validated server-side when you attempt to work with it.-- [``QueryDefinition``](/dotnet/api/microsoft.azure.cosmos.querydefinition) - This class represents a SQL query and any query parameters.-- [``FeedIterator<>``](/dotnet/api/microsoft.azure.cosmos.feediterator-1) - This class represents an iterator that can track the current page of results and get a new page of results.-- [``FeedResponse<>``](/dotnet/api/microsoft.azure.cosmos.feedresponse-1) - This class represents a single page of responses from the iterator. This type can be iterated over using a ``foreach`` loop.
+* [``CosmosClient``](/dotnet/api/microsoft.azure.cosmos.cosmosclient) - This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
+* [``Database``](/dotnet/api/microsoft.azure.cosmos.database) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
+* [``Container``](/dotnet/api/microsoft.azure.cosmos.container) - This class is a reference to a container that also may not exist in the service yet. The container is validated server-side when you attempt to work with it.
+* [``QueryDefinition``](/dotnet/api/microsoft.azure.cosmos.querydefinition) - This class represents a SQL query and any query parameters.
+* [``FeedIterator<>``](/dotnet/api/microsoft.azure.cosmos.feediterator-1) - This class represents an iterator that can track the current page of results and get a new page of results.
+* [``FeedResponse<>``](/dotnet/api/microsoft.azure.cosmos.feedresponse-1) - This class represents a single page of responses from the iterator. This type can be iterated over using a ``foreach`` loop.
## Code examples -- [Authenticate the client](#authenticate-the-client)-- [Create a database](#create-a-database)-- [Create a container](#create-a-container)-- [Create an item](#create-an-item)-- [Get an item](#get-an-item)-- [Query items](#query-items)
+* [Authenticate the client](#authenticate-the-client)
+* [Create a database](#create-a-database)
+* [Create a container](#create-a-container)
+* [Create an item](#create-an-item)
+* [Get an item](#get-an-item)
+* [Query items](#query-items)
-The sample code described in this article creates a database named ``adventureworks`` with a container named ``products``. The ``products`` database is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
+The sample code described in this article creates a database named ``adventureworks`` with a container named ``products``. The ``products`` container is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
For this sample code, the container will use the category as a logical partition key.
Created item: 68719518391 [gear-surf-surfboards]
When you no longer need the Azure Cosmos DB SQL API account, you can delete the corresponding resource group.
-#### [Azure CLI](#tab/azure-cli)
+### [Azure CLI / Resource Manager template](#tab/azure-cli+azure-resource-manager)
Use the [``az group delete``](/cli/azure/group#az-group-delete) command to delete the resource group.
Use the [``az group delete``](/cli/azure/group#az-group-delete) command to delet
az group delete --name $resourceGroupName ```
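Optionally, add the ``--yes`` flag to skip the confirmation prompt and ``--no-wait`` to return without waiting for the deletion to finish. A minimal sketch reusing the same variable:

```azurecli-interactive
# Delete the resource group without prompting and without waiting for completion
az group delete \
    --name $resourceGroupName \
    --yes \
    --no-wait
```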
-#### [PowerShell](#tab/azure-powershell)
+### [PowerShell](#tab/azure-powershell)
Use the [``Remove-AzResourceGroup``](/powershell/module/az.resources/remove-azresourcegroup) cmdlet to delete the resource group.
$parameters = @{
Remove-AzResourceGroup @parameters ```
-#### [Portal](#tab/azure-portal)
+### [Portal](#tab/azure-portal)
1. Navigate to the resource group you previously created in the Azure portal.
cost-management-billing Create Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-subscription.md
Use the following procedure to create a subscription for yourself or for someone
1. Select the **Advanced** tab. 1. Select your **Subscription directory**. It's the Azure Active Directory (Azure AD) where the new subscription will get created. 1. Select a **Management group**. It's the Azure AD management group that the new subscription is associated with. You can only select management groups in the current directory.
-1. Select more or more **Subscription owners**. You can select only users or service principals in the selected subscription directory. You can't select guest directory users. If you select a service principal, enter its App ID.
+1. Select one or more **Subscription owners**. You can select only users or service principals in the selected subscription directory. You can't select guest directory users. If you select a service principal, enter its App ID.
:::image type="content" source="./media/create-subscription/create-subscription-advanced-tab.png" alt-text="Screenshot showing the Advanced tab where you can specify the directory, management group, and owner. " lightbox="./media/create-subscription/create-subscription-advanced-tab.png" ::: 1. Select the **Tags** tab. 1. Enter tag pairs for **Name** and **Value**.
If you have questions or need help, [create a support request](https://go.micros
- [Add or change Azure subscription administrators](add-change-subscription-administrator.md) - [Move resources to new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md) - [Create management groups for resource organization and management](../../governance/management-groups/create-management-group-portal.md)-- [Cancel your Azure subscription](cancel-azure-subscription.md)
+- [Cancel your Azure subscription](cancel-azure-subscription.md)
data-factory Ci Cd Github Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ci-cd-github-troubleshoot-guide.md
Previously updated : 04/18/2022 Last updated : 06/12/2022 # Troubleshoot CI-CD, Azure DevOps, and GitHub issues in Azure Data Factory and Synapse Analytics
The token was obtained from the original tenant, but the service is in guest ten
You should use the token issued from guest tenant. For example, you have to assign the same Azure Active Directory to be your guest tenant and your DevOps, so it can correctly set token behavior and use the correct tenant.
-### Template parameters in the parameters file are not valid
+### Template parameters in the parameters file aren't valid
#### Issue
The following section isn't valid because the package.json folder isn't valid.
displayName: 'Validate' ``` It should have DataFactory included in customCommand like *'run build validate $(Build.Repository.LocalPath)/DataFactory/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/yourFactoryName'*. Make sure the generated YAML file for the higher stage has the required JSON artifacts.-
-### Git Repository or Microsoft Purview connection disconnected
-
-#### Issue
-When deploying a service instance, the git repository or purview connection is disconnected.
-
-#### Cause
-If you have **Include in ARM template** selected for deploying global parameters, your service instance is included in the ARM template. As a result, other properties will be removed upon deployment.
-
-#### Resolution
-Unselect **Include in ARM template** and deploy global parameters with PowerShell as described in Global parameters in CI/CD.
+
### Extra left "[" displayed in published JSON file
After some amount of time, new pipeline runs begin to succeed without any user a
#### Cause
-There are several scenarios which can trigger this behavior, all of which involve a new version of a dependent resource being called by the old version of the parent resource. For example, suppose an existing child pipeline called by ΓÇ£Execute pipelineΓÇ¥ is updated to have required parameters and the existing parent pipeline is updated to pass these parameters. If the deployment occurs during a parent pipeline execution, but before the **Execute Pipeline** activity, the old version of the pipeline will call the new version of the child pipeline, and the expected parameters will not be passed. This will cause the pipeline to fail with a *UserError*. This can also occur with other types of dependencies, such as if a breaking change is made to linked service during a pipeline run that references it.
+There are several scenarios that can trigger this behavior, all of which involve a new version of a dependent resource being called by the old version of the parent resource. For example, suppose an existing child pipeline called by "Execute pipeline" is updated to have required parameters and the existing parent pipeline is updated to pass these parameters. If the deployment occurs during a parent pipeline execution, but before the **Execute Pipeline** activity, the old version of the pipeline will call the new version of the child pipeline, and the expected parameters will not be passed. This will cause the pipeline to fail with a *UserError*. This can also occur with other types of dependencies, such as if a breaking change is made to a linked service during a pipeline run that references it.
#### Resolution
New runs of the parent pipeline will automatically begin succeeding, so typicall
Need to parameterize linked service integration run time #### Cause
-This feature is not supported.
+This feature isn't supported.
#### Resolution You have to select manually and set an integration runtime. You can use PowerShell API to change as well. This change can have downstream implications.
You have to select manually and set an integration runtime. You can use PowerShe
Changing Integration runtime name during CI/CD deployment. #### Cause
-Parameterizing an entity reference (Integration runtime in Linked service, Dataset in activity, Linked Service in dataset) is not supported. Changing the runtime name during deployment will cause the depended resource (Resource referencing the Integration runtime) to become malformed with invalid reference.
+Parameterizing an entity reference (Integration runtime in Linked service, Dataset in activity, Linked Service in dataset) isn't supported. Changing the runtime name during deployment will cause the depended resource (Resource referencing the Integration runtime) to become malformed with invalid reference.
#### Resolution Data Factory requires you to have the same name and type of integration runtime across all stages of CI/CD.
Data Factory requires you to have the same name and type of integration runtime
### ARM template deployment failing with error DataFactoryPropertyUpdateNotSupported ##### Issue
-ARM template deployment fails with an error such as DataFactoryPropertyUpdateNotSupported: Updating property type is not supported.
+ARM template deployment fails with an error such as DataFactoryPropertyUpdateNotSupported: Updating property type isn't supported.
##### Cause
-The ARM template deployment is attempting to change the type of an existing integration runtime. This is not allowed and will cause a deployment failure because data factory requires the same name and type of integration runtime across all stages of CI/CD.
+The ARM template deployment is attempting to change the type of an existing integration runtime. This isn't allowed and will cause a deployment failure because data factory requires the same name and type of integration runtime across all stages of CI/CD.
##### Resolution If you want to share integration runtimes across all stages, consider using a ternary factory just to contain the shared integration runtimes. You can use this shared factory in all of your environments as a linked integration runtime type. For more information, refer to [Continuous integration and delivery - Azure Data Factory](./continuous-integration-delivery.md#best-practices-for-cicd)
If you want to share integration runtimes across all stages, consider using a te
### GIT publish may fail because of PartialTempTemplates files #### Issue
-When you have 1000 s of old temporary ARM json files in PartialTemplates folder, publish may fail.
+When you have thousands of old temporary ARM JSON files in the PartialTemplates folder, publish may fail.
#### Cause On publish, ADF fetches every file inside each folder in the collaboration branch. In the past, publishing generated two folders in the publish branch: PartialArmTemplates and LinkedTemplates. PartialArmTemplates files are no longer generated. However, because there can be many old files (thousands) in the PartialArmTemplates folder, this may result in many requests being made to GitHub on publish and the rate limit being hit.
data-factory Concepts Data Flow Performance Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance-pipelines.md
If you execute your data flow activities in sequence, it is recommended that you
## Overloading a single data flow
-If you put all of your logic inside of a single data flow, the service will execute the entire job on a single Spark instance. While this may seem like a way to reduce costs, it mixes together different logical flows and can be difficult to monitor and debug. If one component fails, all other parts of the job will fail as well. Organizing data flows by independent flows of business logic is recommended. If your data flow becomes too large, splitting it into separates components will make monitoring and debugging easier. While there is no hard limit on the number of transformations in a data flow, having too many will make the job complex.
+If you put all of your logic inside of a single data flow, the service will execute the entire job on a single Spark instance. While this may seem like a way to reduce costs, it mixes together different logical flows and can be difficult to monitor and debug. If one component fails, all other parts of the job will fail as well. Organizing data flows by independent flows of business logic is recommended. If your data flow becomes too large, splitting it into separate components will make monitoring and debugging easier. While there is no hard limit on the number of transformations in a data flow, having too many will make the job complex.
## Execute sinks in parallel
data-factory Connector Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce.md
Previously updated : 05/26/2022 Last updated : 06/10/2022 # Copy data from and to Salesforce using Azure Data Factory or Azure Synapse Analytics
Specifically, this Salesforce connector supports:
- Copying data from and to Salesforce production, sandbox, and custom domain. >[!NOTE]
->This function supports copy of any schema from the above mentioned Salesforce environments, including the [Nonprofit Success Pack](https://www.salesforce.org/products/nonprofit-success-pack/) (NPSP). This allows you to bring your Salesforce nonprofit data into Azure, work with it in Azure data services, unify it with other data sets, and visualize it in Power BI for rapid insights.
+>This function supports copy of any schema from the above mentioned Salesforce environments, including the [Nonprofit Success Pack](https://www.salesforce.org/products/nonprofit-success-pack/) (NPSP).
The Salesforce connector is built on top of the Salesforce REST/Bulk API. When copying data from Salesforce, the connector automatically chooses between the REST and Bulk APIs based on the data size: when the result set is large, Bulk API is used for better performance. You can explicitly set the API version used to read/write data via the [`apiVersion` property](#linked-service-properties) in the linked service. When copying data to Salesforce, the connector uses BULK API v1.
data-factory Data Access Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-access-strategies.md
Last updated 01/26/2022
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-A vital security goal of an organization is to protect their data stores from random access over the internet, may it be an on-premise or a Cloud/ SaaS data store.
+A vital security goal for an organization is to protect its data stores from random access over the internet, whether it's an on-premises or a cloud/SaaS data store.
Typically a cloud data store controls access using the below mechanisms: * Private Link from a Virtual Network to Private Endpoint enabled data sources
Typically a cloud data store controls access using the below mechanisms:
> [!TIP] > With the [introduction of Static IP address range](./azure-integration-runtime-ip-addresses.md), you can now allow list IP ranges for the particular Azure integration runtime region to ensure you don't have to allow all Azure IP addresses in your cloud data stores. This way, you can restrict the IP addresses that are permitted to access the data stores.
-> [!NOTE]
-> The IP address ranges are blocked for Azure Integration Runtime and is currently only used for Data Movement, pipeline and external activities. Dataflows and Azure Integration Runtime that enable Managed Virtual Network now do not use these IP ranges.
+> [!NOTE]
+> The IP address ranges are blocked for Azure Integration Runtime and is currently only used for Data Movement, pipeline and external activities. Dataflows and Azure Integration Runtime that enable Managed Virtual Network now do not use these IP ranges.
-This should work in many scenarios, and we do understand that a unique Static IP address per integration runtime would be desirable, but this wouldn't be possible using Azure Integration Runtime currently, which is serverless. If necessary, you can always set up a Self-hosted Integration Runtime and use your Static IP with it.
+This should work in many scenarios, and we do understand that a unique Static IP address per integration runtime would be desirable, but this wouldn't be possible using Azure Integration Runtime currently, which is serverless. If necessary, you can always set up a Self-hosted Integration Runtime and use your Static IP with it.
## Data access strategies through Azure Data Factory * **[Private Link](../private-link/private-link-overview.md)** - You can create an Azure Integration Runtime within Azure Data Factory Managed Virtual Network and it will leverage private endpoints to securely connect to supported data stores. Traffic between Managed Virtual Network and data sources travels the Microsoft backbone network and is not exposed to the public network.
-* **[Trusted Service](../storage/common/storage-network-security.md#exceptions)** - Azure Storage (Blob, ADLS Gen2) supports firewall configuration that enables select trusted Azure platform services to access the storage account securely. Trusted Services enforces Managed Identity authentication, which ensures no other data factory can connect to this storage unless approved to do so using it's managed identity. You can find more details in **[this blog](https://techcommunity.microsoft.com/t5/azure-data-factory/data-factory-is-now-a-trusted-service-in-azure-storage-and-azure/ba-p/964993)**. Hence, this is extremely secure and recommended.
-* **Unique Static IP** - You will need to set up a self-hosted integration runtime to get a Static IP for Data Factory connectors. This mechanism ensures you can block access from all other IP addresses.
+* **[Trusted Service](../storage/common/storage-network-security.md#exceptions)** - Azure Storage (Blob, ADLS Gen2) supports firewall configuration that enables select trusted Azure platform services to access the storage account securely. Trusted Services enforces Managed Identity authentication, which ensures no other data factory can connect to this storage unless approved to do so using its managed identity. You can find more details in **[this blog](https://techcommunity.microsoft.com/t5/azure-data-factory/data-factory-is-now-a-trusted-service-in-azure-storage-and-azure/ba-p/964993)**. This approach is highly secure and recommended.
+* **Unique Static IP** - You will need to set up a self-hosted integration runtime to get a Static IP for Data Factory connectors. This mechanism ensures you can block access from all other IP addresses.
* **[Static IP range](./azure-integration-runtime-ip-addresses.md)** - You can use Azure Integration Runtime's IP addresses to allow list it in your storage (say S3, Salesforce, etc.). It certainly restricts IP addresses that can connect to the data stores but also relies on Authentication/ Authorization rules. * **[Service Tag](../virtual-network/service-tags-overview.md)** - A service tag represents a group of IP address prefixes from a given Azure service (like Azure Data Factory). Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to network security rules. It is useful when filtering data access on IaaS hosted data stores in Virtual Network.
-* **Allow Azure Services** - Some services lets you allow all Azure services to connect to it in case you choose this option.
+* **Allow Azure Services** - Some services let you allow all Azure services to connect to them if you choose this option.
-For more information about supported network security mechanisms on data stores in Azure Integration Runtime and Self-hosted Integration Runtime, see below two tables.
+For more information about supported network security mechanisms on data stores in Azure Integration Runtime and Self-hosted Integration Runtime, see the two tables below.
* **Azure Integration Runtime** | Data Stores | Supported Network Security Mechanism on Data Stores | Private Link | Trusted Service | Static IP range | Service Tags | Allow Azure Services |
For more information about supported network security mechanisms on data stores
| | Azure SQL DB, Azure Synapse Analytics), SQL Ml | Yes (only Azure SQL DB/DW) | - | Yes | - | Yes | | | Azure Key Vault (for fetching secrets/ connection string) | yes | Yes | Yes | - | - | | Other PaaS/ SaaS Data stores | AWS S3, SalesForce, Google Cloud Storage, etc. | - | - | Yes | - | - |
- | Azure laaS | SQL Server, Oracle, etc. | - | - | Yes | Yes | - |
- | On-premise laaS | SQL Server, Oracle, etc. | - | - | Yes | - | - |
-
- **Applicable only when Azure Data Explorer is virtual network injected, and IP range can be applied on NSG/ Firewall.*
+ | Azure IaaS | SQL Server, Oracle, etc. | - | - | Yes | Yes | - |
+ | On-premises IaaS | SQL Server, Oracle, etc. | - | - | Yes | - | - |
+
+ **Applicable only when Azure Data Explorer is virtual network injected, and IP range can be applied on NSG/ Firewall.*
+
+* **Self-hosted Integration Runtime (in VNet/on-premises)**
-* **Self-hosted Integration Runtime (in Vnet/on-premise)**
-
| Data Stores | Supported Network Security Mechanism on Data Stores | Static IP | Trusted Services | |--||--|| | Azure PaaS Data stores | Azure Cosmos DB | Yes | - |
For more information about supported network security mechanisms on data stores
| | Azure Key Vault (for fetching secrets/ connection string) | Yes | Yes | | Other PaaS/ SaaS Data stores | AWS S3, SalesForce, Google Cloud Storage, etc. | Yes | - | | Azure IaaS | SQL Server, Oracle, etc. | Yes | - |
- | On-premise laaS | SQL Server, Oracle, etc. | Yes | - |
+ | On-premises IaaS | SQL Server, Oracle, etc. | Yes | - |
## Next steps
data-factory Encrypt Credentials Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/encrypt-credentials-self-hosted-integration-runtime.md
Title: Encrypt credentials in Azure Data Factory
-description: Learn how to encrypt and store credentials for your on-premises data stores on a machine with self-hosted integration runtime.
+ Title: Encrypt credentials in Azure Data Factory
+description: Learn how to encrypt and store credentials for your on-premises data stores on a machine with self-hosted integration runtime.
Last updated 01/27/2022-+
You pass a JSON definition file with credentials to the <br/>[**New-AzDataFactor
## Create a linked service with encrypted credentials
-This example shows how to create a linked service to an on-premise SQL Server data source with encrypted credentials.
+This example shows how to create a linked service to an on-premises SQL Server data source with encrypted credentials.
### Create initial linked service JSON file description
-Create a JSON file named **SqlServerLinkedService.json** in any folder with the following content:
+Create a JSON file named **SqlServerLinkedService.json** in any folder with the following content:
-Replace `<servername>`, `<databasename>`, `<username>`, and `<password>` with values for your SQL Server before saving the file. And, replace `<integration runtime name>` with the name of your integration runtime.
+Replace `<servername>`, `<databasename>`, `<username>`, and `<password>` with values for your SQL Server before saving the file. And, replace `<integration runtime name>` with the name of your integration runtime.
```json {
New-AzDataFactoryV2LinkedServiceEncryptedCredential -DataFactoryName $dataFactor
Now, use the output JSON file from the previous command containing the encrypted credential to set up the **SqlServerLinkedService**. ```powershell
-Set-AzDataFactoryV2LinkedService -DataFactoryName $dataFactoryName -ResourceGroupName $ResourceGroupName -Name "EncryptedSqlServerLinkedService" -DefinitionFile ".\encryptedSqlServerLinkedService.json"
+Set-AzDataFactoryV2LinkedService -DataFactoryName $dataFactoryName -ResourceGroupName $ResourceGroupName -Name "EncryptedSqlServerLinkedService" -DefinitionFile ".\encryptedSqlServerLinkedService.json"
``` ## Next steps
data-factory Format Delta https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-delta.md
Previously updated : 01/26/2022 Last updated : 06/13/2022
The below table lists the properties supported by a delta sink. You can edit the
| Vacuum | Specify retention threshold in hours for older versions of table. A value of 0 or less defaults to 30 days | yes | Integer | vacuum | | Update method | Specify which update operations are allowed on the delta lake. For methods that aren't insert, a preceding alter row transformation is required to mark rows. | yes | `true` or `false` | deletable <br> insertable <br> updateable <br> merge | | Optimized Write | Achieve higher throughput for write operation via optimizing internal shuffle in Spark executors. As a result, you may notice fewer partitions and files that are of a larger size | no | `true` or `false` | optimizedWrite: true |
-| Auto Compact | After any write operation has completed, Spark will automatically execute the ```OPTIMIZE``` command to re-organize the data, resulting in more partitions if necessary, for better reading performance in the future | no | `true` or `false` | autoCompact: true |
+| Auto Compact | After any write operation has completed, Spark will automatically execute the ```OPTIMIZE``` command to re-organize the data, resulting in more partitions if necessary, for better reading performance in the future | no | `true` or `false` | autoCompact: true |
### Delta sink script example
moviesAltered sink(
folderPath: $tempPath + '/delta' ) ~> movieDB ```
+### Delta sink with partition pruning
+With this option under the Update method above (that is, update/upsert/delete), you can limit the number of partitions that are inspected. Only partitions that satisfy this condition are fetched from the target store. You can specify a fixed set of values that a partition column may take.
++
+### Delta sink script example with partition pruning
+
+A sample script is given below.
+
+```
+DerivedColumn1 sink(
+ input(movieId as integer,
+ title as string
+ ),
+ allowSchemaDrift: true,
+ validateSchema: false,
+ format: 'delta',
+ container: 'deltaContainer',
+ folderPath: 'deltaPath',
+ mergeSchema: false,
+ autoCompact: false,
+ optimizedWrite: false,
+ vacuum: 0,
+ deletable:false,
+ insertable:true,
+ updateable:true,
+ upsertable:false,
+ keys:['movieId'],
+ pruneCondition:['part_col' -> ([5, 8])],
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true) ~> sink2
+
+```
+Delta reads only the two partitions where **part_col** is 5 or 8 from the target delta store, instead of all partitions. *part_col* is a column that the target delta data is partitioned by. It need not be present in the source data.
+
+### Delta sink optimization options
+
+In the **Settings** tab, you'll find three more options to optimize the delta sink transformation.
+
+* When **Merge schema** option is enabled, any columns that are present in the previous stream, but not in the Delta table, are automatically added on to the end of the schema.
+
+* When **Auto compact** is enabled, after an individual write, the transformation checks whether files can be compacted further, and runs a quick OPTIMIZE job (with 128 MB file sizes instead of 1 GB) to further compact files for the partitions that have the greatest number of small files. Auto compaction helps coalesce a large number of small files into a smaller number of large files. Auto compaction only kicks in when there are at least 50 files. Once a compaction operation is performed, it creates a new version of the table and writes a new file containing the data of several previous files in a compact, compressed form.
+
+* When **Optimize write** is enabled, sink transformation dynamically optimizes partition sizes based on the actual data by attempting to write out 128 MB files for each table partition. This is an approximate size and can vary depending on dataset characteristics. Optimized writes improve the overall efficiency of the *writes and subsequent reads*. It organizes partitions such that the performance of subsequent reads will improve.
+ ### Known limitations
databox Data Box Deploy Export Ordered https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-export-ordered.md
# Tutorial: Create export order for Azure Data Box
-Azure Data Box is a hybrid solution that allows you to move data out of Azure into your location. This tutorial describes how to create an export order for Azure Data Box. The main reason to create an export order is for disaster recovery, in case on-premise storage gets compromised and a back-up needs to be restored.
+Azure Data Box is a hybrid solution that allows you to move data out of Azure to your location. This tutorial describes how to create an export order for Azure Data Box. The main reason to create an export order is for disaster recovery, in case on-premises storage gets compromised and a backup needs to be restored.
In this tutorial, you learn about:
Perform the following steps in the Azure portal to order a device.
![Security screen showing Encryption type settings](./media/data-box-deploy-export-ordered/customer-managed-key-01.png) 11. Select **Customer managed key** as the key type. Then select **Select a key vault and key**.
-
+ ![Security screen, settings for a customer-managed key](./media/data-box-deploy-export-ordered/customer-managed-key-02.png) 12. On the **Select key from Azure Key Vault** screen, the subscription is automatically populated.
Perform the following steps in the Azure portal to order a device.
15. Select a user identity that you'll use to manage access to this resource. Choose **Select a user identity**. In the panel on the right, select the subscription and the managed identity to use. Then choose **Select**.
- A user-assigned managed identity is a stand-alone Azure resource that can be used to manage multiple resources. For more information, see [Managed identity types](../active-directory/managed-identities-azure-resources/overview.md).
+ A user-assigned managed identity is a stand-alone Azure resource that can be used to manage multiple resources. For more information, see [Managed identity types](../active-directory/managed-identities-azure-resources/overview.md).
If you need to create a new managed identity, follow the guidance in [Create, list, delete, or assign a role to a user-assigned managed identity using the Azure portal](../../articles/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md).
-
+ ![Select a user identity](./media/data-box-deploy-export-ordered/customer-managed-key-10.png) The user identity is shown in **Encryption type** settings.
Perform the following steps in the Azure portal to order a device.
![A selected user identity shown in Encryption type settings](./media/data-box-deploy-export-ordered/customer-managed-key-11.png)
-16. If you want to enable software-based double encryption, expand **Double encryption (for high-security environments)**, and select **Enable double encryption for the order**.
+16. If you want to enable software-based double encryption, expand **Double encryption (for high-security environments)**, and select **Enable double encryption for the order**.
The software-based encryption is performed in addition to the AES-256 bit encryption of the data on the Data Box.
Follow these guidelines to create your XML file if you choose to select blobs an
### [Sample XML file](#tab/sample-xml-file)
-This sample XML file includes examples of each XML tag that is used to select blobs and files for export in a Data Box export order.
+This sample XML file includes examples of each XML tag that is used to select blobs and files for export in a Data Box export order.
- For a XML file requirements, go to the **XML file overview** tab. - For more examples of valid blob and file prefixes, go to the **Prefix examples** tab.
This sample XML file includes examples of each XML tag that is used to select bl
<BlobPathPrefix>/container</BlobPathPrefix> <!-- Exports all containers beginning with prefix: "container" --> <BlobPathPrefix>/container1/2021Q2</BlobPathPrefix> <!-- Exports all blobs in container1 with prefix: "2021Q2" --> </BlobList>
-
+ <!--AzureFileList selects individual files (FilePath) and multiple files (FilePathPrefix) in Azure File storage for export.--> <AzureFileList> <FilePath>/fileshare1/file.txt</FilePath> <!-- Exports /fileshare1/file.txt -->
Data Box copies data from the source storage account(s). Once the data copy is c
![Data Box export order, data copy complete](media/data-box-deploy-export-ordered/azure-data-box-export-order-data-copy-complete.png)
-The data export from Azure Storage to your Data Box can sometimes fail. Make sure that the blobs aren't archive blobs as export of these blobs is not supported.
+The data export from Azure Storage to your Data Box can sometimes fail. Make sure that the blobs aren't archive blobs, because exporting archive blobs isn't supported.
> [!NOTE] > For archive blobs, you need to rehydrate those blobs before they can be exported from the Azure Storage account to your Data Box. For more information, see [Rehydrate an archive blob](../storage/blobs/storage-blob-rehydration.md).
databox Data Box Disk Deploy Upload Verify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-deploy-upload-verify.md
Previously updated : 12/17/2021 Last updated : 05/05/2022 # Customer intent: As an IT admin, I need to be able to order Data Box Disk to upload on-premises data from my server onto Azure.
When Microsoft receives and scans the disk, job status is updated to **Received*
The data automatically gets copied once the disks are connected to a server in the Azure datacenter. Depending upon the data size, the copy operation may take a few hours to days to complete. You can monitor the copy job progress in the portal.
-Once the copy is complete, order status updates to **Completed**. The **DATA COPY DETAILS** show the path to the copy log, which reports any errors during the data copy.
+Once the copy is complete, order status updates to **Completed**. The **DATA COPY DETAILS** show the path to the copy log, which reports any errors during the data copy.
-![Screenshot of the Overview pane for a Data Box Disk import order in Copy Completed state. The Overview option, Copy Completed order status, and Copy Log Path are highlighted.](media/data-box-disk-deploy-picked-up/data-box-portal-completed.png)
+As of March 2022, you can choose **View by Storage Account(s)** or **View by Managed Disk(s)** to display the data copy details.
+
+[![Screenshot of the Data Copy Details.](media/data-box-disk-deploy-picked-up/data-box-portal-completed.png)](media/data-box-disk-deploy-picked-up/data-box-portal-completed-inline.png#lightbox)
+
+If you have an order from before March 2022, the data copy details will be shown as below:
+ If the copy completes with errors, see [troubleshoot upload errors](data-box-disk-troubleshoot-upload.md).
databox Data Box Export Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-export-logs.md
Title: Track and log Azure Data Box, Azure Data Box Heavy events for export order| Microsoft Docs
+ Title: Track and log Azure Data Box, Azure Data Box Heavy events for export order| Microsoft Docs
description: Describes how to track and log events at the various stages of your Azure Data Box and Azure Data Box Heavy export order.
You can control who can access your order when the order is first created. Set u
The two roles that can be defined for the Azure Data Box service are: - **Data Box Reader** - has read-only access to an order(s) as defined by the scope. They can only view details of an order. They can't access any other details related to storage accounts or edit the order details such as address and so on.-- **Data Box Contributor** - can only create an order to transfer data to a given storage account *if they already have write access to a storage account*. If they do not have access to a storage account, they can't even create a Data Box order to copy data to the account. This role does not define any Storage account related permissions nor grants access to storage accounts.
+- **Data Box Contributor** - can only create an order to transfer data to a given storage account *if they already have write access to a storage account*. If they do not have access to a storage account, they can't even create a Data Box order to copy data to the account. This role does not define any storage account-related permissions, nor does it grant access to storage accounts.
To restrict access to an order, you can:
You can track your order through the Azure portal and through the shipping carri
## Query activity logs during setup -- Your Data Box arrives on your premises in a locked state. You can use the device credentials available in the Azure portal for your order.
+- Your Data Box arrives on your premises in a locked state. You can use the device credentials available in the Azure portal for your order.
    When a Data Box is set up, you may need to know who accessed the device credentials. To figure out who accessed the **Device credentials** blade, you can query the Activity logs. Any action that involves accessing the **Device details > Credentials** blade is logged in the activity logs as a `ListCredentials` action.
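
    One way to query the activity log from the command line is with [``az monitor activity-log list``](/cli/azure/monitor/activity-log#az-monitor-activity-log-list) and a JMESPath filter. The following is a minimal, optional sketch; ``<databox-order-resource-group>`` is a placeholder for the resource group that contains your Data Box order, and the exact operation name recorded may vary.

    ```azurecli-interactive
    # List activity log entries from the last 30 days whose operation name contains 'ListCredentials'
    az monitor activity-log list \
        --resource-group <databox-order-resource-group> \
        --offset 30d \
        --query "[?contains(operationName.value, 'ListCredentials')].{Caller:caller, Time:eventTimestamp, Operation:operationName.value}" \
        --output table
    ```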
You can track your order through the Azure portal and through the shipping carri
## View logs during data copy
-Before you copy data from your Data Box, you can download and review *copy log* and *verbose log* for the data that was copied to the Data Box. These logs are generated when the data is copied from your Storage account in Azure to your Data Box.
+Before you copy data from your Data Box, you can download and review *copy log* and *verbose log* for the data that was copied to the Data Box. These logs are generated when the data is copied from your Storage account in Azure to your Data Box.
### Copy log
Here is a sample output of *copy log* when there were no errors and all the file
<TotalFiles_Blobs>5521</TotalFiles_Blobs> <FilesErrored>0</FilesErrored> </CopyLog>
-```
-
+```
+ Here is a sample output when the *copy log* has errors and some of the files failed to copy from Azure. ```output
Here is a sample output when the *copy log* has errors and some of the files fai
<Status>Failed</Status> <TotalFiles_Blobs>4</TotalFiles_Blobs> <FilesErrored>3</FilesErrored>
-</CopyLog>
+</CopyLog>
```
-You have the following options to export those files:
+You have the following options to export those files:
-- You can transfer the files that could not be copied over the network.
+- You can transfer the files that could not be copied over the network.
- If your data size was larger than the usable device capacity, then a partial copy occurs and all the files that were not copied are listed in this log. You can use this log as an input XML to create a new Data Box order and then copy over these files. ### Verbose log
The verbose log has the information in the following format:
`<file size = "file-size-in-bytes" crc64="cyclic-redundancy-check-string">\folder-path-on-data-box\name-of-file-copied.md</file>`
-Here is a sample output of the verbose log.
+Here is a sample output of the verbose log.
```powershell <File CloudFormat="BlockBlob" Path="validblobdata/test1.2.3.4" Size="1024" crc64="7573843669953104266">
The copy log path is also displayed on the **Overview** blade for the portal.
<!-- add a screenshot-->
-You can use these logs to verify that files copied from Azure match the data that was copied to your on-premises server.
+You can use these logs to verify that files copied from Azure match the data that was copied to your on-premises server.
Use your verbose log file: - To verify against the actual names and the number of files that were copied from the Data Box. - To verify against the actual sizes of the files.-- To verify that the *crc64* corresponds to a non-zero string. A Cyclic Redundancy Check (CRC) computation is done during the export from Azure. The CRCs from the export and after the data is copied from Data Box to on-premise server can be compared. A CRC mismatch indicates that the corresponding files failed to copy properly.
+- To verify that the *crc64* corresponds to a non-zero string. A Cyclic Redundancy Check (CRC) computation is done during the export from Azure. The CRCs from the export and after the data is copied from Data Box to on-premises server can be compared. A CRC mismatch indicates that the corresponding files failed to copy properly.
## Get chain of custody logs after data erasure
New Logon:
Logon GUID: {00000000-0000-0000-0000-000000000000} Process Information: Process ID: 0x4
- Process Name:
+ Process Name:
Network Information: Workstation Name: - Source Network Address: -
Detailed Authentication Information:
Transited Package Name (NTLM only): - Key Length: 0
-This event is generated when a logon session is created. It is generated on the computer that was accessed.
-The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.
+This event is generated when a logon session is created. It is generated on the computer that was accessed.
+The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.
The logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network). The New Logon fields indicate the account for whom the new logon was created, i.e. the account that was logged on. The network fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases.
Here is a sample of the order history log from Azure portal:
- Microsoft Data Box Order Report -
-Name : gus-poland
-StartTime(UTC) : 9/19/2018 8:49:23 AM +00:00
-DeviceType : DataBox
+Name : gus-poland
+StartTime(UTC) : 9/19/2018 8:49:23 AM +00:00
+DeviceType : DataBox
- Data Box Activities -
Time(UTC) | Activity | Status | D
9/19/2018 8:49:26 AM | OrderCreated | Completed | 10/2/2018 7:32:53 AM | DevicePrepared | Completed |
-10/3/2018 1:36:43 PM | ShippingToCustomer | InProgress | Shipment picked up. Local Time : 10/3/2018 1:36:43 PM at AMSTERDAM-NLD
-10/4/2018 8:23:30 PM | ShippingToCustomer | InProgress | Processed at AMSTERDAM-NLD. Local Time : 10/4/2018 8:23:30 PM at AMSTERDAM-NLD
+10/3/2018 1:36:43 PM | ShippingToCustomer | InProgress | Shipment picked up. Local Time : 10/3/2018 1:36:43 PM at AMSTERDAM-NLD
+10/4/2018 8:23:30 PM | ShippingToCustomer | InProgress | Processed at AMSTERDAM-NLD. Local Time : 10/4/2018 8:23:30 PM at AMSTERDAM-NLD
10/4/2018 11:43:34 PM | ShippingToCustomer | InProgress | Departed Facility in AMSTERDAM-NLD. Local Time : 10/4/2018 11:43:34 PM at AMSTERDAM-NLD
-10/5/2018 8:13:49 AM | ShippingToCustomer | InProgress | Arrived at Delivery Facility in BRIGHTON-GBR. Local Time : 10/5/2018 8:13:49 AM at LAMBETH-GBR
-10/5/2018 9:13:24 AM | ShippingToCustomer | InProgress | With delivery courier. Local Time : 10/5/2018 9:13:24 AM at BRIGHTON-GBR
-10/5/2018 12:03:04 PM | ShippingToCustomer | Completed | Delivered - Signed for by. Local Time : 10/5/2018 12:03:04 PM at BRIGHTON-GBR
-1/25/2019 3:19:25 PM | ShippingToDataCenter | InProgress | Shipment picked up. Local Time : 1/25/2019 3:19:25 PM at BRIGHTON-GBR
-1/25/2019 8:03:55 PM | ShippingToDataCenter | InProgress | Processed at BRIGHTON-GBR. Local Time : 1/25/2019 8:03:55 PM at LAMBETH-GBR
-1/25/2019 8:04:58 PM | ShippingToDataCenter | InProgress | Departed Facility in BRIGHTON-GBR. Local Time : 1/25/2019 8:04:58 PM at BRIGHTON-GBR
-1/25/2019 9:06:09 PM | ShippingToDataCenter | InProgress | Arrived at Sort Facility LONDON-HEATHROW-GBR. Local Time : 1/25/2019 9:06:09 PM at LONDON-HEATHROW-GBR
-1/25/2019 9:48:54 PM | ShippingToDataCenter | InProgress | Processed at LONDON-HEATHROW-GBR. Local Time : 1/25/2019 9:48:54 PM at LONDON-HEATHROW-GBR
+10/5/2018 8:13:49 AM | ShippingToCustomer | InProgress | Arrived at Delivery Facility in BRIGHTON-GBR. Local Time : 10/5/2018 8:13:49 AM at LAMBETH-GBR
+10/5/2018 9:13:24 AM | ShippingToCustomer | InProgress | With delivery courier. Local Time : 10/5/2018 9:13:24 AM at BRIGHTON-GBR
+10/5/2018 12:03:04 PM | ShippingToCustomer | Completed | Delivered - Signed for by. Local Time : 10/5/2018 12:03:04 PM at BRIGHTON-GBR
+1/25/2019 3:19:25 PM | ShippingToDataCenter | InProgress | Shipment picked up. Local Time : 1/25/2019 3:19:25 PM at BRIGHTON-GBR
+1/25/2019 8:03:55 PM | ShippingToDataCenter | InProgress | Processed at BRIGHTON-GBR. Local Time : 1/25/2019 8:03:55 PM at LAMBETH-GBR
+1/25/2019 8:04:58 PM | ShippingToDataCenter | InProgress | Departed Facility in BRIGHTON-GBR. Local Time : 1/25/2019 8:04:58 PM at BRIGHTON-GBR
+1/25/2019 9:06:09 PM | ShippingToDataCenter | InProgress | Arrived at Sort Facility LONDON-HEATHROW-GBR. Local Time : 1/25/2019 9:06:09 PM at LONDON-HEATHROW-GBR
+1/25/2019 9:48:54 PM | ShippingToDataCenter | InProgress | Processed at LONDON-HEATHROW-GBR. Local Time : 1/25/2019 9:48:54 PM at LONDON-HEATHROW-GBR
1/25/2019 10:30:20 PM | ShippingToDataCenter | InProgress | Departed Facility in LONDON-HEATHROW-GBR. Local Time : 1/25/2019 10:30:20 PM at LONDON-HEATHROW-GBR
-1/28/2019 7:11:35 AM | ShippingToDataCenter | InProgress | Arrived at Delivery Facility in AMSTERDAM-NLD. Local Time : 1/28/2019 7:11:35 AM at AMSTERDAM-NLD
-1/28/2019 9:07:57 AM | ShippingToDataCenter | InProgress | With delivery courier. Local Time : 1/28/2019 9:07:57 AM at AMSTERDAM-NLD
-1/28/2019 1:35:56 PM | ShippingToDataCenter | InProgress | Scheduled for delivery. Local Time : 1/28/2019 1:35:56 PM at AMSTERDAM-NLD
+1/28/2019 7:11:35 AM | ShippingToDataCenter | InProgress | Arrived at Delivery Facility in AMSTERDAM-NLD. Local Time : 1/28/2019 7:11:35 AM at AMSTERDAM-NLD
+1/28/2019 9:07:57 AM | ShippingToDataCenter | InProgress | With delivery courier. Local Time : 1/28/2019 9:07:57 AM at AMSTERDAM-NLD
+1/28/2019 1:35:56 PM | ShippingToDataCenter | InProgress | Scheduled for delivery. Local Time : 1/28/2019 1:35:56 PM at AMSTERDAM-NLD
1/28/2019 2:57:48 PM | ShippingToDataCenter | Completed | Delivered - Signed for by. Local Time : 1/28/2019 2:57:48 PM at AMSTERDAM-NLD 1/29/2019 2:18:43 PM | PhysicalVerification | Completed | 1/29/2019 3:49:50 PM | DeviceBoot | Completed | Appliance booted up successfully.
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
If you've [configured automations](workflow-automation.md) or defined [alert sup
Access from a Tor exit node might indicate a threat actor trying to hide their identity.
-The alert is now tuned to generate only for authenticated access, which results in higher accuracy and confidence that the activity is malicious. This enhancement reduces the benign positive rate.
+The alert is now tuned to generate only for authenticated access, which results in higher accuracy and confidence that the activity is malicious. This enhancement reduces the benign positive rate.
An outlying pattern will have high severity, while less anomalous patterns will have medium severity.
Learn more about how to [Explore and manage your resources with asset inventory]
Updates in January include: - [Azure Security Benchmark is now the default policy initiative for Azure Security Center](#azure-security-benchmark-is-now-the-default-policy-initiative-for-azure-security-center)-- [Vulnerability assessment for on-premise and multicloud machines is released for general availability (GA)](#vulnerability-assessment-for-on-premise-and-multicloud-machines-is-released-for-general-availability-ga)
+- [Vulnerability assessment for on-premises and multicloud machines is released for general availability (GA)](#vulnerability-assessment-for-on-premises-and-multicloud-machines-is-released-for-general-availability-ga)
- [Secure score for management groups is now available in preview](#secure-score-for-management-groups-is-now-available-in-preview) - [Secure score API is released for general availability (GA)](#secure-score-api-is-released-for-general-availability-ga) - [Dangling DNS protections added to Azure Defender for App Service](#dangling-dns-protections-added-to-azure-defender-for-app-service)
To learn more, see the following pages:
- [Learn more about Azure Security Benchmark](/security/benchmark/azure/introduction) - [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md)
-### Vulnerability assessment for on-premise and multicloud machines is released for general availability (GA)
+### Vulnerability assessment for on-premises and multicloud machines is released for general availability (GA)
In October, we announced a preview for scanning Azure Arc-enabled servers with [Azure Defender for Servers](defender-for-servers-introduction.md)' integrated vulnerability assessment scanner (powered by Qualys).
The Security Center experience within SQL provides access to the following Secur
- **Security recommendations** - Security Center periodically analyzes the security state of all connected Azure resources to identify potential security misconfigurations. It then provides recommendations on how to remediate those vulnerabilities and improve organizations' security posture. - **Security alerts** - a detection service that continuously monitors Azure SQL activities for threats such as SQL injection, brute-force attacks, and privilege abuse. This service triggers detailed and action-oriented security alerts in Security Center and provides options for continuing investigations with Azure Sentinel, Microsoft's Azure-native SIEM solution.-- **Findings** - a vulnerability assessment service that continuously monitors Azure SQL configurations and helps remediate vulnerabilities. Assessment scans provide an overview of Azure SQL security states together with detailed security findings.
+- **Findings** - a vulnerability assessment service that continuously monitors Azure SQL configurations and helps remediate vulnerabilities. Assessment scans provide an overview of Azure SQL security states together with detailed security findings.
:::image type="content" source="media/release-notes/microsoft-defender-for-cloud-experience-in-sql.png" alt-text="Azure Security Center's security features for SQL are available from within Azure SQL":::
You can now see whether or not your subscriptions have the default Security Cent
Updates in October include: -- [Vulnerability assessment for on-premise and multicloud machines (preview)](#vulnerability-assessment-for-on-premise-and-multicloud-machines-preview)
+- [Vulnerability assessment for on-premises and multicloud machines (preview)](#vulnerability-assessment-for-on-premises-and-multicloud-machines-preview)
- [Azure Firewall recommendation added (preview)](#azure-firewall-recommendation-added-preview) - [Authorized IP ranges should be defined on Kubernetes Services recommendation updated with quick fix](#authorized-ip-ranges-should-be-defined-on-kubernetes-services-recommendation-updated-with-quick-fix) - [Regulatory compliance dashboard now includes option to remove standards](#regulatory-compliance-dashboard-now-includes-option-to-remove-standards) - [Microsoft.Security/securityStatuses table removed from Azure Resource Graph (ARG)](#microsoftsecuritysecuritystatuses-table-removed-from-azure-resource-graph-arg)
-### Vulnerability assessment for on-premise and multicloud machines (preview)
+### Vulnerability assessment for on-premises and multicloud machines (preview)
[Azure Defender for Servers](defender-for-servers-introduction.md)' integrated vulnerability assessment scanner (powered by Qualys) now scans Azure Arc-enabled servers.
properties: {
Query that references SecurityStatuses: ```kusto
-SecurityResources
+SecurityResources
| where type == 'microsoft.security/securitystatuses' and properties.type == 'virtualMachine'
-| where name in ({vmnames})
+| where name in ({vmnames})
| project name, resourceGroup, policyAssesments = properties.policyAssessments, resourceRegion = location, id, resourceDetails = properties.resourceDetails ```
source =~ "aws", properties.additionalData.AzureResourceId,
source =~ "gcp", properties.additionalData.AzureResourceId, extract("^(.+)/providers/Microsoft.Security/assessments/.+$",1,id))))) | extend resourceGroup = tolower(tostring(split(resourceId, "/")[4]))
-| where resourceName in ({vmnames})
+| where resourceName in ({vmnames})
| project resourceName, resourceGroup, resourceRegion = location, id, resourceDetails = properties.additionalData ```
Custom policies are now part of the Security Center recommendations experience,
Create a custom initiative in Azure Policy, add policies to it and onboard it to Azure Security Center, and visualize it as recommendations.
-We've now also added the option to edit the custom recommendation metadata. Metadata options include severity, remediation steps, threats information, and more.
+We've now also added the option to edit the custom recommendation metadata. Metadata options include severity, remediation steps, threats information, and more.
Learn more about [enhancing your custom recommendations with detailed information](custom-security-policies.md#enhance-your-custom-recommendations-with-detailed-information).
Now, you can add standards such as:
- **Canada Federal PBMM** - **Azure CIS 1.1.0 (new)** (which is a more complete representation of Azure CIS 1.1.0)
-In addition, we've recently added the [Azure Security Benchmark](/security/benchmark/azure/introduction), the Microsoft-authored Azure-specific guidelines for security and compliance best practices based on common compliance frameworks. Additional standards will be supported in the dashboard as they become available.
+In addition, we've recently added the [Azure Security Benchmark](/security/benchmark/azure/introduction), the Microsoft-authored Azure-specific guidelines for security and compliance best practices based on common compliance frameworks. Additional standards will be supported in the dashboard as they become available.
Learn more about [customizing the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
defender-for-cloud Supported Machines Endpoint Solutions Clouds Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-servers.md
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| [Virtual machine behavioral analytics (and security alerts)](alerts-reference.md) | ✔ | ✔ | | [Fileless security alerts](alerts-reference.md#alerts-windows) | ✔ | ✔ | | [Network-based security alerts](other-threat-protections.md#network-layer) | - | - |
-| [Just-in-time VM access](just-in-time-access-usage.md) | - | - |
+| [Just-in-time VM access](just-in-time-access-usage.md) | ✔ (Preview) | - |
| [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | ✔ | ✔ | | [File integrity monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ | | [Adaptive application controls](adaptive-application-controls.md) | ✔ | ✔ |
defender-for-iot How To Troubleshoot The Sensor And On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md
When signing into a preconfigured sensor for the first time, you'll need to perf
1. Select either **CyberX** or **Support**, and copy the unique identifier.
-1. Navigate to the Azure portal and select **Sites and Sensors**.
+1. Navigate to the Azure portal and select **Sites and Sensors**.
1. Select the **More Actions** drop down menu and select **Recover on-premises management console password**.
When signing into a preconfigured sensor for the first time, you'll need to perf
1. Select **Next**, and your user and system-generated password for your management console will then appear. > [!NOTE]
- > When you sign in to a sensor or on-premise management console for the first time it will be linked to the subscription you connected it to. If you need to reset the password for the CyberX, or Support user you will need to select that subscription. For more information on recovering a CyberX, or Support user password, see [Recover the password for the on-premises management console, or the sensor](how-to-create-and-manage-users.md#recover-the-password-for-the-on-premises-management-console-or-the-sensor).
+ > When you sign in to a sensor or on-premises management console for the first time, it will be linked to the subscription you connected it to. If you need to reset the password for the CyberX or Support user, you will need to select that subscription. For more information on recovering a CyberX or Support user password, see [Recover the password for the on-premises management console, or the sensor](how-to-create-and-manage-users.md#recover-the-password-for-the-on-premises-management-console-or-the-sensor).
### Investigate a lack of traffic
-An indicator appears at the top of the console when the sensor recognizes that there's no traffic on one of the configured ports. This indicator is visible to all users. When this message appears, you can investigate where there's no traffic. Make sure the span cable is connected and there was no change in the span architecture.
+An indicator appears at the top of the console when the sensor recognizes that there's no traffic on one of the configured ports. This indicator is visible to all users. When this message appears, you can investigate where there's no traffic. Make sure the span cable is connected and there was no change in the span architecture.
### Check system performance
When a new sensor is deployed or a sensor is working slowly or not showing any a
1. In the Defender for IoT dashboard > **Overview**, make sure that `PPS > 0`. 1. In **Devices**, check that devices are being discovered.
-1. In **Data Mining**, generate a report.
+1. In **Data Mining**, generate a report.
1. In the **Trends & Statistics** window, create a dashboard. 1. In **Alerts**, check that the alert was created.
-### Investigate a lack of expected alerts
+### Investigate a lack of expected alerts
If the **Alerts** window doesn't show an alert that you expected, verify the following:
To connect a sensor controlled by the management console to NTP:
Sometimes ICS devices are configured with external IP addresses. These ICS devices are not shown on the map. Instead of the devices, an internet cloud appears on the map. The IP addresses of these devices are included in the cloud image. Another indication of the same problem is when multiple internet-related alerts appear. Fix the issue as follows:
-1. Right-click the cloud icon on the device map and select **Export IP Addresses**.
+1. Right-click the cloud icon on the device map and select **Export IP Addresses**.
1. Copy the public ranges that are private, and add them to the subnet list. Learn more about [configuring subnets](how-to-control-what-traffic-is-monitored.md#configure-subnets). 1. Generate a new data-mining report for internet connections. 1. In the data-mining report, enter the administrator mode and delete the IP addresses of your ICS devices.
If an expected alert is not shown in the **Alerts** window, verify the following
- Check if the same alert already appears in the **Alerts** window as a reaction to a different security instance. If yes, and this alert has not been handled yet, a new alert is not shown. -- Verify that you did not exclude this alert by using the **Alert Exclusion** rules in the on-premises management console.
+- Verify that you did not exclude this alert by using the **Alert Exclusion** rules in the on-premises management console.
### Tweak the Quality of Service (QoS)

To save your network resources, you can limit the number of alerts sent to external systems (such as emails or SIEM) in one sync operation between an appliance and the on-premises management console.
-The default is 50. This means that in one communication session between an appliance and the on-premises management console, there will be no more than 50 alerts to external systems.
+The default is 50. This means that in one communication session between an appliance and the on-premises management console, there will be no more than 50 alerts to external systems.
To limit the number of alerts, use the `notifications.max_number_to_report` property available in `/var/cyberx/properties/management.properties`. No restart is needed after you change this property.
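For example, you can inspect and change the property directly on the appliance. The following shell sketch assumes the property line already exists in the file and that you have sudo access; the value of 100 is only an illustration:

```bash
# Check the current per-sync alert limit (the default is 50).
grep notifications.max_number_to_report /var/cyberx/properties/management.properties

# Raise the limit to 100. No restart is needed after the change.
sudo sed -i 's/^notifications.max_number_to_report=.*/notifications.max_number_to_report=100/' \
    /var/cyberx/properties/management.properties
```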
digital-twins Quickstart 3D Scenes Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/quickstart-3d-scenes-studio.md
The scene will look like this:
You'll need an Azure subscription to complete this quickstart. If you don't have one already, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) now.
-You'll also need to download a sample 3D file to use for the scene in this quickstart. [Select this link to download RobotArms.glb](https://cardboardresources.blob.core.windows.net/public/RobotArms.glb).
+You'll also need to download a sample glTF (Graphics Language Transmission Format) 3D file to use for the scene in this quickstart. [Select this link to download RobotArms.glb](https://cardboardresources.blob.core.windows.net/public/RobotArms.glb).
## Set up Azure Digital Twins and sample data
You may also want to delete the downloaded sample 3D file from your local machin
Next, continue on to the Azure Digital Twins tutorials to build out your own Azure Digital Twins environment. > [!div class="nextstepaction"]
-> [Code a client app](tutorial-code.md)
+> [Code a client app](tutorial-code.md)
dms Migrate Mysql To Azure Mysql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migrate-mysql-to-azure-mysql-powershell.md
Title: "PowerShell: Run offline migration from MySQL database to Azure Database for MySQL using DMS"
-description: Learn to migrate an on-premise MySQL database to Azure Database for MySQL by using Azure Database Migration Service through PowerShell script.
+description: Learn to migrate an on-premises MySQL database to Azure Database for MySQL by using Azure Database Migration Service through PowerShell script.
event-grid Partner Events Overview For Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview-for-partners.md
Last updated 04/28/2022
-# Partner Events overview for partners - Azure Event Grid (preview)
+# Partner Events overview for partners - Azure Event Grid
Event Grid's **Partner Events** allows customers to **subscribe to events** that originate in a registered system using the same mechanism they would use for any other event source on Azure, such as an Azure service. Those registered systems that integrate with Event Grid are known as "partners". This feature also enables customers to **send events** to partner systems that support receiving and routing events to customers' solutions/endpoints in their platform. Typically, partners are software-as-a-service (SaaS) or [ERP](https://en.wikipedia.org/wiki/Enterprise_resource_planning) providers, but they might be corporate platforms wishing to make their events available to internal teams. They purposely integrate with Event Grid to realize end-to-end customer use cases that end on Azure (customers subscribe to events sent by a partner) or end on a partner system (customers subscribe to Microsoft events sent by Azure Event Grid). Customers bank on Azure Event Grid to send events published by a partner to supported destinations such as webhooks, Azure Functions, Azure Event Hubs, or Azure Service Bus, to name a few. Customers also rely on Azure Event Grid to route events that originate in Microsoft services, such as Azure Storage, Outlook, Teams, or Azure AD, to partner systems where customers' solutions can react to them. With Partner Events, customers can build event-driven solutions across platforms and network boundaries to receive or send events reliably, securely, and at scale. > [!NOTE]
Registrations are global. That is, they aren't associated with a particular Azur
### Channel A Channel is a nested resource to a Partner Namespace. A channel has two main purposes:
- - It's the resource type that allows you to create partner resources on a customer's Azure subscription. When you create a channel of type `partner topic`, a partner topic is created on a customer's Azure subscription. A partner topic is the customer's resource where events from a partner system. Similarly, when a channel of type `partner destination` is created, a partner destination is created on a customer's Azure subscription. Partner destinations are resources that represent a partner system endpoint to where events are delivered. A channel is the kind of resource, along with partner topics and partner destinations, that enable bi-directional event integration.
+ - It's the resource type that allows you to create partner resources on a customer's Azure subscription. When you create a channel of type `partner topic`, a partner topic is created on a customer's Azure subscription. A partner topic is the customer's resource where events from a partner system are delivered. Similarly, when a channel of type `partner destination` is created, a partner destination is created on a customer's Azure subscription. Partner destinations are resources that represent a partner system endpoint to which events are delivered. A channel along with partner topics and partner destinations enables bi-directional event integration.
A channel has the same lifecycle as its associated customer partner topic or destination. When a channel of type `partner topic` is deleted, for example, the associated customer's partner topic is deleted. Similarly, if the partner topic is deleted by the customer, the associated channel on your Azure subscription is deleted. - It's a resource that is used to route events. A channel of type ``partner topic`` is used to route events to a customer's partner topic. It supports two types of routing modes.
You have two options:
## References
- * [Swagger](https://github.com/ahamad-MS/azure-rest-api-specs/blob/master/specification/eventgrid/resource-manager/Microsoft.EventGrid/preview/2020-04-01-preview/EventGrid.json)
+ * [Swagger](https://github.com/ahamad-MS/azure-rest-api-specs/blob/main/specification/eventgrid/resource-manager/Microsoft.EventGrid/stable/2022-06-15/EventGrid.json)
* [ARM template](/azure/templates/microsoft.eventgrid/allversions)
- * [ARM template schema](https://github.com/Azure/azure-resource-manager-schemas/blob/master/schemas/2020-04-01-preview/Microsoft.EventGrid.json)
- * [REST APIs](/azure/templates/microsoft.eventgrid/2020-04-01-preview/partnernamespaces)
+ * [ARM template schema](https://github.com/Azure/azure-resource-manager-schemas/blob/main/schemas/2022-06-15/Microsoft.EventGrid.json)
+ * [REST APIs](/rest/api/eventgrid/controlplane-version2021-10-15-preview/partner-namespaces)
* [CLI extension](/cli/azure/eventgrid)

### SDKs

* [.NET](https://www.nuget.org/packages/Microsoft.Azure.Management.EventGrid/5.3.1-preview)
- * [Python](https://pypi.org/project/azure-mgmt-eventgrid/3.0.0rc6/)
- * [Java](https://search.maven.org/artifact/com.microsoft.azure.eventgrid.v2020_04_01_preview/azure-mgmt-eventgrid/1.0.0-beta-3/jar)
- * [Ruby](https://rubygems.org/gems/azure_mgmt_event_grid/versions/0.19.0)
- * [JS](https://www.npmjs.com/package/@azure/arm-eventgrid/v/7.0.0)
+ * [Python](https://pypi.org/project/azure-mgmt-eventgrid/)
+ * [Java](https://search.maven.org/search?q=azure-mgmt-eventgrid)
+ * [Ruby](https://rubygems.org/gems/azure_mgmt_event_grid/)
+ * [JS](https://www.npmjs.com/package/@azure/arm-eventgrid)
* [Go](https://github.com/Azure/azure-sdk-for-go)
event-grid Partner Events Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview.md
Last updated 03/31/2022
-# Partner Events overview for customers - Azure Event Grid (preview)
+# Partner Events overview for customers - Azure Event Grid
Azure Event Grid's **Partner Events** allows customers to **subscribe to events** that originate in a registered system using the same mechanism they would use for any other event source on Azure, such as an Azure service. Those registered systems that integrate with Event Grid are known as "partners". This feature also enables customers to **send events** to partner systems that support receiving and routing events to customers' solutions/endpoints in their platform. Typically, partners are software-as-a-service (SaaS) or [ERP](https://en.wikipedia.org/wiki/Enterprise_resource_planning) providers, but they might be corporate platforms wishing to make their events available to internal teams.
event-hubs Event Hubs Auto Inflate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-auto-inflate.md
Title: Automatically scale up throughput units in Azure Event Hubs
description: Enable Auto-inflate on a namespace to automatically scale up throughput units (standard tier).
Previously updated : 05/26/2021 Last updated : 06/13/2022
# Automatically scale up Azure Event Hubs throughput units (standard tier)
For a premium Event Hubs namespace, the feature is automatically enabled. You ca
## Use Azure portal

In the Azure portal, you can enable the feature when creating a standard Event Hubs namespace or after the namespace is created. You can also set TUs for the namespace and specify the maximum limit of TUs.
-You can enable the Auto-inflate feature **when creating an Event Hubs namespace**. The follow image shows you how to enable the auto-inflate feature for a standard tier namespace and configure TUs to start with and the maximum number of TUs.
+You can enable the Auto-inflate feature **when creating an Event Hubs namespace**. The following image shows you how to enable the auto-inflate feature for a standard tier namespace and configure TUs to start with and the maximum number of TUs.
:::image type="content" source="./media/event-hubs-auto-inflate/event-hubs-auto-inflate.png" alt-text="Screenshot of enabling auto inflate at the time event hub creation for a standard tier namespace"::: With this option enabled, you can start small with your TUs and scale up as your usage needs increase. The upper limit for inflation doesn't immediately affect pricing, which depends on the number of TUs used per hour.
-To enable the Auto-inflate feature and modify its settings for an existing, follow these steps:
+To enable the Auto-inflate feature and modify its settings for an existing namespace, follow these steps:
1. On the **Event Hubs Namespace** page, select **Scale** under **Settings** on the left menu.
2. In the **Scale Settings** page, select the checkbox for **Enable** (if the autoscale feature wasn't enabled).
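The same change can be made without the portal. The following Azure CLI sketch assumes an existing namespace named `mynamespace` in the resource group `myresourcegroup`; both names are placeholders:

```azurecli
az eventhubs namespace update \
    --resource-group myresourcegroup \
    --name mynamespace \
    --enable-auto-inflate true \
    --maximum-throughput-units 10
```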
To enable the Auto-inflate feature and modify its settings for an existing, foll
## Use an Azure Resource Manager template
-You can enable Auto-inflate during an Azure Resource Manager template deployment. For example, set the
+You can enable the Auto-inflate feature during an Azure Resource Manager template deployment. For example, set the
`isAutoInflateEnabled` property to **true** and set `maximumThroughputUnits` to 10. For example: ```json
firewall-manager Rule Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/rule-hierarchy.md
Title: Use Azure Firewall policy to define a rule hierarchy
-description: Learn how to use Azure Firewall policy to define a rule hierarchy and enforce compliance.
+description: Learn how to use Azure Firewall policy to define a rule hierarchy and enforce compliance.
# Use Azure Firewall policy to define a rule hierarchy
-Security administrators need to manage firewalls and ensure compliance across on-premise and cloud deployments. A key component is the ability to provide application teams with flexibility to implement CI/CD pipelines to create firewall rules in an automated way.
+Security administrators need to manage firewalls and ensure compliance across on-premises and cloud deployments. A key component is the ability to provide application teams with flexibility to implement CI/CD pipelines to create firewall rules in an automated way.
Azure Firewall policy allows you to define a rule hierarchy and enforce compliance: - Provides a hierarchical structure to overlay a central base policy on top of a child application team policy. The base policy has a higher priority and runs before the child policy.-- Use an Azure custom role definition to prevent inadvertent base policy removal and provide selective access to rule collection groups within a subscription or resource group.
+- Use an Azure custom role definition to prevent inadvertent base policy removal and provide selective access to rule collection groups within a subscription or resource group.
## Solution overview The high-level steps for this example are:
-1. Create a base firewall policy in the security team resource group.
+1. Create a base firewall policy in the security team resource group.
3. Define IT security-specific rules in the base policy. This adds a common set of rules to allow/deny traffic.
-4. Create application team policies that inherit the base policy.
+4. Create application team policies that inherit the base policy.
5. Define application team-specific rules in the policy. You can also migrate rules from pre-existing firewalls.
6. Create Azure Active Directory custom roles to provide fine-grained access to rule collection groups, and add the roles at a Firewall Policy scope. In the following example, Sales team members can edit rule collection groups for the Sales team's Firewall Policy. The same applies to the Database and Engineering teams.
7. Associate the policy to the corresponding firewall. An Azure firewall can have only one assigned policy. This requires each application team to have their own firewall.
Create policies for each of the application teams:
:::image type="content" source="media/rule-hierarchy/policy-hierarchy.png" alt-text="Policy hierarchy" border="false":::
-### Create custom roles to access the rule collection groups
+### Create custom roles to access the rule collection groups
Custom roles are defined for each application team. The role defines operations and scope. The application teams are allowed to edit rule collection groups for their respective applications.
Use the following high-level procedure to define custom roles:
2. Run the following command: `Get-AzProviderOperation "Microsoft.Support/*" | FT Operation, Description -AutoSize`
-3. Use the Get-AzRoleDefinition command to output the Reader role in JSON format.
+3. Use the Get-AzRoleDefinition command to output the Reader role in JSON format.
`Get-AzRoleDefinition -Name "Reader" | ConvertTo-Json | Out-File C:\CustomRoles\ReaderSupportRole.json` 4. Open the ReaderSupportRole.json file in an editor.
Use the following high-level procedure to define custom roles:
The following shows the JSON output. For information about the different properties, see [Azure custom roles](../role-based-access-control/custom-roles.md). ```json
- {
-   "Name": "Reader",
-   "Id": "acdd72a7-3385-48ef-bd42-f606fba81ae7",
-   "IsCustom": false,
-   "Description": "Lets you view everything, but not make any changes.",
-   "Actions": [
-     "*/read"
-   ],
-   "NotActions": [],
-   "DataActions": [],
-   "NotDataActions": [],
-   "AssignableScopes": [
-     "/"
-   ]
- }
+ {
+   "Name": "Reader",
+   "Id": "acdd72a7-3385-48ef-bd42-f606fba81ae7",
+   "IsCustom": false,
+   "Description": "Lets you view everything, but not make any changes.",
+   "Actions": [
+     "*/read"
+   ],
+   "NotActions": [],
+   "DataActions": [],
+   "NotDataActions": [],
+   "AssignableScopes": [
+     "/"
+   ]
+ }
``` 5. Edit the JSON file to add the
- `*/read", "Microsoft.Network/*/read", "Microsoft.Network/firewallPolicies/ruleCollectionGroups/write`
+ `*/read", "Microsoft.Network/*/read", "Microsoft.Network/firewallPolicies/ruleCollectionGroups/write`
operation to the **Actions** property. Be sure to include a comma after the read operation. This action allows the user to create and update rule collection groups. 6. In **AssignableScopes**, add your subscription ID with the following format: 
Use the following high-level procedure to define custom roles:
Your JSON file should look similar to the following example: ```
-{
-
-    "Name":  "AZFM Rule Collection Group Author",
-    "IsCustom":  true,
-    "Description":  "Users in this role can edit Firewall Policy rule collection groups",
-    "Actions":  [
-                    "*/read",
-                    "Microsoft.Network/*/read",
-                     "Microsoft.Network/firewallPolicies/ruleCollectionGroups/write"
-                ],
-    "NotActions":  [
-                   ],
-    "DataActions":  [
-                    ],
-    "NotDataActions":  [
-                       ],
-    "AssignableScopes":  [
-                             "/subscriptions/xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxx"]
-}
+{
+
+    "Name":  "AZFM Rule Collection Group Author",
+    "IsCustom":  true,
+    "Description":  "Users in this role can edit Firewall Policy rule collection groups",
+    "Actions":  [
+                    "*/read",
+                    "Microsoft.Network/*/read",
+                     "Microsoft.Network/firewallPolicies/ruleCollectionGroups/write"
+                ],
+    "NotActions":  [
+                   ],
+    "DataActions":  [
+                    ],
+    "NotDataActions":  [
+                       ],
+    "AssignableScopes":  [
+                             "/subscriptions/xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxx"]
+}
```
-9. To create the new custom role, use the New-AzRoleDefinition command and specify the JSON role definition file.
+9. To create the new custom role, use the New-AzRoleDefinition command and specify the JSON role definition file.
`New-AzRoleDefinition -InputFile "C:\CustomRoles\RuleCollectionGroupRole.json"`
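After the role exists, it still needs to be assigned to the application team members at the scope of their Firewall Policy. A minimal sketch of that assignment using the Azure CLI (the sign-in name, subscription ID, resource group, and policy name are all placeholders):

```azurecli
# Assign the custom role to a Sales team member at the scope of the Sales Firewall Policy.
az role assignment create \
    --assignee "salesuser@contoso.com" \
    --role "AZFM Rule Collection Group Author" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Network/firewallPolicies/<policy-name>"
```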
Users don't have permissions to:
- Update firewall policy hierarchy or DNS settings or threat intelligence. - Update firewall policy where they are not members of AZFM Rule Collection Group Author group.
-Security administrators can use base policy to enforce guardrails and block certain types of traffic (for example ICMP) as required by their enterprise.
+Security administrators can use base policy to enforce guardrails and block certain types of traffic (for example ICMP) as required by their enterprise.
## Next steps
firewall-manager Secure Hybrid Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secure-hybrid-network.md
Title: 'Tutorial: Secure your hub virtual network using Azure Firewall Manager'
-description: In this tutorial, you learn how to secure your virtual network with Azure Firewall Manager using the Azure portal.
+description: In this tutorial, you learn how to secure your virtual network with Azure Firewall Manager using the Azure portal.
In this tutorial, you learn how to:
## Prerequisites
-A hybrid network uses the hub-and-spoke architecture model to route traffic between Azure VNets and on-premise networks. The hub-and-spoke architecture has the following requirements:
+A hybrid network uses the hub-and-spoke architecture model to route traffic between Azure VNets and on-premises networks. The hub-and-spoke architecture has the following requirements:
-- Set **AllowGatewayTransit** when peering VNet-Hub to VNet-Spoke. In a hub-and-spoke network architecture, a gateway transit allows the spoke virtual networks to share the VPN gateway in the hub, instead of deploying VPN gateways in every spoke virtual network.
+- Set **AllowGatewayTransit** when peering VNet-Hub to VNet-Spoke. In a hub-and-spoke network architecture, a gateway transit allows the spoke virtual networks to share the VPN gateway in the hub, instead of deploying VPN gateways in every spoke virtual network.
Additionally, routes to the gateway-connected virtual networks or on-premises networks will automatically propagate to the routing tables for the peered virtual networks using the gateway transit. For more information, see [Configure VPN gateway transit for virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md).
If you don't have an Azure subscription, create a [free account](https://azure.m
1. For **Destination Type**, select **IP Address**. 1. For **Destination**, type **10.6.0.0/16**. 1. On the next rule row, enter the following information:
-
+ Name: type **AllowRDP**<br> Source: type **192.168.1.0/24**.<br> Protocol: select **TCP**<br>
If you don't have an Azure subscription, create a [free account](https://azure.m
1. For **IPv4 address space**, type **10.5.0.0/16**. 1. Under **Subnet name**, select **default**. 1. Change the **Subnet name** to **AzureFirewallSubnet**. The firewall is in this subnet, and the subnet name **must** be AzureFirewallSubnet.
-1. For **Subnet address range**, type **10.5.0.0/26**.
+1. For **Subnet address range**, type **10.5.0.0/26**.
1. Accept the other default settings, and then select **Save**. 1. Select **Review + create**. 1. Select **Create**.
If you don't have an Azure subscription, create a [free account](https://azure.m
1. For **IPv4 address space**, type **10.6.0.0/16**. 1. Under **Subnet name**, select **default**. 1. Change the **Subnet name** to **SN-Workload**.
-1. For **Subnet address range**, type **10.6.0.0/24**.
+1. For **Subnet address range**, type **10.6.0.0/24**.
1. Accept the other default settings, and then select **Save**. 1. Select **Review + create**. 1. Select **Create**.
If you don't have an Azure subscription, create a [free account](https://azure.m
1. For **IPv4 address space**, type **192.168.0.0/16**. 1. Under **Subnet name**, select **default**. 1. Change the **Subnet name** to **SN-Corp**.
-1. For **Subnet address range**, type **192.168.1.0/24**.
+1. For **Subnet address range**, type **192.168.1.0/24**.
1. Accept the other default settings, and then select **Save**. 2. Select **Add Subnet**. 3. For **Subnet name**, type **GatewaySubnet**.
Now peer the hub and spoke virtual networks.
2. In the left column, select **Peerings**. 3. Select **Add**. 4. Under **This virtual network**:
-
-
+
|Setting name |Value |
|--|--|
|Peering link name| HubtoSpoke|
|Traffic to remote virtual network| Allow (default) |
|Traffic forwarded from remote virtual network | Allow (default) |
|Virtual network gateway or route server | Use this virtual network's gateway |
-
+ 5. Under **Remote virtual network**: |Setting name |Value |
frontdoor Create Front Door Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-cli.md
+
+ Title: Create an Azure Front Door Standard/Premium with the Azure CLI
+description: Learn how to create an Azure Front Door Standard/Premium with Azure CLI. Use Azure Front Door to deliver content to your global user base and protect your web apps against vulnerabilities.
+Last updated : 6/13/2022
+# Quickstart: Create an Azure Front Door Standard/Premium - Azure CLI
+
+In this quickstart, you'll learn how to create an Azure Front Door Standard/Premium profile using Azure CLI. You'll create this profile using two Web Apps as your origin, and add a WAF security policy. You can then verify connectivity to your Web Apps using the Azure Front Door endpoint hostname.
+++
+## Create a resource group
+
+In Azure, you allocate related resources to a resource group. You can either use an existing resource group or create a new one.
+
+Run [az group create](/cli/azure/group) to create resource groups.
+
+```azurecli
+az group create --name myRGFD --location centralus
+```
+## Create an Azure Front Door profile
+
+Run [az afd profile create](/cli/azure/afd/profile#az-afd-profile-create) to create an Azure Front Door profile.
+
+> [!NOTE]
+> If you want to deploy Azure Front Door Standard instead of Premium, substitute the value of the sku parameter with `Standard_AzureFrontDoor`. You won't be able to deploy managed rules with a WAF policy if you choose the Standard SKU. For a detailed comparison, see [Azure Front Door tier comparison](standard-premium/tier-comparison.md).
+
+```azurecli
+az afd profile create \
+ --profile-name contosoafd \
+ --resource-group myRGFD \
+ --sku Premium_AzureFrontDoor
+```
+
+## Create two instances of a web app
+
+You need two instances of a web application that run in different Azure regions for this tutorial. Both the web application instances run in Active/Active mode, so either one can service traffic.
+
+If you don't already have a web app, use the following script to set up two example web apps.
+
+### Create app service plans
+
+Before you can create the web apps, you'll need two app service plans: one in *Central US* and the second in *East US*.
+
+Run [az appservice plan create](/cli/azure/appservice/plan#az-appservice-plan-create&preserve-view=true) to create your app service plans.
+
+```azurecli
+az appservice plan create \
+ --name myAppServicePlanCentralUS \
+ --resource-group myRGFD
+
+az appservice plan create \
+ --name myAppServicePlanEastUS \
+ --resource-group myRGFD
+```
+
+### Create web apps
+
+Run [az webapp create](/cli/azure/webapp#az-webapp-create&preserve-view=true) to create a web app in each of the app service plans in the previous step. Web app names have to be globally unique.
+
+```azurecli
+az webapp create \
+ --name WebAppContoso-01 \
+ --resource-group myRGFD \
+ --plan myAppServicePlanCentralUS
+
+az webapp create \
+ --name WebAppContoso-02 \
+ --resource-group myRGFD \
+ --plan myAppServicePlanEastUS
+```
+
+Make note of the default host name of each web app so you can define the backend addresses when you deploy the Front Door in the next step.
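+
+If you need to look up the default host names later, you can query them with the CLI. This is a small sketch that assumes the web app names created above:
+
+```azurecli
+az webapp show \
+ --name WebAppContoso-01 \
+ --resource-group myRGFD \
+ --query defaultHostName \
+ --output tsv
+```
+
+Repeat the command with `WebAppContoso-02` for the second app.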
+
+## Add an endpoint
+
+Run [az afd endpoint create](/cli/azure/afd/endpoint#az-afd-endpoint-create) to create an endpoint in your profile. You can create multiple endpoints in your profile after finishing the create experience.
+
+```azurecli
+az afd endpoint create \
+ --resource-group myRGFD \
+ --endpoint-name contosofrontend \
+ --profile-name contosoafd \
+ --enabled-state Enabled
+```
+
+## Create an origin group
+
+Run [az afd origin-group create](/cli/azure/afd/origin-group#az-afd-origin-group-create) to create an origin group that contains your two web apps.
+
+```azurecli
+az afd origin-group create \
+ --resource-group myRGFD \
+ --origin-group-name og \
+ --profile-name contosoafd \
+ --probe-request-type GET \
+ --probe-protocol Http \
+ --probe-interval-in-seconds 60 \
+ --probe-path / \
+ --sample-size 4 \
+ --successful-samples-required 3 \
+ --additional-latency-in-milliseconds 50
+```
+
+## Add an origin to the group
+
+Run [az afd origin create](/cli/azure/afd/origin#az-afd-origin-create) to add an origin to your origin group.
+
+```azurecli
+az afd origin create \
+ --resource-group myRGFD \
+ --host-name webappcontoso-01.azurewebsites.net \
+ --profile-name contosoafd \
+ --origin-group-name og \
+ --origin-name contoso1 \
+ --origin-host-header webappcontoso-01.azurewebsites.net \
+ --priority 1 \
+ --weight 1000 \
+ --enabled-state Enabled \
+ --http-port 80 \
+ --https-port 443
+```
+
+Repeat this step and add your second origin.
+
+```azurecli
+az afd origin create \
+ --resource-group myRGFD \
+ --host-name webappcontoso-02.azurewebsites.net \
+ --profile-name contosoafd \
+ --origin-group-name og \
+ --origin-name contoso2 \
+ --origin-host-header webappcontoso-02.azurewebsites.net \
+ --priority 1 \
+ --weight 1000 \
+ --enabled-state Enabled \
+ --http-port 80 \
+ --https-port 443
+```
+
+## Add a route
+
+Run [az afd route create](/cli/azure/afd/route#az-afd-route-create) to map your endpoint to the origin group. This route forwards requests from the endpoint to your origin group.
+
+```azurecli
+az afd route create \
+ --resource-group myRGFD \
+ --profile-name contosoafd \
+ --endpoint-name contosofrontend \
+ --forwarding-protocol MatchRequest \
+ --route-name route \
+ --https-redirect Enabled \
+ --origin-group og \
+ --supported-protocols Http Https \
+ --link-to-default-domain Enabled
+```
+
+## Create a new security policy
+
+### Create a WAF policy
+
+Run [az network front-door waf-policy create](/cli/azure/network/front-door/waf-policy#az-network-front-door-waf-policy-create) to create a new WAF policy for your Front Door. This example creates a policy that is enabled and in prevention mode.
+
+> [!NOTE]
+> Managed rules only work with the Front Door Premium SKU. If you choose the Standard SKU, you can use custom rules instead.
+
+```azurecli
+az network front-door waf-policy create \
+ --name contosoWAF \
+ --resource-group myRGFD \
+ --sku Premium_AzureFrontDoor \
+ --disabled false \
+ --mode Prevention
+```
+
+> [!NOTE]
+> If you select `Detection` mode, your WAF doesn't block any requests.
+
+### Assign managed rules to the WAF policy
+Run [az network front-door waf-policy managed-rules add](/cli/azure/network/front-door/waf-policy/managed-rules#az-network-front-door-waf-policy-managed-rules-add) to add managed rules to your WAF Policy. This example adds Microsoft_DefaultRuleSet_1.2 and Microsoft_BotManagerRuleSet_1.0 to your policy.
++
+```azurecli
+az network front-door waf-policy managed-rules add \
+ --policy-name contosoWAF \
+ --resource-group myRGFD \
+ --type Microsoft_DefaultRuleSet \
+ --version 1.2
+```
+
+```azurecli
+az network front-door waf-policy managed-rules add \
+ --policy-name contosoWAF \
+ --resource-group myRGFD \
+ --type Microsoft_BotManagerRuleSet \
+ --version 1.0
+```
+### Create the security policy
+
+Run [az afd security-policy create](/cli/azure/afd/security-policy#az-afd-security-policy-create) to apply your WAF policy to the endpoint's default domain.
+
+> [!NOTE]
+> Substitute 'mysubscription' with your Azure Subscription ID in the domains and waf-policy parameters below. Run [az account subscription list](/cli/azure/account/subscription#az-account-subscription-list) to get Subscription ID details.
++
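+If you only need the ID of the subscription you're currently signed in to, the following one-liner is a quick alternative (a sketch; it assumes you've already run `az login`):
+
+```azurecli
+az account show --query id --output tsv
+```
+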
+```azurecli
+az afd security-policy create \
+ --resource-group myRGFD \
+ --profile-name contosoafd \
+ --security-policy-name contososecurity \
+ --domains /subscriptions/mysubscription/resourcegroups/myRGFD/providers/Microsoft.Cdn/profiles/contosoafd/afdEndpoints/contosofrontend \
+ --waf-policy /subscriptions/mysubscription/resourcegroups/myRGFD/providers/Microsoft.Network/frontdoorwebapplicationfirewallpolicies/contosoWAF
+```
+
+## Verify Azure Front Door
+
+When you create the Azure Front Door Standard/Premium profile, it takes a few minutes for the configuration to be deployed globally. Once completed, you can access the frontend host you created.
+
+Run [az afd endpoint show](/cli/azure/afd/endpoint#az-afd-endpoint-show) to get the hostname of the Front Door endpoint.
+
+```azurecli
+az afd endpoint show --resource-group myRGFD --profile-name contosoafd --endpoint-name contosofrontend
+```
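+
+The endpoint host name is returned in the `hostName` property of the output. If you only want that value, a JMESPath query keeps the output to a single line; this is a sketch under that assumption:
+
+```azurecli
+az afd endpoint show \
+ --resource-group myRGFD \
+ --profile-name contosoafd \
+ --endpoint-name contosofrontend \
+ --query hostName \
+ --output tsv
+```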
+In a browser, go to the endpoint hostname: `contosofrontend-<hash>.z01.azurefd.net`. Your request will automatically get routed to the least latent Web App in the origin group.
++
+To test instant global failover, we'll use the following steps:
+
+1. Open a browser, as described above, and go to the endpoint hostname: `contosofrontend-<hash>.z01.azurefd.net`.
+
+2. Stop one of the Web Apps by running [az webapp stop](/cli/azure/webapp#az-webapp-stop&preserve-view=true)
+
+ ```azurecli
+ az webapp stop --name WebAppContoso-01 --resource-group myRGFD
+ ```
+
+3. Refresh your browser. You should see the same information page.
+
+> [!TIP]
+> There is a little bit of delay for these actions. You might need to refresh again.
+
+4. Find the other web app, and stop it as well.
+
+ ```azurecli
+ az webapp stop --name WebAppContoso-02 --resource-group myRGFD
+ ```
+
+5. Refresh your browser. This time, you should see an error message.
+
+ :::image type="content" source="./media/create-front-door-portal/web-app-stopped-message.png" alt-text="Screenshot of the message: Both instances of the web app stopped":::
++
+6. Restart one of the Web Apps by running [az webapp start](/cli/azure/webapp#az-webapp-start&preserve-view=true). Refresh your browser and the page will go back to normal.
+
+ ```azurecli
+ az webapp start --name WebAppContoso-01 --resource-group myRGFD
+ ```
+
+## Clean up resources
+
+When you don't need the resources for the Front Door, delete both resource groups. Deleting the resource groups also deletes the Front Door and all its related resources.
+
+Run [az group delete](/cli/azure/group#az-group-delete&preserve-view=true):
+
+```azurecli
+az group delete --name myRGFD
+```
+
+## Next steps
+
+Advance to the next article to learn how to add a custom domain to your Front Door.
+> [!div class="nextstepaction"]
+> [Add a custom domain](standard-premium/how-to-add-custom-domain.md)
frontdoor Origin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/origin.md
zone_pivot_groups: front-door-tiers
::: zone pivot="front-door-classic" > [!NOTE]
-> An *Origin* and a *origin group* in this article refers to the backend and backend pool of the Azure Front Door (classic) configuration.
+> *Origin* and *origin group* in this article refer to the backend and backend pool of the Azure Front Door (classic) configuration.
> ::: zone-end
For more information, see [Least latency based routing method](routing-methods.m
- Learn how to [create an Azure Front Door (classic) profile](quickstart-create-front-door.md). - Learn about [Azure Front Door (classic) routing architecture](front-door-routing-architecture.md?pivots=front-door-classic).
frontdoor Quickstart Create Front Door https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door.md
documentationcenter: na
Previously updated : 04/19/2021 Last updated : 06/08/2022
If you don't already have a web app, use the following steps to set up example w
1. Sign in to the Azure portal at https://portal.azure.com.
-1. On the top left-hand side of the screen, select **Create a resource** > **WebApp**.
+1. On the top left-hand side of the screen, select **Create a resource** > **Web App**.
- :::image type="content" source="media/quickstart-create-front-door/front-door-create-web-app.png" alt-text="Create a web app in the Azure portal":::
+ :::image type="content" source="media/quickstart-create-front-door/front-door-create-web-app.png" alt-text="Create a web app in the Azure portal." lightbox="./media/quickstart-create-front-door/front-door-create-web-app.png":::
1. In the **Basics** tab of **Create Web App** page, enter or select the following information.
If you don't already have a web app, use the following steps to set up example w
| **Resource group** | Select **Create new** and enter *FrontDoorQS_rg1* in the text box.| | **Name** | Enter a unique **Name** for your web app. This example uses *WebAppContoso-1*. | | **Publish** | Select **Code**. |
- | **Runtime stack** | Select **.NET Core 2.1 (LTS)**. |
+ | **Runtime stack** | Select **.NET Core 3.1 (LTS)**. |
| **Operating System** | Select **Windows**. | | **Region** | Select **Central US**. | | **Windows Plan** | Select **Create new** and enter *myAppServicePlanCentralUS* in the text box. |
If you don't already have a web app, use the following steps to set up example w
1. Select **Review + create**, review the **Summary**, and then select **Create**. It might take several minutes for the deployment to complete.
- :::image type="content" source="media/quickstart-create-front-door/create-web-app.png" alt-text="Review summary for web app":::
+ :::image type="content" source="media/quickstart-create-front-door/create-web-app.png" alt-text="Review summary for web app." lightbox="./media/quickstart-create-front-door/create-web-app.png":::
After your deployment is complete, create a second web app. Use the same procedure with the same values, except for the following values:
After your deployment is complete, create a second web app. Use the same procedu
Configure Azure Front Door to direct user traffic based on lowest latency between the two web apps servers. To begin, add a frontend host for Azure Front Door.
-1. From the home page or the Azure menu, select **Create a resource**. Select **Networking** > **See All** > **Front Door**.
-
+1. From the home page or the Azure menu, select **Create a resource**. Select **Networking** > **See All** > **Front Door and CDN profiles**.
+1. On the Compare offerings page, select **Explore other offerings**. Then select **Azure Front Door (classic)**. Then select **Continue**.
1. In the **Basics** tab of **Create a Front Door** page, enter or select the following information, and then select **Next: Configuration**. | Setting | Value |
Configure Azure Front Door to direct user traffic based on lowest latency betwee
1. For **Host name**, enter a globally unique hostname. This example uses *contoso-frontend*. Select **Add**.
- :::image type="content" source="media/quickstart-create-front-door/add-frontend-host-azure-front-door.png" alt-text="Add a frontend host for Azure Front Door":::
+ :::image type="content" source="media/quickstart-create-front-door/add-frontend-host-azure-front-door.png" alt-text="Add a frontend host for Azure Front Door." lightbox="./media/quickstart-create-front-door/add-frontend-host-azure-front-door.png":::
Next, create a backend pool that contains your two web apps.
Next, create a backend pool that contains your two web apps.
1. For **Name**, enter *myBackendPool*, then select **Add a backend**.
- :::image type="content" source="media/quickstart-create-front-door/front-door-add-backend-pool.png" alt-text="Add a backend pool":::
+ :::image type="content" source="media/quickstart-create-front-door/front-door-add-backend-pool.png" alt-text="Add a backend pool." lightbox="./media/quickstart-create-front-door/front-door-add-backend-pool.png":::
-1. In the **Add a backend** blade, select the following information and select **Add**.
+1. In the **Add a backend** pane, select the following information and select **Add**.
| Setting | Value | | | |
Next, create a backend pool that contains your two web apps.
| **Subscription** | Select your subscription. | | **Backend host name** | Select the first web app you created. In this example, the web app was *WebAppContoso-1*. |
- **Leave all other fields default.*
+ **Leave all other fields default.**
- :::image type="content" source="media/quickstart-create-front-door/front-door-add-a-backend.png" alt-text="Add a backend host to your Front Door":::
+ :::image type="content" source="media/quickstart-create-front-door/front-door-add-a-backend.png" alt-text="Add a backend host to your Front Door." lightbox="./media/quickstart-create-front-door/front-door-add-a-backend.png":::
1. Select **Add a backend** again. select the following information and select **Add**.
Next, create a backend pool that contains your two web apps.
| **Subscription** | Select your subscription. | | **Backend host name** | Select the second web app you created. In this example, the web app was *WebAppContoso-2*. |
- **Leave all other fields default.*
+ **Leave all other fields default.**
-1. Select **Add** on the **Add a backend pool** blade to complete the configuration of the backend pool.
+1. Select **Add** on the **Add a backend pool** pane to complete the configuration of the backend pool.
- :::image type="content" source="media/quickstart-create-front-door/front-door-add-backend-pool-complete.png" alt-text="Add a backend pool for Azure Front Door":::
+ :::image type="content" source="media/quickstart-create-front-door/front-door-add-backend-pool-complete.png" alt-text="Add a backend pool for Azure Front Door." lightbox="./media/quickstart-create-front-door/front-door-add-backend-pool-complete.png":::
Finally, add a routing rule. A routing rule maps your frontend host to the backend pool. The rule forwards a request for `contoso-frontend.azurefd.net` to **myBackendPool**.
Finally, add a routing rule. A routing rule maps your frontend host to the backe
1. In **Add a rule**, for **Name**, enter *LocationRule*. Accept all the default values, then select **Add** to add the routing rule.
- :::image type="content" source="media/quickstart-create-front-door/front-door-add-a-rule.png" alt-text="Add a rule to your Front Door":::
+ :::image type="content" source="media/quickstart-create-front-door/front-door-add-a-rule.png" alt-text="Add a rule to your Front Door." lightbox="./media/quickstart-create-front-door/front-door-add-a-rule.png":::
>[!WARNING] > You **must** ensure that each of the frontend hosts in your Front Door has a routing rule with a default path (`/*`) associated with it. That is, across all of your routing rules there must be at least one routing rule for each of your frontend hosts defined at the default path (`/*`). Failing to do so may result in your end-user traffic not getting routed correctly. 1. Select **Review + Create**, and then **Create**.
- :::image type="content" source="media/quickstart-create-front-door/configuration-azure-front-door.png" alt-text="Configured Azure Front Door":::
+ :::image type="content" source="media/quickstart-create-front-door/configuration-azure-front-door.png" alt-text="Configured Azure Front Door." lightbox="./media/quickstart-create-front-door/configuration-azure-front-door.png":::
+ ## View Azure Front Door in action
-Once you create a Front Door, it takes a few minutes for the configuration to be deployed globally. Once complete, access the frontend host you created. In a browser, go to `contoso-frontend.azurefd.net`. Your request will automatically get routed to the nearest server to you from the specified servers in the backend pool.
+Once you create a Front Door, it takes a few minutes for the configuration to be deployed globally. Once complete, access the frontend host you created. In a browser, go to your frontend host address. Your request will automatically get routed to the nearest server to you from the specified servers in the backend pool.
If you created these apps in this quickstart, you'll see an information page. To test instant global failover in action, try the following steps:
-1. Open a browser, as described above, and go to the frontend address: `contoso-frontend.azurefd.net`.
+1. Open the resource group **FrontDoorQS_rg0** and select the frontend service.
+
+ :::image type="content" source="./media/quickstart-create-front-door/front-door-view-frontend-service.png" alt-text="Screenshot of frontend service." lightbox="./media/quickstart-create-front-door/front-door-view-frontend-service.png":::
+
+1. From the **Overview** page, copy the **Frontend host** address.
+
+ :::image type="content" source="./media/quickstart-create-front-door/front-door-view-frontend-host-address.png" alt-text="Screenshot of frontend host address." lightbox="./media/quickstart-create-front-door/front-door-view-frontend-host-address.png":::
+
+1. Open a browser, as described above, and go to your frontend address.
1. In the Azure portal, search for and select *App services*. Scroll down to find one of your web apps, **WebAppContoso-1** in this example.
To test instant global failover in action, try the following steps:
1. Refresh your browser. This time, you should see an error message.
- :::image type="content" source="media/quickstart-create-front-door/web-app-stopped-message.png" alt-text="Both instances of the web app stopped":::
+ :::image type="content" source="media/quickstart-create-front-door/web-app-stopped-message.png" alt-text="Both instances of the web app stopped." lightbox="./media/quickstart-create-front-door/web-app-stopped-message.png":::
## Clean up resources
After you're done, you can remove all the items you created. Deleting a resource
1. Select the resource group, then select **Delete resource group**. >[!WARNING]
- >This action is irreversable.
+ >This action is irreversible.
1. Type the resource group name to verify, and then select **Delete**.
frontdoor Routing Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/routing-methods.md
The weighted method enables some useful scenarios:
By default, without session affinity, Azure Front Door forwards requests originating from the same client to different origins. Certain stateful applications, or certain scenarios, prefer that subsequent requests from the same user are processed by the same origin that handled the initial request. The cookie-based session affinity feature is useful when you want to keep a user session on the same origin. When you use managed cookies with SHA256 of the origin URL as the identifier in the cookie, Azure Front Door can direct ensuing traffic from a user session to the same origin for processing.
-Session affinity can be enabled the origin group level in Azure Front Door Standard and Premium tier and front end host level in Azure Front Door (classic) for each of your configured domains (or subdomains). Once enabled, Azure Front Door adds a cookie to the user's session. Cookie-based session affinity allows Front Door to identify different users even if behind the same IP address, which in turn allows a more even distribution of traffic between your different origins.
+Session affinity can be enabled at the origin group level in the Azure Front Door Standard and Premium tiers, and at the front-end host level in Azure Front Door (classic), for each of your configured domains (or subdomains). Once enabled, Azure Front Door adds a cookie to the user's session. The cookies are called ASLBSA and ASLBSACORS. Cookie-based session affinity allows Front Door to identify different users even if behind the same IP address, which in turn allows a more even distribution of traffic between your different origins.
The lifetime of the cookie is the same as the user's session, as Front Door currently only supports session cookies.
The lifetime of the cookie is the same as the user's session, as Front Door curr
> > Public proxies may interfere with session affinity. This is because establishing a session requires Front Door to add a session affinity cookie to the response, which cannot be done if the response is cacheable as it would disrupt the cookies of other clients requesting the same resource. To protect against this, session affinity will **not** be established if the origin sends a cacheable response when this is attempted. If the session has already been established, it does not matter if the response from the origin is cacheable. >
-> Session affinity will be established in the following circumstances, **unless** the response has an HTTP 304 status code:
-> - The response has specific values set for the `Cache-Control` header that prevents caching, such as *private* or *no-store*.
-> - The response contains an `Authorization` header that has not expired.
-> - The response has an HTTP 302 status code.
+> Session affinity will be established in the following circumstances:
+> - The response must include the `Cache-Control` header of *no-store*.
+> - If the response contains an `Authorization` header, it must not be expired.
+> - The response is an HTTP 302 status code.
## Next steps
frontdoor Rules Match Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/rules-match-conditions.md
Previously updated : 03/22/2022 Last updated : 06/09/2022 zone_pivot_groups: front-door-tiers
The **request path** match condition identifies requests that include the specif
### Properties +
+| Property | Supported values |
+|-|-|
+| Operator | <ul><li>Any operator from the [standard operator list](#operator-list).</li><li>**Wildcard**: Matches when the request path matches a wildcard expression. A wildcard expression can include the `*` character to match zero or more characters within the path. For example, the wildcard expression `files/customer*/file.pdf` matches the paths `files/customer1/file.pdf`, `files/customer109/file.pdf`, and `files/customer/file.pdf`, but does not match `files/customer2/anotherfile.pdf`.<ul><li>In the Azure portal: `Wildcards`, `Not Wildcards`</li><li>In ARM templates: `Wildcard`; use the `negateCondition` property to specify _Not Wildcards_</li></ul></li></ul> |
+| Value | One or more string or integer values representing the value of the request path to match. Don't include the leading slash. If multiple values are specified, they're evaluated using OR logic. |
+| Case transform | Any transform from the [standard string transforms list](#string-transform-list). |
| Property | Supported values |
|-|-|
| Operator | Any operator from the [standard operator list](#operator-list). |
| Value | One or more string or integer values representing the value of the request path to match. Don't include the leading slash. If multiple values are specified, they're evaluated using OR logic. |
| Case transform | Any transform from the [standard string transforms list](#string-transform-list). |

### Example

In this example, we match all requests where the request file path begins with `files/secure/`. We transform the request file extension to lowercase before evaluating the match, so requests to `files/SECURE/` and other case variations will also trigger this match condition.
governance Samples By Category https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-category.md
Title: List of sample Azure Resource Graph queries by category
description: List sample queries for Azure Resource Graph. Categories include Tags, Azure Advisor, Key Vault, Kubernetes, Guest Configuration, and more.
Previously updated : 03/08/2022 Last updated : 06/09/2022
# Azure Resource Graph sample queries by category
hdinsight Control Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/control-network-traffic.md
If you plan on using **network security groups** to control network traffic, per
1. Identify the Azure region that you plan to use for HDInsight. 2. Identify the service tags required by HDInsight for your region. There are multiple ways to obtain these service tags:
- 1. Consult the list of published service tags in [Network security group (NSG) service tags for Azure HDInsight](hdinsight-service-tags.md).
+ 1. Consult the list of published service tags in [Network security group (NSG) service tags for Azure HDInsight](hdinsight-service-tags.md).
2. If your region is not present in the list, use the [Service Tag Discovery API](../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api) to find a service tag for your region. 3. If you are unable to use the API, download the [service tag JSON file](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) and search for your desired region.
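   The discovery data is also available through the Azure CLI. The following sketch lists HDInsight-related tags for a region; the `eastus` location and the name filter are just examples:

   ```azurecli
   az network list-service-tags --location eastus \
       --query "values[?contains(name, 'HDInsight')].{name:name, prefixes:properties.addressPrefixes}"
   ```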
For more information on controlling outbound traffic from HDInsight clusters, se
### Forced tunneling to on-premises
-Forced tunneling is a user-defined routing configuration where all traffic from a subnet is forced to a specific network or location, such as your on-premises network or Firewall. Forced tunneling of all data transfer back to on-premise is _not_ recommended due to large volumes of data transfer and potential performance impact.
+Forced tunneling is a user-defined routing configuration where all traffic from a subnet is forced to a specific network or location, such as your on-premises network or Firewall. Forced tunneling of all data transfer back to on-premises is _not_ recommended due to large volumes of data transfer and potential performance impact.
-Customers who are interested to setup forced tunneling, should use [custom metastores](./hdinsight-use-external-metadata-stores.md) and setup the appropriate connectivity from the cluster subnet or on-premise network to these custom metastores.
+Customers who are interested in setting up forced tunneling should use [custom metastores](./hdinsight-use-external-metadata-stores.md) and set up the appropriate connectivity from the cluster subnet or on-premises network to these custom metastores.
To see an example of the UDR setup with Azure Firewall, see [Configure outbound network traffic restriction for Azure HDInsight clusters](hdinsight-restrict-outbound-traffic.md).
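At its core, such a setup is a route table whose default route points at the firewall's private IP address. The following Azure CLI sketch illustrates the idea; the resource group, route table name, and the `10.0.1.4` firewall address are placeholders:

```azurecli
# Create a route table and a default route that sends outbound traffic to the firewall.
az network route-table create --resource-group myRG --name hdinsight-udr

az network route-table route create \
    --resource-group myRG \
    --route-table-name hdinsight-udr \
    --name default-to-firewall \
    --address-prefix 0.0.0.0/0 \
    --next-hop-type VirtualAppliance \
    --next-hop-ip-address 10.0.1.4
```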
hdinsight General Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/general-guidelines.md
Use a new resource group for each cluster so that you can distinguish between cl
[Azure Active Directory Domain Services](../../active-directory-domain-services/overview.md) (Azure AD DS) provides managed domain services such as domain join, group policy, lightweight directory access protocol (LDAP), and Kerberos / NTLM authentication that is fully compatible with Windows Server Active Directory. Azure AD DS is required for secure clusters to join a domain.
-HDInsight can't depend on on-premise domain controllers or custom domain controllers, as it introduces too many fault points, credential sharing, DNS permissions, and so on. For more information, see [Azure AD DS FAQs](../../active-directory-domain-services/faqs.yml).
+HDInsight can't depend on on-premises domain controllers or custom domain controllers, as it introduces too many fault points, credential sharing, DNS permissions, and so on. For more information, see [Azure AD DS FAQs](../../active-directory-domain-services/faqs.yml).
### Azure AD DS instance
For more information, see [Azure AD UserPrincipalName population](../../active-d
### Password hash sync * Passwords are synced differently from other object types. Only non-reversible password hashes are synced in Azure AD and Azure AD DS
-* On-premise to Azure AD has to be enabled through AD Connect
+* Password hash sync from on-premises AD to Azure AD has to be enabled through Azure AD Connect
* Azure AD to Azure AD DS sync is automatic (latencies are under 20 minutes). * Password hashes are synced only when there's a changed password. When you enable password hash sync, all existing passwords don't get synced automatically as they're stored irreversibly. When you change the password, password hashes get synced.
healthcare-apis Iot Connector Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-power-bi.md
Title: MedTech service Microsoft Power BI - Azure Health Data Services
-description: In this article, you'll learn how to use the MedTech service and Power BI
+description: In this article, you'll learn how to use the MedTech service and Power BI
In this article, we'll explore using the MedTech service and Microsoft Power Bus
## MedTech service and Power BI reference architecture
-The reference architecture below shows the basic components of using Microsoft cloud services to enable Power BI on top of Internet of Medical Things (IoMT) and Fast Healthcare Interoperability Resources (FHIR&#174;) data.
+The reference architecture below shows the basic components of using Microsoft cloud services to enable Power BI on top of Internet of Medical Things (IoMT) and Fast Healthcare Interoperability Resources (FHIR&#174;) data.
You can even embed Power BI dashboards inside the Microsoft Teams client to further enhance care team coordination. For more information on embedding Power BI in Teams, visit [here](/power-bi/collaborate-share/service-embed-report-microsoft-teams). :::image type="content" source="media/iot-concepts/iot-connector-power-bi.png" alt-text="Screenshot of the MedTech service and Power BI." lightbox="media/iot-concepts/iot-connector-power-bi.png":::
-MedTech service can ingest IoT data from most IoT devices or gateways whatever the location, data center, or cloud.
+The MedTech service can ingest IoT data from most IoT devices or gateways regardless of the location, data center, or cloud.
We do encourage the use of Azure IoT services to assist with device/gateway connectivity.
We do encourage the use of Azure IoT services to assist with device/gateway conn
For some solutions, Azure IoT Central can be used in place of Azure IoT Hub.
-Azure IoT Edge can be used in with IoT Hub to create an on-premise endpoint for devices and/or in-device connectivity.
+Azure IoT Edge can be used with IoT Hub to create an on-premises endpoint for devices and/or in-device connectivity.
:::image type="content" source="media/iot-concepts/iot-connector-iot-edge-power-bi.png" alt-text="Screenshot of the MedTech service, IoT Hub, IoT Edge, and Power BI." lightbox="media/iot-concepts/iot-connector-iot-edge-power-bi.png":::
healthcare-apis Iot Connector Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-teams.md
Title: MedTech service and Teams notifications - Azure Health Data Services
-description: In this article, you'll learn how to use the MedTech service and Teams notifications
+description: In this article, you'll learn how to use the MedTech service and Teams notifications
In this article, we'll explore using the MedTech service and Microsoft Teams for
## MedTech service and Teams notifications reference architecture
-When combining MedTech service, a Fast Healthcare Interoperability Resources (FHIR&#174;) service, and Teams, you can enable multiple care solutions.
+When combining MedTech service, a Fast Healthcare Interoperability Resources (FHIR&#174;) service, and Teams, you can enable multiple care solutions.
-Below is the MedTech service to Teams notifications conceptual architecture for enabling the MedTech service, FHIR, and Teams Patient App.
+Below is the MedTech service to Teams notifications conceptual architecture for enabling the MedTech service, FHIR, and Teams Patient App.
You can even embed Power BI Dashboards inside the Microsoft Teams client. For more information on embedding Power BI in Microsoft Teams, visit [here](/power-bi/collaborate-share/service-embed-report-microsoft-teams). :::image type="content" source="media/iot-concepts/iot-connector-teams.png" alt-text="Screenshot of the MedTech service and Teams." lightbox="media/iot-concepts/iot-connector-teams.png":::
-The MedTech service for can ingest IoT data from most IoT devices or gateways regardless of location, data center, or cloud.
+The MedTech service can ingest IoT data from most IoT devices or gateways regardless of location, data center, or cloud.
We do encourage the use of Azure IoT services to assist with device/gateway connectivity.
We do encourage the use of Azure IoT services to assist with device/gateway conn
For some solutions, Azure IoT Central can be used in place of Azure IoT Hub.
-Azure IoT Edge can be used in with IoT Hub to create an on-premise end point for devices and/or in-device connectivity.
+Azure IoT Edge can be used with IoT Hub to create an on-premises endpoint for devices and/or in-device connectivity.
:::image type="content" source="media/iot-concepts/iot-connector-iot-edge-teams.png" alt-text="Screenshot of the MedTech service and IoT Edge." lightbox="media/iot-concepts/iot-connector-iot-edge-teams.png":::
iot-central Concepts Faq Apaas Paas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-apaas-paas.md
Title: Move from IoT Central to a PaaS solution | Microsoft Docs description: How do I move between aPaaS and PaaS solution approaches?--++ Last updated 06/09/2022
iot-central Overview Iot Central Api Tour https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-api-tour.md
Title: Take a tour of the Azure IoT Central API | Microsoft Docs
description: Become familiar with the key areas of the Azure IoT Central REST API. Use the API to create, manage, and use your IoT solution from client applications. Previously updated : 01/25/2022 Last updated : 06/10/2022
iot-central Overview Iot Central Tour https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-tour.md
Title: Take a tour of the Azure IoT Central UI | Microsoft Docs description: Become familiar with the key areas of the Azure IoT Central UI that you use to create, manage, and use your IoT solution.-- Previously updated : 12/21/2021++ Last updated : 06/10/2022
iot-central Overview Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central.md
Title: What is Azure IoT Central | Microsoft Docs
description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. It helps to reduce the burden and cost of IoT management operations, and development. This article provides an overview of the features of Azure IoT Central. Previously updated : 12/22/2021 Last updated : 06/09/2022
iot-central Quick Configure Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-configure-rules.md
Title: Quickstart - Configure rules and actions in Azure IoT Central
description: This quickstart shows you how to configure telemetry-based rules and actions in your IoT Central application. Previously updated : 12/22/2021 Last updated : 06/10/2022
iot-central Quick Deploy Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-deploy-iot-central.md
Title: Quickstart - Connect a device to an Azure IoT Central application | Micro
description: Quickstart - Connect your first device to a new IoT Central application. This quickstart uses a smartphone app from either the Google Play or Apple app store as an IoT device. Previously updated : 01/13/2022 Last updated : 06/08/2022
iot-central Troubleshoot Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/troubleshoot-data-export.md
description: Troubleshoot issues with data exports in IoT Central
Previously updated : 10/26/2021 Last updated : 06/10/2022
iot-dps How To Troubleshoot Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-troubleshoot-dps.md
Use this table to understand and resolve common errors.
|-||| | 400 | The body of the request is not valid; for example, it cannot be parsed, or the object cannot be validated.| 400 Bad format | | 401 | The authorization token cannot be validated; for example, it is expired or does not apply to the request's URI. This error code is also returned to devices as part of the TPM attestation flow. | 401 Unauthorized|
-| 404 | The Device Provisioning Service instance, or a resource (e.g. an enrollment) does not exist. |404 Not Found |
+| 404 | The Device Provisioning Service instance, or a resource (e.g. an enrollment) does not exist. | 404 Not Found|
+| 405 | The request method is known by the service, but the target service doesn't support this method; for example, a REST operation is missing the enrollment or registration ID parameters. | 405 Method Not Allowed |
+| 409 | The request could not be completed due to a conflict with the current state of the target Device Provisioning Service instance; for example, the customer has already created the data point and is attempting to create the same data point again. | 409 Conflict |
| 412 | The ETag in the request does not match the ETag of the existing resource, as per RFC7232. | 412 Precondition failed |
+| 415 | The server refuses to accept the request because the payload is in an unsupported format. For supported formats, see [IoT Hub Device Provisioning Service REST API](/rest/api/iot-dps/). | 415 Unsupported Media Type |
| 429 | Operations are being throttled by the service. For specific service limits, see [IoT Hub Device Provisioning Service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#iot-hub-device-provisioning-service-limits). | 429 Too many requests | | 500 | An internal error occurred. | 500 Internal Server Error|
iot-edge Iot Edge For Linux On Windows Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-support.md
Azure IoT Edge for Linux on Windows supports the following architectures:
| EFLOW 1.1 LTS | ![AMD64](./media/support/green-check.png) | | | EFLOW Continuous Release (CR) ([Public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)) | ![AMD64](./media/support/green-check.png) | ![ARM64](./media/support/green-check.png) |
+For more information about Windows ARM64 supported processors, see [Windows Processor Requirements](/windows-hardware/design/minimum/windows-processor-requirements).
## Virtual machines
iot-hub Iot Hub Devguide C2d Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-c2d-guidance.md
Here is a detailed comparison of the various cloud-to-device communication optio
| Data flow | Two-way. The device app can respond to the method right away. The solution back end receives the outcome contextually to the request. | One-way. The device app receives a notification with the property change. | One-way. The device app receives the message | Durability | Disconnected devices are not contacted. The solution back end is notified that the device is not connected. | Property values are preserved in the device twin. Device will read it at next reconnection. Property values are retrievable with the [IoT Hub query language](iot-hub-devguide-query-language.md). | Messages can be retained by IoT Hub for up to 48 hours. | | Targets | Single device using **deviceId**, or multiple devices using [jobs](iot-hub-devguide-jobs.md). | Single device using **deviceId**, or multiple devices using [jobs](iot-hub-devguide-jobs.md). | Single device by **deviceId**. |
-| Size | Maximum direct method payload size is 128 KB. | Maximum desired properties size is 32 KB. | Up to 64 KB messages. |
+| Size | Maximum direct method payload size is 128 KB for the request and 128 KB for the response. | Maximum desired properties size is 32 KB. | Up to 64 KB messages. |
| Frequency | High. For more information, see [IoT Hub limits](iot-hub-devguide-quotas-throttling.md). | Medium. For more information, see [IoT Hub limits](iot-hub-devguide-quotas-throttling.md). | Low. For more information, see [IoT Hub limits](iot-hub-devguide-quotas-throttling.md). | | Protocol | Available using MQTT or AMQP. | Available using MQTT or AMQP. | Available on all protocols. Device must poll when using HTTPS. |
iot-hub Iot Hub Devguide Quotas Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-quotas-throttling.md
IoT Hub enforces other operational limits:
| Message enrichments | Paid SKU hubs can have up to 10 message enrichments. Free SKU hubs can have up to 2 message enrichments.| | Device-to-cloud messaging | Maximum message size 256 KB | | Cloud-to-device messaging<sup>1</sup> | Maximum message size 64 KB. Maximum pending messages for delivery is 50 per device. |
-| Direct method<sup>1</sup> | Maximum direct method payload size is 128 KB. |
+| Direct method<sup>1</sup> | Maximum direct method payload size is 128 KB for the request and 128 KB for the response. |
| Automatic device and module configurations<sup>1</sup> | 100 configurations per paid SKU hub. 10 configurations per free SKU hub. | | IoT Edge automatic deployments<sup>1</sup> | 50 modules per deployment. 100 deployments (including layered deployments) per paid SKU hub. 10 deployments per free SKU hub. | | Twins<sup>1</sup> | Maximum size of desired properties and reported properties sections are 32 KB each. Maximum size of tags section is 8 KB. Maximum size of each individual property in every section is 4 KB. |
key-vault Private Link Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/private-link-diagnostics.md
This logic means that if the Virtual Network is linked to a Private DNS Zone wit
As you can see, the name resolution is under your control. The rationales for this design are: -- You may have a complex scenario that involves custom DNS servers and integration with on-premise networks. In that case, you need to control how names are translated to IP addresses.
+- You may have a complex scenario that involves custom DNS servers and integration with on-premises networks. In that case, you need to control how names are translated to IP addresses.
- You may need to access a key vault without private links. In that case, resolving the hostname from the Virtual Network must return the public IP address, and this happens because key vaults without private links don't have the `privatelink` alias in the name registration. ## 7. Validate that requests to key vault use private link
key-vault Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/best-practices.md
Managed HSM is a cloud service that safeguards encryption keys. As these keys ar
- Create an [Azure Active Directory Security Group](../../active-directory/fundamentals/active-directory-manage-groups.md) for the HSM Administrators (instead of assigning Administrator role to individuals). This will prevent "administration lock-out" in case of individual account deletion. - Lock down access to your management groups, subscriptions, resource groups and Managed HSMs - Use Azure RBAC to control access to your management groups, subscriptions, and resource groups - Create per key role assignments using [Managed HSM local RBAC](access-control.md#data-plane-and-managed-hsm-local-rbac).-- To maintain separation of duties avoid assigning multiple roles to same principals.
+- To maintain separation of duties, avoid assigning multiple roles to the same principals.
- Use the principle of least privilege to assign roles. - Create custom role definitions with a precise set of permissions.
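As a hedged illustration of a per-key role assignment, the following Azure CLI sketch grants a data-plane role scoped to a single key; the HSM name, key name, and assignee are placeholders rather than values from this article:

```azurecli-interactive
# Grant the Crypto User role on one key only, instead of on the whole HSM.
az keyvault role assignment create --hsm-name ContosoMHSM \
    --role "Managed HSM Crypto User" \
    --assignee user@contoso.com \
    --scope /keys/myrsakey
```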
Managed HSM is a cloud service that safeguards encryption keys. As these keys ar
## Backup -- Make sure you take regular backups of your HSM. Backups can be done at the HSM level and for specific keys.
+- Make sure you take regular backups of your HSM. Backups can be done at the HSM level and for specific keys.
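For example, a full HSM backup and a single-key backup can be taken with the Azure CLI. This is a sketch; the HSM name, storage account, container, SAS token, and key name are placeholders:

```azurecli-interactive
# Full HSM backup to a storage container, using a SAS token stored in $sas.
az keyvault backup start --hsm-name ContosoMHSM \
    --storage-account-name mhsmbackupstorage \
    --blob-container-name mhsmbackups \
    --storage-container-SAS-token $sas

# Backup of a single key to a local file.
az keyvault key backup --hsm-name ContosoMHSM --name myrsakey --file myrsakey.backup
```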
## Turn on logging
Managed HSM is a cloud service that safeguards encryption keys. As these keys ar
> [!NOTE] > Keys created or imported into Managed HSM are not exportable. -- To ensure long term portability and key durability, generate keys in your on-premise HSM and [import them to Managed HSM](hsm-protected-keys-byok.md). You will have a copy of your key securely stored in your on-premise HSM for future use.
+- To ensure long-term portability and key durability, generate keys in your on-premises HSM and [import them to Managed HSM (BYOK)](hsm-protected-keys-byok.md). You will have a copy of your key securely stored in your on-premises HSM for future use.
## Next steps
key-vault Key Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/key-management.md
Use `az keyvault key create` command to create a key.
### Create an RSA key
-The example below shows how to create a 3072-bit **RSA** key that will be only used for **wrapKey, unwrapKey** operations (--ops).
+The example below shows how to create a 3072-bit **RSA** key that will only be used for **wrapKey, unwrapKey** operations (--ops).
```azurecli-interactive
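# Hedged sketch (the HSM and key names are placeholders): create a 3072-bit RSA-HSM key
# that can only be used for the wrapKey and unwrapKey operations.
az keyvault key create --hsm-name ContosoHSM --name myrsakey --kty RSA-HSM --size 3072 --ops wrapKey unwrapKey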
az keyvault key import --hsm-name ContosoHSM --name myrsakey --pem-file mycert.k
az keyvault key recover --id https://ContosoMHSM.managedhsm.azure.net/deletedKeys/myrsakey --pem-file mycert.key --password 'mypassword' ```
-To import a key from your on-premise HSM to managed HSM, see [Import HSM-protected keys to Managed HSM (BYOK)](hsm-protected-keys-byok.md)
+To import a key from your on-premises HSM to managed HSM, see [Import HSM-protected keys to Managed HSM (BYOK)](hsm-protected-keys-byok.md)
## Next steps
key-vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/overview.md
# What is Azure Key Vault Managed HSM?
-Azure Key Vault Managed HSM (Hardware Security Module) is a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using **FIPS 140-2 Level 3** validated HSMs.
+Azure Key Vault Managed HSM (Hardware Security Module) is a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using **FIPS 140-2 Level 3** validated HSMs.
For pricing information, please see Managed HSM Pools section on [Azure Key Vault pricing page](https://azure.microsoft.com/pricing/details/key-vault/). For supported key types, see [About keys](../keys/about-keys.md).
-> [!NOTE]
+> [!NOTE]
> The term "Managed HSM instance" is synonymous with "Managed HSM pool". To avoid confusion, we use "Managed HSM instance" throughout these articles. ## Why use Managed HSM? ### Fully managed, highly available, single-tenant HSM as a service -- **Fully managed**: HSM provisioning, configuration, patching, and maintenance is handled by the service.
+- **Fully managed**: HSM provisioning, configuration, patching, and maintenance are handled by the service.
- **Highly available and zone resilient** (where Availability zones are supported): Each HSM cluster consists of multiple HSM partitions that span across at least two availability zones. If the hardware fails, member partitions for your HSM cluster will be automatically migrated to healthy nodes. - **Single-tenant**: Each Managed HSM instance is dedicated to a single customer and consists of a cluster of multiple HSM partitions. Each HSM cluster uses a separate customer-specific security domain that cryptographically isolates each customer's HSM cluster.
For pricing information, please see Managed HSM Pools section on [Azure Key Vaul
- **Monitor and audit**: fully integrated with Azure monitor. Get complete logs of all activity via Azure Monitor. Use Azure Log Analytics for analytics and alerts. - **Data residency**: Managed HSM doesn't store/process customer data outside the region the customer deploys the HSM instance in.
-### Integrated with Azure and Microsoft PaaS/SaaS services
+### Integrated with Azure and Microsoft PaaS/SaaS services
- Generate (or import using [BYOK](hsm-protected-keys-byok.md)) keys and use them to encrypt your data at rest in Azure services such as [Azure Storage](../../storage/common/customer-managed-keys-overview.md), [Azure SQL](/azure/azure-sql/database/transparent-data-encryption-byok-overview), [Azure Information Protection](/azure/information-protection/byok-price-restrictions), and [Customer Key for Microsoft 365](/microsoft-365/compliance/customer-key-set-up). For a more complete list of Azure services which work with Managed HSM, see [Data Encryption Models](../../security/fundamentals/encryption-models.md#supporting-services).
For pricing information, please see Managed HSM Pools section on [Azure Key Vaul
- Easily migrate your existing applications that use a vault (a multi-tenant) to use Managed HSMs. - Use same application development and deployment patterns for all your applications irrespective of key management solution in use: multi-tenant vaults or single-tenant Managed HSMs.
-### Import keys from your on-premise HSMs
+### Import keys from your on-premises HSMs
- Generate HSM-protected keys in your on-premises HSM and import them securely into Managed HSM.
lab-services Class Type Matlab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-matlab.md
[MATLAB](https://www.mathworks.com/products/matlab.html) is a programming platform from [MathWorks](https://www.mathworks.com/), which combines computational power and visualization. MATLAB is a popular tool for mathematics, engineering, physics, and chemistry.
-If you're using a [campus-wide license](https://www.mathworks.com/academia/tah-support-program/administrators.html), see directions at [download MATLAB installation files](https://www.mathworks.com/matlabcentral/answers/259632-how-can-i-get-matlab-installation-files-for-use-on-an-offline-machine) to download the MATLAB installer files on the template machine.
+If you're using a [campus-wide license](https://www.mathworks.com/academia/tah-support-program/administrators.html), see directions at [download MATLAB installation files](https://www.mathworks.com/matlabcentral/answers/259632-how-can-i-get-matlab-installation-files-for-use-on-an-offline-machine) to download the MATLAB installer files on the template machine.
In this article, we'll show you how to set up a class that uses MATLAB client software with a license server.
Before creating the lab plan, you'll need to set up the server to run the [Netwo
For detailed instructions on how to install a licensing server, see [Install Network License Manager with Internet Connection](https://www.mathworks.com/help/install/ug/install-network-license-manager-with-internet-connection.html). To enable borrowing, see [Borrow License](https://www.mathworks.com/help/install/license/borrow-licenses.html).
-Assuming the license server is located in an on-premise network or a private network within Azure, youΓÇÖll need to [Connect to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md) when creating your [lab plan](./tutorial-setup-lab-plan.md).
+Assuming the license server is located in an on-premises network or a private network within Azure, you'll need to [Connect to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md) when creating your [lab plan](./tutorial-setup-lab-plan.md).
> [!IMPORTANT] > [Advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) must be enabled during the creation of your lab plan. It can't be added later.
You must be a license administrator to get the installation files, license file,
1. Under the **My Software** section of the account page, select the license attached to the Network License Manager setup for the lab. 1. On the license detail page, select **Download Products**. 1. Wait for the installer to self-extract.
-1. Start the installer.
+1. Start the installer.
1. On the **Sign in to your MathWorks Account** page, enter your MathWorks account details. 1. On the **MathWorks License Agreement** page, accept the terms and select the **Next** button. 1. Select the **Advanced Options** drop-down and choose the **I want to download without installing** option. 1. On the **Select destination folder**, select **Next**. 1. Select **Windows** as the computer platform to install MATLAB. 1. On the **Select product** page, ensure that MATLAB is selected along with any other MathWorks products you want to install.
-1. On the **Confirm Selections and Download** page, select **Begin Download**.
+1. On the **Confirm Selections and Download** page, select **Begin Download**.
1. Wait for the selected products to download, and then select **Finish**. You can also download an ISO image from the MathWorks website.
For a class of 25 students with 20 hours of scheduled class time and 10 hours of
25 students \* (20 scheduled hours + 10 quota hours) \* 55 lab units \* 0.01 USD per hour = 412.50 USD >[!IMPORTANT]
-> Cost estimate is for example purposes only. For current details on pricing, see [Azure Lab Services Pricing](https://azure.microsoft.com/pricing/details/lab-services/).
+> Cost estimate is for example purposes only. For current details on pricing, see [Azure Lab Services Pricing](https://azure.microsoft.com/pricing/details/lab-services/).
## Next steps
lab-services Class Type Solidworks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-solidworks.md
Title: Set up a SOLIDWORKS lab for engineering with Azure Lab Services | Microsoft Docs
-description: Learn how to set up a lab for engineering courses using SOLIDWORKS.
+description: Learn how to set up a lab for engineering courses using SOLIDWORKS.
Last updated 01/05/2022
In this article, we'll show how to set up a class that uses SOLIDWORKS 2019 an
## License server
-SOLIDWORKS Network Licensing requires that you have SolidNetWork License Manager installed and activated on your license server. This license server is typically located in either your on-premise network or a private network within Azure. For more information on how to set up SolidNetWork License Manager on your server, see [Installing and Activating a License Manager](https://help.solidworks.com/2019/English/Installation/install_guide/t_installing_snl_lic_mgr.htm) in the SOLIDWORKS install guide. Remember the **port number** and [**serial number**](https://help.solidworks.com/2019/english/installation/install_guide/r_hid_state_serial_number.htm) that are used since they'll be needed in later steps.
+SOLIDWORKS Network Licensing requires that you have SolidNetWork License Manager installed and activated on your license server. This license server is typically located in either your on-premises network or a private network within Azure. For more information on how to set up SolidNetWork License Manager on your server, see [Installing and Activating a License Manager](https://help.solidworks.com/2019/English/Installation/install_guide/t_installing_snl_lic_mgr.htm) in the SOLIDWORKS install guide. Remember the **port number** and [**serial number**](https://help.solidworks.com/2019/english/installation/install_guide/r_hid_state_serial_number.htm) that are used since they'll be needed in later steps.
After your license server is set up, you'll need to [Connect to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md) in your [lab plan](./tutorial-setup-lab-plan.md)
After your license server is set up, you'll need to [Connect to your virtual n
> [Advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) must be enabled during the creation of your lab plan. It can't be added later. > [!NOTE]
-> You should verify that the appropriate ports are opened on your firewalls to allow communication between the lab virtual machines and the license server.
+> You should verify that the appropriate ports are opened on your firewalls to allow communication between the lab virtual machines and the license server.
See the instructions on [Modifying License Manager Computer Ports for Windows Firewall](http://help.solidworks.com/2019/english/installation/install_guide/t_mod_ports_on_lic_mgr_for_firewall.htm) that show how to add inbound and outbound rules to the license server's firewall. You may also need to open up ports to the lab virtual machines. For more information on firewall settings and finding the lab's public IP, see [firewall settings for labs](./how-to-configure-firewall-settings.md).
For instructions on how to create a lab, see [Tutorial: Set up a lab](tutorial-s
| Lab settings | Value/instructions | | | |
-| Virtual Machine Size | **Small GPU (Visualization)**. This VM is best suited for remote visualization, streaming, gaming, encoding using frameworks such as OpenGL and DirectX.|
+| Virtual Machine Size | **Small GPU (Visualization)**. This VM is best suited for remote visualization, streaming, gaming, encoding using frameworks such as OpenGL and DirectX.|
| Virtual Machine Image | Windows 10 Pro | > [!NOTE]
The steps in this section show how to set up your template virtual machine by do
1. Download the installation files for SOLIDWORKS client software. You have two options for downloading: - Download from [SOLIDWORKS customer portal](https://login.solidworks.com/nidp/idff/sso?id=cpenglish&sid=1&option=credential&sid=1&target=https%3A%2F%2Fcustomerportal.solidworks.com%2F). - Download from a directory on a server. If you used this option, you need to ensure that the server is accessible from the template virtual machine. For example, this server may be located in the same virtual network that is peered with your lab account.
-
+ For details, see [Installation on Individual Computers](http://help.solidworks.com/2019/english/Installation/install_guide/c_installing_on_individual_computers.htm?id=fc149e8a968a422a89e2a943265758d3#Pg0) in the SOLIDWORKS install guide. 1. Once the installation files are downloaded, install the client software using SOLIDWORKS Installation Manager. See details on [Installing a License Client](http://help.solidworks.com/2019/english/installation/install_guide/t_installing_snl_license_client.htm) in the SOLIDWORKS install guide.
Let's cover a possible cost estimate for this class. This estimate doesn't inclu
25 students \* (20 scheduled hours + 10 quota hours) \* 160 Lab Units * 0.01 USD per hour = 1200.00 USD >[!IMPORTANT]
-> Cost estimate is for example purposes only. For current details on pricing, see [Azure Lab Services Pricing](https://azure.microsoft.com/pricing/details/lab-services/).
+> Cost estimate is for example purposes only. For current details on pricing, see [Azure Lab Services Pricing](https://azure.microsoft.com/pricing/details/lab-services/).
## Next steps
logic-apps Single Tenant Overview Compare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/single-tenant-overview-compare.md
ms.suite: integration Previously updated : 06/01/2022 Last updated : 06/13/2022
Azure Logic Apps is a cloud-based platform for creating and running automated *logic app workflows* that integrate your apps, data, services, and systems. With this platform, you can quickly develop highly scalable integration solutions for your enterprise and business-to-business (B2B) scenarios. To create a logic app, you use either the **Logic App (Consumption)** resource type or the **Logic App (Standard)** resource type. The Consumption resource type runs in the *multi-tenant* Azure Logic Apps or *integration service environment*, while the Standard resource type runs in *single-tenant* Azure Logic Apps environment.
-Before you choose which resource type to use, review this article to learn how the resources types and service environments compare to each other. You can then decide which type is best to use, based on your scenario's needs, solution requirements, and the environment where you want to deploy, host, and run your workflows.
+Before you choose which resource type to use, review this article to learn how the resources types and service environments compare to each other. You can then decide the type that's best for your scenario's needs, solution requirements, and the environment where you want to deploy and run your workflows.
If you're new to Azure Logic Apps, review the following documentation:
If you're new to Azure Logic Apps, review the following documentation:
To create logic app workflows, you choose the **Logic App** resource type based on your scenario, solution requirements, the capabilities that you want, and the environment where you want to run your workflows.
-The following table briefly summarizes differences between the **Logic App (Standard)** resource type and the **Logic App (Consumption)** resource type. You'll also learn how the *single-tenant* environment compares to the *multi-tenant* and *integration service environment (ISE)* for deploying, hosting, and running your logic app workflows.
+The following table briefly summarizes differences between the **Logic App (Standard)** resource type and the **Logic App (Consumption)** resource type. You also learn how the *single-tenant* environment differs from the *multi-tenant* environment and *integration service environment (ISE)* for deploying, hosting, and running your logic app workflows.
[!INCLUDE [Logic app resource type and environment differences](../../includes/logic-apps-resource-environment-differences-table.md)]
To learn more about portability, flexibility, and performance improvements, cont
When you create logic apps using the **Logic App (Standard)** resource type, you can deploy and run your workflows in other environments, such as [Azure App Service Environment v3 (Windows plans only)](../app-service/environment/overview.md). If you use Visual Studio Code with the **Azure Logic Apps (Standard)** extension, you can *locally* develop, build, and run your workflows in your development environment without having to deploy to Azure. If your scenario requires containers, [create single-tenant based logic apps using Azure Arc-enabled Logic Apps](azure-arc-enabled-logic-apps-create-deploy-workflows.md). For more information, review [What is Azure Arc enabled Logic Apps?](azure-arc-enabled-logic-apps-overview.md)
-These capabilities provide major improvements and substantial benefits compared to the multi-tenant model, which requires you to develop against an existing running resource in Azure. Also, the multi-tenant model for automating **Logic App (Consumption)** resource deployment is completely based on Azure Resource Manager templates (ARM templates), which combine and handle resource provisioning for both apps and infrastructure.
+These capabilities provide major improvements and substantial benefits compared to the multi-tenant model, which requires you to develop against an existing running resource in Azure. Also, the multi-tenant model for automating **Logic App (Consumption)** resource deployment is based on Azure Resource Manager templates (ARM templates), which combine and handle resource provisioning for both apps and infrastructure.
With the **Logic App (Standard)** resource type, deployment becomes easier because you can separate app deployment from infrastructure deployment. You can package the single-tenant Azure Logic Apps runtime and workflows together as part of your logic app. You can use generic steps or tasks that build, assemble, and zip your logic app resources into ready-to-deploy artifacts. To deploy your infrastructure, you can still use ARM templates to separately provision those resources along with other processes and pipelines that you use for those purposes. To deploy your app, copy the artifacts to the host environment and then start your apps to run your workflows. Or, integrate your artifacts into deployment pipelines using the tools and processes that you already know and use. That way, you can deploy using your own chosen tools, no matter the technology stack that you use for development.
-By using standard build and deploy options, you can focus on app development separately from infrastructure deployment. As a result, you get a more generic project model where you can apply many similar or the same deployment options that you use for a generic app. You also benefit from a more consistent experience for building deployment pipelines around your app projects and for running the required tests and validations before publishing to production.
+By using standard build and deploy options, you can focus on app development separately from infrastructure deployment. As a result, you get a more generic project model where you can apply many similar or the same deployment options that you use for a generic app. You also benefit from a more consistent experience when you build deployment pipelines for your apps and when you run the required tests and validations before you publish to production.
<a name="performance"></a>
To create a logic app based on the environment that you want, you have multiple
|--||| | Azure portal | **Logic App (Standard)** resource type | [Create integration workflows for single-tenant Logic Apps - Azure portal](create-single-tenant-workflows-azure-portal.md) | | Visual Studio Code | [**Azure Logic Apps (Standard)** extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurelogicapps) | [Create integration workflows for single-tenant Logic Apps - Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md) |
-| Azure CLI | Logic Apps Azure CLI extension | Not yet available |
+| Azure CLI | Logic Apps Azure CLI extension | [az logicapp](/cli/azure/logicapp) |
+| Azure Resource Manager | - [Local](https://github.com/Azure/logicapps/tree/master/azure-devops-sample#local) <br>- [DevOps](https://github.com/Azure/logicapps/tree/master/azure-devops-sample#devops) | [Single-tenant Azure Logic Apps](https://github.com/Azure/logicapps/tree/master/azure-devops-sample) |
+| Azure Arc-enabled Logic Apps | [Azure Arc-enabled Logic Apps sample](https://github.com/Azure/logicapps/tree/master/arc-enabled-logic-app-sample) | - [What is Azure Arc-enabled Logic Apps?](azure-arc-enabled-logic-apps-overview.md) <br><br>- [Create and deploy single-tenant based logic app workflows with Azure Arc-enabled Logic Apps](azure-arc-enabled-logic-apps-create-deploy-workflows.md) |
|||| **Multi-tenant environment**
To create a logic app based on the environment that you want, you have multiple
Although your development experiences differ based on whether you create **Consumption** or **Standard** logic app resources, you can find and access all your deployed logic apps under your Azure subscription.
-For example, in the Azure portal, the **Logic apps** page shows both **Consumption** and **Standard** logic app resource types. In Visual Studio Code, deployed logic apps appear under your Azure subscription, but they are grouped by the extension that you used, namely **Azure: Logic Apps (Consumption)** and **Azure: Logic Apps (Standard)**.
+For example, in the Azure portal, the **Logic apps** page shows both **Consumption** and **Standard** logic app resource types. In Visual Studio Code, deployed logic apps appear under your Azure subscription, but they're grouped by the extension that you used, namely **Azure: Logic Apps (Consumption)** and **Azure: Logic Apps (Standard)**.
<a name="stateful-stateless"></a>
With the **Logic App (Standard)** resource type, you can create these workflow t
* *Stateful*
- Create a stateful workflow when you need to keep, review, or reference data from previous events. These workflows save and transfer all the inputs and outputs for each action and their states to external storage, which makes reviewing the run details and history possible after each run finishes. Stateful workflows provide high resiliency if outages happen. After services and systems are restored, you can reconstruct interrupted runs from the saved state and rerun the workflows to completion. Stateful workflows can continue running for much longer than stateless workflows.
+ Create a stateful workflow when you need to keep, review, or reference data from previous events. These workflows save all the operations' inputs, outputs, and states to external storage. This information makes reviewing the workflow run details and history possible after each run finishes. Stateful workflows provide high resiliency if outages happen. After services and systems are restored, you can reconstruct interrupted runs from the saved state and rerun the workflows to completion. Stateful workflows can continue running for much longer than stateless workflows.
- By default, stateful workflows in both multi-tenant and single-tenant Azure Logic Apps run asynchronously. All HTTP-based actions follow the standard [asynchronous operation pattern](/azure/architecture/patterns/async-request-reply). This pattern specifies that after an HTTP action calls or sends a request to an endpoint, service, system, or API, the receiver immediately returns a ["202 ACCEPTED"](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.3) response. This code confirms that the receiver accepted the request but hasn't finished processing. The response can include a `location` header that specifies the URI and a refresh ID that the caller can use to poll or check the status for the asynchronous request until the receiver stops processing and returns a ["200 OK"](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) success response or other non-202 response. However, the caller doesn't have to wait for the request to finish processing and can continue to run the next action. For more information, see [Asynchronous microservice integration enforces microservice autonomy](/azure/architecture/microservices/design/interservice-communication#synchronous-versus-asynchronous-messaging).
+ By default, stateful workflows in both multi-tenant and single-tenant Azure Logic Apps run asynchronously. All HTTP-based actions follow the standard [asynchronous operation pattern](/azure/architecture/patterns/async-request-reply). After an HTTP action calls or sends a request to an endpoint, service, system, or API, the request receiver immediately returns a ["202 ACCEPTED"](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.3) response. This code confirms that the receiver accepted the request but hasn't finished processing. The response can include a `location` header that specifies the URI and a refresh ID that the caller can use to poll or check the status for the asynchronous request until the receiver stops processing and returns a ["200 OK"](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) success response or other non-202 response. However, the caller doesn't have to wait for the request to finish processing and can continue to run the next action. For more information, see [Asynchronous microservice integration enforces microservice autonomy](/azure/architecture/microservices/design/interservice-communication#synchronous-versus-asynchronous-messaging).
* *Stateless*
- Create a stateless workflow when you don't need to keep, review, or reference data from previous events in external storage after each run finishes for later review. These workflows save all the inputs and outputs for each action and their states *in memory only*, not in external storage. As a result, stateless workflows have shorter runs that are typically less than 5 minutes, faster performance with quicker response times, higher throughput, and reduced running costs because the run details and history aren't saved in external storage. However, if outages happen, interrupted runs aren't automatically restored, so the caller needs to manually resubmit interrupted runs.
+ Create a stateless workflow when you don't need to keep, review, or reference data from previous events in external storage after each run finishes for later review. These workflows save all the inputs and outputs for each action and their states *in memory only*, not in external storage. As a result, stateless workflows have shorter runs that usually finish in 5 minutes or less, faster performance with quicker response times, higher throughput, and reduced running costs because external storage doesn't save the workflow run details and history. However, if outages happen, interrupted runs aren't automatically restored, so the caller needs to manually resubmit interrupted runs.
- A stateless workflow provides the best performance when handling data or content, such as a file, that doesn't exceed 64 KB in *total* size. Larger content sizes, such as multiple large attachments, might significantly slow your workflow's performance or even cause your workflow to crash due to out-of-memory exceptions. If your workflow might have to handle larger content sizes, use a stateful workflow instead.
+ A stateless workflow provides the best performance when handling data or content that doesn't exceed 64 KB in *total* size, such as a file. Larger content sizes, such as multiple large attachments, might significantly slow your workflow's performance or even cause your workflow to crash due to out-of-memory exceptions. If your workflow might have to handle larger content sizes, use a stateful workflow instead.
In stateless workflows, [*managed connector actions*](../connectors/managed.md) are available, but *managed connector triggers* are unavailable. So, to start your workflow, select a [built-in trigger](../connectors/built-in.md) instead, such as the Request, Event Hubs, or Service Bus trigger. These triggers run natively on the Azure Logic Apps runtime. The Recurrence trigger is unavailable for stateless workflows and is available only for stateful workflows. For more information about limited, unavailable, or unsupported triggers, actions, and connectors, see [Changed, limited, unavailable, or unsupported capabilities](#limited-unavailable-unsupported).
With the **Logic App (Standard)** resource type, you can create these workflow t
| Supports chunking | No support for chunking | | Supports asynchronous operations | No support for asynchronous operations | | Edit default max run duration in host configuration | Best for workflows with max duration under 5 minutes |
-| Handles large messages | Best for handling small message sizes (under 64K) |
+| Handles large messages | Best for handling small message sizes (under 64 KB) |
||| </center>
The single-tenant model and **Logic App (Standard)** resource type include many
> For the built-in SQL Server version, only the **Execute Query** action can directly connect to Azure > virtual networks without using the [on-premises data gateway](logic-apps-gateway-connection.md).
- * You can create your own built-in connectors for any service that you need by using the [single-tenant Azure Logic Apps extensibility framework](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272). Similarly to built-in operations such as Azure Service Bus and SQL Server but unlike [custom managed connectors](../connectors/apis-list.md#custom-connectors-and-apis), which aren't currently supported, custom built-in connectors provide higher throughput, low latency, and local connectivity because they run in the same process as the single-tenant runtime.
+ * You can create your own built-in connectors for any service that you need by using the [single-tenant Azure Logic Apps extensibility framework](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272). Similar to built-in connectors such as Azure Service Bus and SQL Server, custom built-in connectors provide higher throughput, low latency, and local connectivity because they run in the same process as the single-tenant runtime. However, custom built-in connectors aren't similar to [custom managed connectors](../connectors/apis-list.md#custom-connectors-and-apis), which aren't currently supported.
The authoring capability is currently available only in Visual Studio Code, but isn't enabled by default. To create these connectors, [switch your project from extension bundle-based (Node.js) to NuGet package-based (.NET)](create-single-tenant-workflows-visual-studio-code.md#enable-built-in-connector-authoring). For more information, see [Azure Logic Apps Running Anywhere - Built-in connector extensibility](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272).
The single-tenant model and **Logic App (Standard)** resource type include many
## Changed, limited, unavailable, or unsupported capabilities
-For the **Logic App (Standard)** resource, these capabilities have changed, or they are currently limited, unavailable, or unsupported:
+For the **Logic App (Standard)** resource, these capabilities have changed, or they're currently limited, unavailable, or unsupported:
* **Triggers and actions**: [Built-in triggers and actions](../connectors/built-in.md) run natively in Azure Logic Apps, while managed connectors are hosted and run in Azure. For Standard workflows, some built-in triggers and actions are currently unavailable, such as Sliding Window, Batch, Azure App Service, and Azure API Management. To start a stateful or stateless workflow, use a built-in trigger such as the Request, Event Hubs, or Service Bus trigger. The Recurrence trigger is available for stateful workflows, but not stateless workflows. In the designer, built-in triggers and actions appear on the **Built-in** tab, while [managed connector triggers and actions](../connectors/managed.md) appear on the **Azure** tab.
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
For more information, see [Secure online endpoints](how-to-secure-online-endpoin
## Managed online endpoints vs Kubernetes online endpoints
-There are two types of online endpoints: **managed online endpoints** and **Kubernetes online endpoints**. Managed online endpoints help to deploy your ML models in a turnkey manner. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way. Managed online endpoints take care of serving, scaling, securing, and monitoring your models, freeing you from the overhead of setting up and managing the underlying infrastructure. The main example in this doc uses managed online endpoints for deployment.
+There are two types of online endpoints: **managed online endpoints** and **Kubernetes online endpoints**.
+
+Managed online endpoints help to deploy your ML models in a turnkey manner. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way. Managed online endpoints take care of serving, scaling, securing, and monitoring your models, freeing you from the overhead of setting up and managing the underlying infrastructure. The main example in this doc uses managed online endpoints for deployment.
+
+A Kubernetes online endpoint allows you to deploy models and serve online endpoints on your fully configured and managed [Kubernetes cluster anywhere](./how-to-attach-kubernetes-anywhere.md), with CPUs or GPUs.
The following table highlights the key differences between managed online endpoints and Kubernetes online endpoints.
The following table highlights the key differences between managed online endpoi
| **Recommended users** | Users who want a managed model deployment and enhanced MLOps experience | Users who prefer Kubernetes and can self-manage infrastructure requirements | | **Infrastructure management** | Managed compute provisioning, scaling, host OS image updates, and security hardening | User responsibility | | **Compute type** | Managed (AmlCompute) | Kubernetes cluster (Kubernetes) |
-| **Out-of-box monitoring** | [Azure Monitoring](how-to-monitor-online-endpoints.md) <br> (includes key metrics like latency and throughput) | Unsupported |
-| **Out-of-box logging** | [Azure Logs and Log Analytics at endpoint level](how-to-deploy-managed-online-endpoints.md#optional-integrate-with-log-analytics) | Supported |
+| **Out-of-box monitoring** | [Azure Monitoring](how-to-monitor-online-endpoints.md) <br> (includes key metrics like latency and throughput) | Supported |
+| **Out-of-box logging** | [Azure Logs and Log Analytics at endpoint level](how-to-deploy-managed-online-endpoints.md#optional-integrate-with-log-analytics) | Unsupported |
| **Application Insights** | Supported | Supported | | **Managed identity** | [Supported](how-to-access-resources-from-endpoints-managed-identities.md) | Supported | | **Virtual Network (VNET)** | [Supported](how-to-secure-online-endpoint.md) (preview) | Supported |
machine-learning How To Attach Kubernetes Anywhere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-kubernetes-anywhere.md
Title: Configure Kubernetes cluster (Preview)
+ Title: Configure Kubernetes cluster
description: Configure and attach an existing Kubernetes cluster in any infrastructure across on-premises and multi-cloud to build, train, and deploy models with a seamless Azure ML experience.
-# Configure Kubernetes cluster for Azure Machine Learning (Preview)
+# Configure Kubernetes cluster for Azure Machine Learning
-Using Kubernetes with Azure Machine Learning enables you to build, train, and deploy models in any infrastructure on-premises and across multi-cloud. With an AzureML extension deployment on Kubernetes, you can instantly onboard teams of ML professionals with AzureML service capabilities. These services include full machine learning lifecycle and automation with MLOps in hybrid cloud and multi-cloud.
+Azure Machine Learning Kubernetes compute enables you to run training jobs such as AutoML, pipeline, and distributed jobs, or to deploy models as online endpoints or batch endpoints. Azure ML Kubernetes compute supports two kinds of Kubernetes clusters:
+* **[Azure Kubernetes Service](https://azure.microsoft.com/services/kubernetes-service/)** (AKS) cluster in Azure. With your own managed AKS cluster in Azure, you can gain the security and controls to meet compliance requirements as well as the flexibility to manage teams' ML workloads.
+* **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** (Arc Kubernetes) cluster. With an Arc Kubernetes cluster, you can train or deploy models in any infrastructure on-premises, across multi-cloud, or at the edge.
-You can easily bring AzureML capabilities to your Kubernetes cluster from cloud or on-premises by deploying AzureML extension.
-- For Azure Kubernetes Service (AKS) in Azure, deploy AzureML extension to the AKS directly. For more information, see [Deploy and manage cluster extensions for Azure Kubernetes Service (AKS)](../aks/cluster-extensions.md).-- For Kubernetes clusters on-premises or from other cloud providers, connect the cluster with Azure Arc first, then deploy AzureML extension to Azure Arc-enabled Kubernetes. For more information, see [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md).-
-In this article, you can learn about steps to configure and attach an existing Kubernetes cluster anywhere for Azure Machine Learning:
-* [Deploy AzureML extension to Kubernetes cluster](#deploy-azureml-extension)
-* [Attach a Kubernetes cluster to AzureML workspace](#attach-a-kubernetes-cluster-to-an-azureml-workspace)
-
-## Why use Azure Machine Learning Kubernetes?
-
-AzureML Kubernetes is customer fully configured and managed compute for machine learning. It can be used as both [training compute target](./concept-compute-target.md#train) and [inference compute target](./concept-compute-target.md#deploy). It provides the following benefits:
--- Harness existing heterogeneous or homogeneous Kubernetes cluster, with CPUs or GPUs.-- Share the same Kubernetes cluster in multiple AzureML Workspaces across region.-- Use the same Kubernetes cluster for different machine learning purposes, including model training, batch scoring, and real-time inference.-- Secure network communication between the cluster and cloud via Azure Private Link and Private Endpoint.-- Isolate team projects and machine learning workloads with Kubernetes node selector and namespace.-- [Target certain types of compute nodes and CPU/Memory/GPU resource allocation for training and inference workloads](./reference-kubernetes.md#create-and-use-instance-types-for-efficient-compute-resource-usage). -- [Connect with custom data sources for machine learning workloads using Kubernetes PV and PVC ](./reference-kubernetes.md#azureml-jobs-connect-with-on-premises-data-storage).
+In this article, you can learn about steps to configure an existing Kubernetes cluster for Azure Machine Learning:
+* [Deploy AzureML extension to Kubernetes cluster](#deploy-azureml-extension-to-kubernetes-cluster)
+* [Attach Kubernetes cluster to Azure ML workspace](#attach-a-kubernetes-cluster-to-an-azure-ml-workspace)
+* [Create instance types for efficient compute resource utilization](#create-and-use-instance-types-for-efficient-compute-resource-utilization)
## Prerequisites
-* A running Kubernetes cluster in [supported version and region](./reference-kubernetes.md#supported-kubernetes-version-and-region). **We recommend your cluster has a minimum of 4 vCPU cores and 8GB memory, around 2 vCPU cores and 3GB memory will be used by Azure Arc and AzureML extension components**.
-* Other than Azure Kubernetes Services (AKS) cluster in Azure, connect your Kubernetes cluster to Azure Arc. Follow instructions in [connect existing Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md).
-
- * If you have an AKS cluster in Azure, **Azure Arc connection is not required and not recommended**.
-
- * If you have Azure RedHat OpenShift Service (ARO) cluster or OpenShift Container Platform (OCP) cluster, follow another prerequisite step [here](./reference-kubernetes.md#prerequisites-for-aro-or-ocp-clusters) before AzureML extension deployment.
-* Cluster running behind an outbound proxy server or firewall needs additional network configurations. Fulfill the [network requirements](./how-to-access-azureml-behind-firewall.md#kubernetes-compute)
-* Install or upgrade Azure CLI to version >=2.16.0
-* Install the Azure CLI extension ```k8s-extension``` (version>=1.2.3) by running ```az extension add --name k8s-extension```
+* An AKS cluster is up and running in Azure, or
+* An Arc Kubernetes cluster is up and running. Follow the instructions in [connect existing Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md).
+ * If this is an Azure Red Hat OpenShift (ARO) cluster or an OpenShift Container Platform (OCP) cluster, satisfy the additional prerequisite steps [here](./reference-kubernetes.md#prerequisites-for-aro-or-ocp-clusters) before deploying the AzureML extension.
+* The Kubernetes cluster must have a minimum of 4 vCPU cores and 8 GB of memory.
+* A cluster running behind an outbound proxy server or firewall needs additional [network configurations](./how-to-access-azureml-behind-firewall.md#kubernetes-compute).
+* Install or upgrade Azure CLI to version 2.24.0 or higher.
+* Install or upgrade the Azure CLI extension ```k8s-extension``` to version 1.2.3 or higher (example commands follow this list).
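The CLI prerequisites above can be satisfied with a few commands. The following is a minimal sketch; the exact upgrade path depends on how the Azure CLI was originally installed:

```azurecli
# Check the installed Azure CLI version (2.24.0 or higher is required here).
az version

# Upgrade the Azure CLI in place (supported on CLI 2.11.0 and later; otherwise reinstall).
az upgrade

# Install the k8s-extension CLI extension, or update it if it is already present.
az extension add --name k8s-extension
az extension update --name k8s-extension
```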
-## What is AzureML extension
+## Deploy AzureML extension to Kubernetes cluster
-AzureML extension consists of a set of system components deployed to your Kubernetes cluster in `azureml` namespace, so you can enable your cluster to run an AzureML workload - model training jobs or model endpoints. You can use an Azure CLI command ```k8s-extension create``` to deploy AzureML extension. General available (GA) version of AzureML extension >= 1.1.1
+AzureML extension consists of a set of system components deployed to your Kubernetes cluster in the `azureml` namespace, so that you can enable your cluster to run Azure ML workloads such as model training jobs or model endpoints. You can use the Azure CLI command ```k8s-extension create``` to deploy the AzureML extension. The generally available (GA) version of the AzureML extension is 1.1.1 or higher.
For a detailed list of AzureML extension system components, see [AzureML extension components](./reference-kubernetes.md#azureml-extension-components).
-## Key considerations for AzureML extension deployment
-
-AzureML extension allows you to specify configuration settings needed for different workload support at deployment time. Before AzureML extension deployment, **read following carefully to avoid unnecessary extension deployment errors**:
-
- * Type of workload to enable for your cluster. ```enableTraining``` and ```enableInference``` config settings are your convenient choices here; `enableTraining` will enable **training** and **batch scoring** workload, `enableInference` will enable **real-time inference** workload.
- * For inference workload support, it requires ```azureml-fe``` router service to be deployed for routing incoming inference requests to model pod, and you would need to specify ```inferenceRouterServiceType``` config setting for ```azureml-fe```. ```azureml-fe``` can be deployed with one of following ```inferenceRouterServiceType```:
- * Type ```LoadBalancer```. Exposes ```azureml-fe``` externally using a cloud provider's load balancer. To specify this value, ensure that your cluster supports load balancer provisioning. Note most on-premises Kubernetes clusters might not support external load balancer.
- * Type ```NodePort```. Exposes ```azureml-fe``` on each Node's IP at a static port. You'll be able to contact ```azureml-fe```, from outside of cluster, by requesting ```<NodeIP>:<NodePort>```. Using ```NodePort``` also allows you to set up your own load balancing solution and SSL termination for ```azureml-fe```.
- * Type ```ClusterIP```. Exposes ```azureml-fe``` on a cluster-internal IP, and it makes ```azureml-fe``` only reachable from within the cluster. For ```azureml-fe``` to serve inference requests coming outside of cluster, it requires you to set up your own load balancing solution and SSL termination for ```azureml-fe```.
- * For inference workload support, to ensure high availability of ```azureml-fe``` routing service, AzureML extension deployment by default creates 3 replicas of ```azureml-fe``` for clusters having 3 nodes or more. If your cluster has **less than 3 nodes**, set ```inferenceLoadbalancerHA=False```.
- * For inference workload support, you would also want to consider using **HTTPS** to restrict access to model endpoints and secure the data that clients submit. For this purpose, you would need to specify either ```sslSecret``` config setting or combination of ```sslKeyPemFile``` and ```sslCertPemFile``` config settings. By default, AzureML extension deployment expects **HTTPS** support required, and you would need to provide above config setting. For development or test purposes, **HTTP** support is conveniently supported through config setting ```allowInsecureConnections=True```.
-
-For a complete list of configuration settings available to choose at AzureML deployment time, see [Review AzureML extension config settings](#review-azureml-extension-configuration-settings)
-
-## Deploy AzureML extension
### [CLI](#tab/deploy-extension-with-cli)

To deploy AzureML extension with CLI, use `az k8s-extension create` command passing in values for the mandatory parameters. We list 4 typical extension deployment scenarios for reference. To deploy extension for your production usage, please carefully read the complete list of [configuration settings](#review-azureml-extension-configuration-settings).

-- **Use AKS in Azure for a quick Proof of Concept, both training and inference workloads support**
+- **Use AKS cluster in Azure for a quick proof of concept, to run training jobs or to deploy models as online/batch endpoints**
- Ensure you have fulfilled [prerequisites](#prerequisites). For AzureML extension deployment on AKS, make sure to specify ```managedClusters``` value for ```--cluster-type``` parameter. Run the following Azure CLI command to deploy AzureML extension:
+ For AzureML extension deployment on AKS cluster, make sure to specify ```managedClusters``` value for ```--cluster-type``` parameter. Run the following Azure CLI command to deploy AzureML extension:
```azurecli
az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer allowInsecureConnections=True inferenceLoadBalancerHA=False --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
```

-- **Use Kubernetes at your lab for a quick Proof of Concept, training workload support only**
+- **Use Arc Kubernetes cluster outside of Azure for a quick proof of concept, to run training jobs only**
- Ensure you have fulfilled [prerequisites](#prerequisites). For AzureML extension deployment on Azure Arc connected cluster, you would need to specify ```connectedClusters``` value for ```--cluster-type``` parameter. Run following simple Azure CLI command to deploy AzureML extension:
+ For AzureML extension deployment on [Arc Kubernetes](../azure-arc/kubernetes/overview.md) cluster, you would need to specify ```connectedClusters``` value for ```--cluster-type``` parameter. Run the following Azure CLI command to deploy AzureML extension:
```azurecli
az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-RG-name> --scope cluster
```

- **Enable an AKS cluster in Azure for production training and inference workload**
- Ensure you have fulfilled [prerequisites](#prerequisites). For AzureML extension deployment on AKS, make sure to specify ```managedClusters``` value for ```--cluster-type``` parameter. Assuming your cluster has more than 3 nodes, and you will use an Azure public load balancer and HTTPS for inference workload support, run following Azure CLI command to deploy AzureML extension:
+ For AzureML extension deployment on AKS, make sure to specify ```managedClusters``` value for ```--cluster-type``` parameter. This example assumes your cluster has more than 3 nodes and that you will use an Azure public load balancer and HTTPS for inference workload support. Run the following Azure CLI command to deploy AzureML extension:
```azurecli
az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer sslCname=<ssl cname> --config-protected sslCertPemFile=<file-path-to-cert-PEM> sslKeyPemFile=<file-path-to-cert-KEY> --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
```

-- **Enable an Azure Arc connected cluster anywhere for production training and inference workload using NVIDIA GPUs**
+- **Enable an [Arc Kubernetes](../azure-arc/kubernetes/overview.md) cluster anywhere for production training and inference workload using NVIDIA GPUs**
- Ensure you have fulfilled [prerequisites](#prerequisites). For AzureML extension deployment on Azure Arc connected cluster, make sure to specify ```connectedClusters``` value for ```--cluster-type``` parameter. Assuming your cluster has more than 3 nodes, you will use a NodePort service type and HTTPS for inference workload support, run following Azure CLI command to deploy AzureML extension:
+ For AzureML extension deployment on [Arc Kubernetes](../azure-arc/kubernetes/overview.md) cluster, make sure to specify ```connectedClusters``` value for ```--cluster-type``` parameter. This example assumes your cluster has more than 3 nodes and that you will use a NodePort service type and HTTPS for inference workload support. Run the following Azure CLI command to deploy AzureML extension:
```azurecli
az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=NodePort sslCname=<ssl cname> installNvidiaDevicePlugin=True installDcgmExporter=True --config-protected sslCertPemFile=<file-path-to-cert-PEM> sslKeyPemFile=<file-path-to-cert-KEY> --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-RG-name> --scope cluster
```

### [Azure portal](#tab/portal)
-The UI experience to deploy extension is only available for **Azure Arc-enabled Kubernetes**. If you have an AKS cluster without Azure Arc connected, you need to use CLI to deploy AzureML extension.
+The UI experience to deploy the extension is only available for **[Arc Kubernetes](../azure-arc/kubernetes/overview.md)**. If you have an AKS cluster without an Azure Arc connection, you need to use the CLI to deploy the AzureML extension.
1. In the [Azure portal](https://ms.portal.azure.com/#home), navigate to **Kubernetes - Azure Arc** and select your cluster.
1. Select **Extensions** (under **Settings**), and then select **+ Add**.
The UI experience to deploy extension is only available for **Azure Arc-enabled
:::image type="content" source="media/how-to-attach-arc-kubernetes/deploy-extension-from-ui-extension-list.png" alt-text="Screenshot of selecting AzureML extension from Azure portal.":::
-1. Follow the prompts to deploy the extension. You can customize the installation by configuring the installtion in the tab of **Basics**, **Configurations** and **Advanced**. For a detailed list of AzureML extension configuration settings, see [AzureML extension configuration settings](#review-azureml-extension-configuration-settings).
+1. Follow the prompts to deploy the extension. You can customize the installation by configuring the installation in the **Basics**, **Configurations**, and **Advanced** tabs. For a detailed list of AzureML extension configuration settings, see [AzureML extension configuration settings](#review-azureml-extension-configuration-settings).
:::image type="content" source="media/how-to-attach-arc-kubernetes/deploy-extension-from-ui-settings.png" alt-text="Screenshot of configuring AzureML extension settings from Azure portal.":::

1. On the **Review + create** tab, select **Create**.
The UI experience to deploy extension is only available for **Azure Arc-enabled
:::image type="content" source="media/how-to-attach-arc-kubernetes/deploy-extension-from-ui-extension-detail.png" alt-text="Screenshot of installed AzureML extensions listing in Azure portal.":::
+### Key considerations for AzureML extension deployment
+
+AzureML extension deployment allows you to specify configuration settings needed for different workload support. Before AzureML extension deployment, **please read the following carefully to avoid unnecessary extension deployment errors**:
+
+ * Type of workload to enable for your cluster. ```enableTraining``` and ```enableInference``` config settings are your convenient choices here; `enableTraining` will enable **training** and **batch scoring** workload, `enableInference` will enable **real-time inference** workload.
+ * For real-time inference support, the ```azureml-fe``` router service must be deployed to route incoming inference requests to model pods, and you need to specify the ```inferenceRouterServiceType``` config setting for ```azureml-fe```. ```azureml-fe``` can be deployed with one of the following ```inferenceRouterServiceType``` values:
+ * Type ```LoadBalancer```. Exposes ```azureml-fe``` externally using a cloud provider's load balancer. To specify this value, ensure that your cluster supports load balancer provisioning. Note that most on-premises Kubernetes clusters don't support an external load balancer.
+ * Type ```NodePort```. Exposes ```azureml-fe``` on each Node's IP at a static port. You'll be able to contact ```azureml-fe```, from outside of cluster, by requesting ```<NodeIP>:<NodePort>```. Using ```NodePort``` also allows you to set up your own load balancing solution and SSL termination for ```azureml-fe```.
+ * Type ```ClusterIP```. Exposes ```azureml-fe``` on a cluster-internal IP, and it makes ```azureml-fe``` only reachable from within the cluster. For ```azureml-fe``` to serve inference requests coming outside of cluster, it requires you to set up your own load balancing solution and SSL termination for ```azureml-fe```.
+ * For real-time inference support, to ensure high availability of ```azureml-fe``` routing service, AzureML extension deployment by default creates 3 replicas of ```azureml-fe``` for clusters having 3 nodes or more. If your cluster has **less than 3 nodes**, set ```inferenceLoadbalancerHA=False```.
+ * For real-time inference support, you should also consider using **HTTPS** to restrict access to model endpoints and secure the data that clients submit. For this purpose, you need to specify either the ```sslSecret``` config setting or the combination of the ```sslKeyPemFile``` and ```sslCertPemFile``` config settings (a sketch using ```sslSecret``` follows after this list). By default, AzureML extension deployment expects **HTTPS** support and requires the above config settings. For development or testing purposes, **HTTP** support is conveniently provided through the config setting ```allowInsecureConnections=True```.
+
+For a complete list of configuration settings available to choose at AzureML deployment time, see [Review AzureML extension config settings](#review-azureml-extension-configuration-settings)
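If you prefer to keep the TLS material in the cluster instead of passing PEM file paths, the ```sslSecret``` setting can point at a Kubernetes secret in the `azureml` namespace. The following is a minimal sketch; the secret name is hypothetical, and it assumes local `cert.pem` and `key.pem` files plus an `azureml` namespace that either already exists or is created up front:

```bash
# Create the azureml namespace if it does not exist yet (idempotent).
kubectl create namespace azureml --dry-run=client -o yaml | kubectl apply -f -

# Store the PEM-encoded certificate and key under the expected keys cert.pem and key.pem.
kubectl create secret generic azureml-ssl-secret --namespace azureml \
  --from-file=cert.pem=./cert.pem \
  --from-file=key.pem=./key.pem

# Reference the secret at extension deployment time instead of sslCertPemFile/sslKeyPemFile.
az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes \
  --config enableInference=True inferenceRouterServiceType=LoadBalancer sslCname=<ssl cname> sslSecret=azureml-ssl-secret \
  --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> \
  --resource-group <your-RG-name> --scope cluster
```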
+
### Verify AzureML extension deployment

1. Run the following CLI command to check AzureML extension details:
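The exact verification snippet isn't reproduced in this change set; a sketch of the kind of command it refers to, using the standard `az k8s-extension show` parameters, looks like this (use `managedClusters` for AKS):

```azurecli
az k8s-extension show --name <extension-name> \
  --cluster-type connectedClusters \
  --cluster-name <your-cluster-name> \
  --resource-group <your-RG-name>
```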
Update, list, show and delete an AzureML extension.
-## Review AzureML extension configuration settings
+### Review AzureML extension configuration settings
For AzureML extension deployment configurations, use ```--config``` or ```--config-protected``` to specify a list of ```key=value``` pairs. Following is the list of configuration settings available to be used for different AzureML extension deployment scenarios.

 |Configuration Setting Key Name |Description |Training |Inference |Training and Inference |
 |--|--|--|--|--|
- |```enableTraining``` |```True``` or ```False```, default ```False```. **Must** be set to ```True``` for AzureML extension deployment with Machine Learning model training support. | **&check;**| N/A | **&check;** |
+ |```enableTraining``` |```True``` or ```False```, default ```False```. **Must** be set to ```True``` for AzureML extension deployment with Machine Learning model training and batch scoring support. | **&check;**| N/A | **&check;** |
| ```enableInference``` |```True``` or ```False```, default ```False```. **Must** be set to ```True``` for AzureML extension deployment with Machine Learning inference support. |N/A| **&check;** | **&check;** |
- | ```allowInsecureConnections``` |```True``` or ```False```, default `False`. **Must** be set to ```True``` to use inference HTTP endpoints for development or test purposes. |N/A| Optional | Optional |
+ | ```allowInsecureConnections``` |```True``` or ```False```, default `False`. **Can** be set to ```True``` to use inference HTTP endpoints for development or test purposes. |N/A| Optional | Optional |
| ```inferenceRouterServiceType``` |```loadBalancer```, ```nodePort``` or ```clusterIP```. **Required** if ```enableInference=True```. | N/A| **&check;** | **&check;** | | ```internalLoadBalancerProvider``` | This config is only applicable for Azure Kubernetes Service(AKS) cluster now. Set to ```azure``` to allow the inference router using internal load balancer. | N/A| Optional | Optional | |```sslSecret```| The name of Kubernetes secret in `azureml` namespace to store `cert.pem` (PEM-encoded SSL cert) and `key.pem` (PEM-encoded SSL key), required for inference HTTPS endpoint support, when ``allowInsecureConnections`` is set to False. You can find a sample YAML definition of sslSecret [here](./reference-kubernetes.md#sample-yaml-definition-of-kubernetes-secret-for-tlsssl). Use this config or combination of `sslCertPemFile` and `sslKeyPemFile` protected config settings. |N/A| Optional | Optional | |```sslCname``` |A SSL CName used by inference HTTPS endpoint. **Required** if ```allowInsecureConnections=True``` | N/A | Optional | Optional|
- | ```inferenceRouterHA``` |```True``` or ```False```, default ```True```. By default, AzureML extension will deploy 3 ingress controller replicas for high availability, which requires at least 3 workers in a cluster. Set to ```False``` if your cluster has fewer than 3 workers, in this case only one ingress controller is deployed. | N/A| Optional | Optional |
+ | ```inferenceRouterHA``` |```True``` or ```False```, default ```True```. By default, AzureML extension will deploy 3 inference router replicas for high availability, which requires at least 3 worker nodes in a cluster. Set to ```False``` if your cluster has fewer than 3 worker nodes, in this case only one inference router service is deployed. | N/A| Optional | Optional |
|```nodeSelector``` | By default, the deployed kubernetes resources are randomly deployed to 1 or more nodes of the cluster, and daemonset resources are deployed to ALL nodes. If you want to restrict the extension deployment to specific nodes with label `key1=value1` and `key2=value2`, use `nodeSelector.key1=value1`, `nodeSelector.key2=value2` correspondingly. | Optional| Optional | Optional | |```installNvidiaDevicePlugin``` | ```True``` or ```False```, default ```False```. [NVIDIA Device Plugin](https://github.com/NVIDIA/k8s-device-plugin#nvidia-device-plugin-for-kubernetes) is required for ML workloads on NVIDIA GPU hardware. By default, AzureML extension deployment will not install NVIDIA Device Plugin regardless Kubernetes cluster has GPU hardware or not. User can specify this setting to ```True```, to install it, but make sure to fulfill [Prerequisites](https://github.com/NVIDIA/k8s-device-plugin#prerequisites). | Optional |Optional |Optional | |```installPromOp```|```True``` or ```False```, default ```True```. AzureML extension needs prometheus operator to manage prometheus. Set to ```False``` to reuse existing prometheus operator. Compatible [kube-prometheus-stack](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/README.md) helm chart versions are from 9.3.4 to 30.0.1.| Optional| Optional | Optional |
For AzureML extension deployment configurations, use ```--config``` or ```--conf
|Configuration Protected Setting Key Name |Description |Training |Inference |Training and Inference |
|--|--|--|--|--|
- | ```sslCertPemFile```, ```sslKeyPemFile``` |Path to SSL certificate and key file (PEM-encoded), required for AzureML extension deployment with inference HTTPS endpoint support, when ``allowInsecureConnections`` is set to False. | N/A| Optional | Optional |
+ | ```sslCertPemFile```, ```sslKeyPemFile``` |Path to SSL certificate and key file (PEM-encoded), required for AzureML extension deployment with inference HTTPS endpoint support, when ``allowInsecureConnections`` is set to False. **Note:** PEM files protected with a pass phrase are not supported. | N/A| Optional | Optional |
-## Attach a Kubernetes cluster to an AzureML workspace
+## Attach a Kubernetes cluster to an Azure ML workspace
-Attach an AKS or Arc-enabled Kubernetes cluster with AzureML extension installed to AzureML workspace. The same cluster can be attached and shared by multiple AzureMl Workspaces across region.
+Once the AzureML extension is deployed on an AKS or Arc Kubernetes cluster, you can attach the Kubernetes cluster to an Azure ML workspace and create compute targets for ML professionals to use. Each attach operation creates a compute target in an Azure ML workspace, and multiple attach operations on the same cluster can create multiple compute targets in a single Azure ML workspace or across multiple Azure ML workspaces.
### Prerequisite
Azure Relay resource is created during the extension deployment under the same R
The following commands show how to attach an AKS and Azure Arc-enabled Kubernetes cluster, and use it as a compute target with managed identity enabled.
-**AKS**
+**AKS cluster**
```azurecli
az ml compute attach --resource-group <resource-group-name> --workspace-name <workspace-name> --type Kubernetes --name k8s-compute --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ContainerService/managedclusters/<cluster-name>" --identity-type SystemAssigned --namespace <Kubernetes namespace to run AzureML workloads> --no-wait
```
-**Azure Arc enabled Kubernetes**
+**Arc Kubernetes cluster**
```azurecli az ml compute attach --resource-group <resource-group-name> --workspace-name <workspace-name> --type Kubernetes --name amlarc-compute --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Kubernetes/connectedClusters/<cluster-name>" --user-assigned-identities "subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>" --no-wait
except ComputeTargetException:
``` ### [Studio](#tab/studio)
-Attaching an Azure Arc-enabled Kubernetes cluster makes it available to your workspace for training.
+Attaching a Kubernetes cluster makes it available to your workspace for training or inferencing.
1. Navigate to [Azure Machine Learning studio](https://ml.azure.com). 1. Under **Manage**, select **Compute**.
Attaching an Azure Arc-enabled Kubernetes cluster makes it available to your wor
:::image type="content" source="media/how-to-attach-arc-kubernetes/attach-kubernetes-cluster.png" alt-text="Screenshot of settings for Kubernetes cluster to make available in your workspace.":::
-1. Enter a compute name and select your Azure Arc-enabled Kubernetes cluster from the dropdown.
+1. Enter a compute name and select your Kubernetes cluster from the dropdown.
* **(Optional)** Enter Kubernetes namespace, which defaults to `default`. All machine learning workloads will be sent to the specified Kubernetes namespace in the cluster.
Attaching an Azure Arc-enabled Kubernetes cluster makes it available to your wor
+## Create and use instance types for efficient compute resource utilization
+
+### What are instance types?
+
+Instance types are an Azure Machine Learning concept that allows targeting certain types of
+compute nodes for training and inference workloads. For an Azure VM, an example for an
+instance type is `STANDARD_D2_V3`.
+
+In Kubernetes clusters, instance types are represented in a custom resource definition (CRD) that is installed with the AzureML extension. Instance types are represented by two elements in AzureML extension:
+[nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)
+and [resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
+In short, a `nodeSelector` lets us specify which node a pod should run on. The node must have a
+corresponding label. In the `resources` section, we can set the compute resources (CPU, memory and
+NVIDIA GPU) for the pod.
+
+### Default instance type
+
+By default, a `defaultinstancetype` with the following definition is created when you attach a Kubernetes cluster to an AzureML workspace:
+- No `nodeSelector` is applied, meaning the pod can get scheduled on any node.
+- The workload's pods are assigned default resources with 0.6 CPU cores, 1536Mi memory, and 0 GPUs:
+```yaml
+resources:
+ requests:
+ cpu: "0.6"
+ memory: "1536Mi"
+ limits:
+ cpu: "0.6"
+ memory: "1536Mi"
+ nvidia.com/gpu: null
+```
+
+> [!NOTE]
+> - The default instance type purposefully uses little resources. To ensure all ML workloads
+run with appropriate resources, for example GPU resource, it is highly recommended to create custom instance types.
+> - `defaultinstancetype` will not appear as an InstanceType custom resource in the cluster when running the command ```kubectl get instancetype```, but it will appear in all clients (UI, CLI, SDK).
+> - `defaultinstancetype` can be overridden with a custom instance type definition having the same name as `defaultinstancetype` (see [Create custom instance types](#create-custom-instance-types) section)
+
+### Create custom instance types
+
+To create a new instance type, create a new custom resource for the instance type CRD. For example:
+
+```bash
+kubectl apply -f my_instance_type.yaml
+```
+
+With `my_instance_type.yaml`:
+```yaml
+apiVersion: amlarc.azureml.com/v1alpha1
+kind: InstanceType
+metadata:
+ name: myinstancetypename
+spec:
+ nodeSelector:
+ mylabel: mylabelvalue
+ resources:
+ limits:
+ cpu: "1"
+ nvidia.com/gpu: 1
+ memory: "2Gi"
+ requests:
+ cpu: "700m"
+ memory: "1500Mi"
+```
+
+The preceding definition creates an instance type with the following behavior:
+- Pods will be scheduled only on nodes with label `mylabel: mylabelvalue`.
+- Pods will be assigned resource requests of `700m` CPU and `1500Mi` memory.
+- Pods will be assigned resource limits of `1` CPU, `2Gi` memory and `1` NVIDIA GPU.
+
+> [!NOTE]
+> - NVIDIA GPU resources are only specified in the `limits` section as integer values. For more information,
+ see the Kubernetes [documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins).
+> - CPU and memory resources are string values.
+> - CPU can be specified in millicores, for example `100m`, or in full numbers, for example `"1"`
+ is equivalent to `1000m`.
+> - Memory can be specified as a full number + suffix, for example `1024Mi` for 1024 MiB.
+
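To confirm that a custom instance type has been registered, you can query the CRD directly. This is a sketch; `myinstancetypename` is the example name used above, and depending on the extension version you may need to add a namespace flag:

```bash
# List the InstanceType custom resources created so far (defaultinstancetype will not appear here).
kubectl get instancetype

# Inspect one instance type in detail.
kubectl get instancetype myinstancetypename -o yaml
```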
+It's also possible to create multiple instance types at once:
+
+```bash
+kubectl apply -f my_instance_type_list.yaml
+```
+
+With `my_instance_type_list.yaml`:
+```yaml
+apiVersion: amlarc.azureml.com/v1alpha1
+kind: InstanceTypeList
+items:
+ - metadata:
+ name: cpusmall
+ spec:
+ resources:
+ requests:
+ cpu: "100m"
+ memory: "100Mi"
+ limits:
+ cpu: "1"
+ nvidia.com/gpu: 0
+ memory: "1Gi"
+
+ - metadata:
+ name: defaultinstancetype
+ spec:
+ resources:
+ requests:
+ cpu: "1"
+ memory: "1Gi"
+ limits:
+ cpu: "1"
+ nvidia.com/gpu: 0
+ memory: "1Gi"
+```
+
+The above example creates two instance types: `cpusmall` and `defaultinstancetype`. This `defaultinstancetype` definition will override the `defaultinstancetype` definition created when Kubernetes cluster was attached to AzureML workspace.
+
+If a training or inference workload is submitted without an instance type, it uses the `defaultinstancetype`. To specify a default instance type for a Kubernetes cluster, create an instance type with name `defaultinstancetype`. It will automatically be recognized as the default.
++
+### Select instance type to submit training job
+
+To select an instance type for a training job using CLI (V2), specify its name as part of the
+`resources` properties section in job YAML. For example:
+```yaml
+command: python -c "print('Hello world!')"
+environment:
+ image: library/python:latest
+compute: azureml:<compute_target_name>
+resources:
+ instance_type: <instance_type_name>
+```
+
+In the above example, replace `<compute_target_name>` with the name of your Kubernetes compute
+target and `<instance_type_name>` with the name of the instance type you wish to select. If there's no `instance_type` property specified, the system will use `defaultinstancetype` to submit job.
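As a usage illustration, assuming the job YAML above is saved as `job.yml` (the file name is illustrative), the job can then be submitted with the standard CLI (v2) command:

```azurecli
az ml job create --file job.yml \
  --resource-group <resource-group-name> \
  --workspace-name <workspace-name>
```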
+
+### Select instance type to deploy model
+
+To select an instance type for a model deployment using CLI (V2), specify its name for `instance_type` property in deployment YAML. For example:
+
+```yaml
+name: blue
+app_insights_enabled: true
+endpoint_name: <endpoint name>
+model:
+ path: ./model/sklearn_mnist_model.pkl
+code_configuration:
+ code: ./script/
+ scoring_script: score.py
+instance_type: <instance type name>
+environment:
+ conda_file: file:./model/conda.yml
+ image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1
+```
+
+In the above example, replace `<instance_type_name>` with the name of the instance type you wish to select. If there's no `instance_type` property specified, the system will use `defaultinstancetype` to deploy model.
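As an illustration, assuming the deployment YAML above is saved as `blue-deployment.yml` and the Kubernetes online endpoint named in `endpoint_name` already exists (both names are hypothetical), the deployment could be created with the CLI (v2):

```azurecli
az ml online-deployment create --file blue-deployment.yml \
  --resource-group <resource-group-name> \
  --workspace-name <workspace-name> \
  --all-traffic
```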
+
+## Recommended best practices
+
+**Separation of responsibilities between the IT-operations team and the data-science team.** Managing your own Kubernetes compute and infrastructure for ML workloads requires Kubernetes admin privileges and expertise, and it is best done by the IT-operations team so the data-science team can focus on ML models for organizational efficiency.
+
+**Create and manage instance types for different ML workload scenarios.** Each ML workload uses different amounts of compute resources such as CPU/GPU and memory. Azure ML implements instance types as a Kubernetes custom resource definition (CRD) with properties of nodeSelector and resource request/limit. With a carefully curated list of instance types, IT-operations can target ML workloads to specific nodes and manage compute resource utilization efficiently.
+
+**Multiple Azure ML workspaces share the same Kubernetes cluster.** You can attach a Kubernetes cluster multiple times to the same Azure ML workspace or to different Azure ML workspaces, creating multiple compute targets in one workspace or across multiple workspaces. Since many customers organize data science projects around Azure ML workspaces, multiple data science projects can now share the same Kubernetes cluster. This significantly reduces ML infrastructure management overhead and saves IT cost.
+
+**Team/project workload isolation using Kubernetes namespace.** When you attach a Kubernetes cluster to an Azure ML workspace, you can specify a Kubernetes namespace for the compute target, and all workloads run by the compute target will be placed under the specified namespace, as the sketch below shows.
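A sketch of what namespace isolation looks like in practice, reusing the attach command shown earlier; the compute names and namespaces are hypothetical, and the Kubernetes namespaces are assumed to already exist in the cluster:

```azurecli
# Team A and Team B get separate compute targets on the same AKS cluster, isolated by namespace.
az ml compute attach --resource-group <resource-group-name> --workspace-name <workspace-name> \
  --type Kubernetes --name k8s-team-a --namespace team-a \
  --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ContainerService/managedclusters/<cluster-name>" \
  --identity-type SystemAssigned --no-wait

az ml compute attach --resource-group <resource-group-name> --workspace-name <workspace-name> \
  --type Kubernetes --name k8s-team-b --namespace team-b \
  --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ContainerService/managedclusters/<cluster-name>" \
  --identity-type SystemAssigned --no-wait
```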
## Next steps

-- [Create and use instance types for efficient compute resource usage](./reference-kubernetes.md#create-and-use-instance-types-for-efficient-compute-resource-usage)
- [Train models with CLI v2](how-to-train-cli.md)
- [Train models with Python SDK](how-to-set-up-training-targets.md)
- [Deploy model with an online endpoint (CLI v2)](./how-to-deploy-managed-online-endpoints.md)
- [Use batch endpoint for batch scoring (CLI v2)](./how-to-use-batch-endpoint.md)
+### Additional resources
+
+- [Kubernetes version and region availability](./reference-kubernetes.md#supported-kubernetes-version-and-region)
+- [Work with custom data storage](./reference-kubernetes.md#azureml-jobs-connect-with-custom-data-storage)
++
### Examples

All AzureML examples can be found in [https://github.com/Azure/azureml-examples.git](https://github.com/Azure/azureml-examples).
machine-learning How To Configure Databricks Automl Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-databricks-automl-environment.md
Use these settings:
| Setting |Applies to| Value |
|---|---|---|
| Cluster Name |always| yourclustername |
-| Databricks Runtime Version |always| Runtime 7.3 LTS - Not ML|
+| Databricks Runtime Version |always| 9.1 LTS|
| Python version |always| 3 |
| Worker Type <br>(determines max # of concurrent iterations) |Automated ML<br>only| Memory optimized VM preferred |
| Workers |always| 2 or higher |
machine-learning How To Create Attach Compute Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-studio.md
Use the [steps above](#portal-create) to attach a compute. Then fill out the fo
* Azure Databricks (for use in machine learning pipelines) * Azure Data Lake Analytics (for use in machine learning pipelines) * Azure HDInsight
- * [Kubernetes](./how-to-attach-kubernetes-anywhere.md#attach-a-kubernetes-cluster-to-an-azureml-workspace)
+ * [Kubernetes](./how-to-attach-kubernetes-anywhere.md#attach-a-kubernetes-cluster-to-an-azure-ml-workspace)
1. Fill out the form and provide values for the required properties.
machine-learning How To Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-custom-dns.md
Access to a given Azure Machine Learning workspace via Private Link is done by c
- ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.azureml.ms```
- ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.cert.api.azureml.ms```
- ```<compute instance name>.<region the workspace was created in>.instances.azureml.ms```
-- ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.notebooks.azure.net```
+- ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.<region>.notebooks.azure.net```
- ```*.<per-workspace globally-unique identifier>.inference.<region the workspace was created in>.api.azureml.ms``` - Used by managed online endpoints

**Azure China 21Vianet regions**:
The following list contains the fully qualified domain names (FQDNs) used by you
* `<workspace-GUID>.workspace.<region>.cert.api.azureml.ms` * `<workspace-GUID>.workspace.<region>.api.azureml.ms`
-* `ml-<workspace-name, truncated>-<region>-<workspace-guid>.notebooks.azure.net`
+* `ml-<workspace-name, truncated>-<region>-<workspace-guid>.<region>.notebooks.azure.net`
> [!NOTE] > The workspace name for this FQDN may be truncated. Truncation is done to keep `ml-<workspace-name, truncated>-<region>-<workspace-guid>` at 63 characters or less.
machine-learning How To Deploy Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoints.md
For more information about the YAML schema, see the [online endpoint YAML refere
> [!NOTE] > To use Kubernetes instead of managed endpoints as a compute target:
-> 1. Create and attach your Kubernetes cluster as a compute target to your Azure Machine Learning workspace by using [Azure Machine Learning studio](how-to-attach-kubernetes-anywhere.md?&tabs=studio#attach-a-kubernetes-cluster-to-an-azureml-workspace).
+> 1. Create and attach your Kubernetes cluster as a compute target to your Azure Machine Learning workspace by using [Azure Machine Learning studio](how-to-attach-kubernetes-anywhere.md?&tabs=studio#attach-a-kubernetes-cluster-to-an-azure-ml-workspace).
> 1. Use the [endpoint YAML](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/managed/sample/endpoint.yml) to target Kubernetes instead of the managed endpoint YAML. You'll need to edit the YAML to change the value of `target` to the name of your registered compute target. You can use this [deployment.yaml](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/managed/sample/blue-deployment.yml) that has additional properties applicable to Kubernetes deployment. > > All the commands that are used in this article (except the optional SLA monitoring and Azure Log Analytics integration) can be used either with managed endpoints or with Kubernetes endpoints.
machine-learning How To Migrate From V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-migrate-from-v1.md
Typically, converting to v2 will involve refactoring your code to use MLflow for
We recommend v2 for production model deployment. Managed endpoints abstract the IT overhead and provide a performant solution for deploying and scoring models, both for online (near real-time) and batch (massively parallel) scenarios.
-Kubernetes deployments are supported in v2 through AKS or Azure Arc, enabling Azure cloud and on-premise deployments managed by your organization.
+Kubernetes deployments are supported in v2 through AKS or Azure Arc, enabling Azure cloud and on-premises deployments managed by your organization.
### Machine learning operations (MLOps)
machine-learning Reference Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-kubernetes.md
Title: Reference for configuring Kubernetes cluster for Azure Machine Learning (Preview)
+ Title: Reference for configuring Kubernetes cluster for Azure Machine Learning
description: Reference for configuring Kubernetes cluster for Azure Machine Learning.
Last updated 06/06/2022
-# Reference for configuring Kubernetes cluster for Azure Machine Learning (Preview)
+# Reference for configuring Kubernetes cluster for Azure Machine Learning
This article contains reference information that may be useful when [configuring Kubernetes with Azure Machine Learning](./how-to-attach-kubernetes-anywhere.md).
Upon AzureML extension deployment completes, it will create following resources
> [!NOTE] > * **{EXTENSION-NAME}:** is the extension name specified with ```az k8s-extension create --name``` CLI command. -
-## Create and use instance types for efficient compute resource usage
-
-### What are instance types?
-
-Instance types are an Azure Machine Learning concept that allows targeting certain types of
-compute nodes for training and inference workloads. For an Azure VM, an example for an
-instance type is `STANDARD_D2_V3`.
-
-In Kubernetes clusters, instance types are represented in a custom resource definition (CRD) that is installed with the AzureML extension. Instance types are represented by two elements in AzureML extension:
-[nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)
-and [resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
-In short, a `nodeSelector` lets us specify which node a pod should run on. The node must have a
-corresponding label. In the `resources` section, we can set the compute resources (CPU, memory and
-NVIDIA GPU) for the pod.
-
-### Default instance type
-
-By default, a `defaultinstancetype` with following definition is created when you attach Kuberenetes cluster to AzureML workspace:
-- No `nodeSelector` is applied, meaning the pod can get scheduled on any node.-- The workload's pods are assigned default resources with 0.6 cpu cores, 1536Mi memory and 0 GPU:
-```yaml
-resources:
- requests:
- cpu: "0.6"
- memory: "1536Mi"
- limits:
- cpu: "0.6"
- memory: "1536Mi"
- nvidia.com/gpu: null
-```
-
-> [!NOTE]
-> - The default instance type purposefully uses little resources. To ensure all ML workloads
-run with appropriate resources, for example GPU resource, it is highly recommended to create custom instance types.
-> - `defaultinstancetype` will not appear as an InstanceType custom resource in the cluster when running the command ```kubectl get instancetype```, but it will appear in all clients (UI, CLI, SDK).
-> - `defaultinstancetype` can be overridden with a custom instance type definition having the same name as `defaultinstancetype` (see [Create custom instance types](#create-custom-instance-types) section)
-
-### Create custom instance types
-
-To create a new instance type, create a new custom resource for the instance type CRD. For example:
-
-```bash
-kubectl apply -f my_instance_type.yaml
-```
-
-With `my_instance_type.yaml`:
-```yaml
-apiVersion: amlarc.azureml.com/v1alpha1
-kind: InstanceType
-metadata:
- name: myinstancetypename
-spec:
- nodeSelector:
- mylabel: mylabelvalue
- resources:
- limits:
- cpu: "1"
- nvidia.com/gpu: 1
- memory: "2Gi"
- requests:
- cpu: "700m"
- memory: "1500Mi"
-```
-
-The following steps will create an instance type with the labeled behavior:
-- Pods will be scheduled only on nodes with label `mylabel: mylabelvalue`.-- Pods will be assigned resource requests of `700m` CPU and `1500Mi` memory.-- Pods will be assigned resource limits of `1` CPU, `2Gi` memory and `1` NVIDIA GPU.-
-> [!NOTE]
-> - NVIDIA GPU resources are only specified in the `limits` section as integer values. For more information,
- see the Kubernetes [documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins).
-> - CPU and memory resources are string values.
-> - CPU can be specified in millicores, for example `100m`, or in full numbers, for example `"1"`
- is equivalent to `1000m`.
-> - Memory can be specified as a full number + suffix, for example `1024Mi` for 1024 MiB.
-
-It's also possible to create multiple instance types at once:
-
-```bash
-kubectl apply -f my_instance_type_list.yaml
-```
-
-With `my_instance_type_list.yaml`:
-```yaml
-apiVersion: amlarc.azureml.com/v1alpha1
-kind: InstanceTypeList
-items:
- - metadata:
- name: cpusmall
- spec:
- resources:
- requests:
- cpu: "100m"
- memory: "100Mi"
- limits:
- cpu: "1"
- nvidia.com/gpu: 0
- memory: "1Gi"
-
- - metadata:
- name: defaultinstancetype
- spec:
- resources:
- requests:
- cpu: "1"
- memory: "1Gi"
- limits:
- cpu: "1"
- nvidia.com/gpu: 0
- memory: "1Gi"
-```
-
-The above example creates two instance types: `cpusmall` and `defaultinstancetype`. This `defaultinstancetype` definition will override the `defaultinstancetype` definition created when Kubernetes cluster was attached to AzureML workspace.
-
-If a training or inference workload is submitted without an instance type, it uses the default
-instance type. To specify a default instance type for a Kubernetes cluster, create an instance
-type with name `defaultinstancetype`. It will automatically be recognized as the default.
-
-### Select instance type to submit training job
-
-To select an instance type for a training job using CLI (V2), specify its name as part of the
-`resources` properties section in job YAML. For example:
-```yaml
-command: python -c "print('Hello world!')"
-environment:
- image: library/python:latest
-compute: azureml:<compute_target_name>
-resources:
- instance_type: <instance_type_name>
-```
-
-In the above example, replace `<compute_target_name>` with the name of your Kubernetes compute
-target and `<instance_type_name>` with the name of the instance type you wish to select. If there's no `instance_type` property specified, the system will use `defaultinstancetype` to submit job.
-
-### Select instance type to deploy model
-
-To select an instance type for a model deployment using CLI (V2), specify its name for `instance_type` property in deployment YAML. For example:
-
-```yaml
-name: blue
-app_insights_enabled: true
-endpoint_name: <endpoint name>
-model:
- path: ./model/sklearn_mnist_model.pkl
-code_configuration:
- code: ./script/
- scoring_script: score.py
-instance_type: <instance type name>
-environment:
- conda_file: file:./model/conda.yml
- image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1
-```
-
-In the above example, replace `<instance_type_name>` with the name of the instance type you wish to select. If there's no `instance_type` property specified, the system will use `defaultinstancetype` to deploy model.
-
-## AzureML jobs connect with on-premises data storage
+## AzureML jobs connect with custom data storage
[Persistent Volume (PV) and Persistent Volume Claim (PVC)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) are Kubernetes concepts that allow users to provide and consume various storage resources.
machine-learning Reference Yaml Deployment Kubernetes Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-kubernetes-online.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `code_configuration.scoring_script` | string | Relative path to the scoring file in the source code directory. | | | | `environment_variables` | object | Dictionary of environment variable key-value pairs to set in the deployment container. You can access these environment variables from your scoring scripts. | | | | `environment` | string or object | **Required.** The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. <br><br> To define an environment inline, follow the [Environment schema](reference-yaml-environment.md#yaml-syntax). <br><br> As a best practice for production scenarios, you should create the environment separately and reference it here. | | |
-| `instance_type` | string | The instance type used to place the inference workload. If omitted, the inference workload will be placed on the default instance type of the Kubernetes cluster specified in the endpoint's `compute` field. If specified, the inference workload will be placed on that selected instance type. <br><br> Note that the set of instance types for a Kubernetes cluster is configured via the Kubernetes cluster custom resource definition (CRD), hence they are not part of the Azure ML YAML schema for attaching Kubernetes compute.For more information, see [Create and select Kubernetes instance types](how-to-attach-kubernetes-anywhere.md). | | |
+| `instance_type` | string | The instance type used to place the inference workload. If omitted, the inference workload will be placed on the default instance type of the Kubernetes cluster specified in the endpoint's `compute` field. If specified, the inference workload will be placed on that selected instance type. <br><br> Note that the set of instance types for a Kubernetes cluster is configured via the Kubernetes cluster custom resource definition (CRD), hence they are not part of the Azure ML YAML schema for attaching Kubernetes compute. For more information, see [Create and select Kubernetes instance types](how-to-attach-kubernetes-anywhere.md#create-and-use-instance-types-for-efficient-compute-resource-utilization). | | |
| `instance_count` | integer | The number of instances to use for the deployment. Specify the value based on the workload you expect. This field is only required if you are using the `default` scale type (`scale_settings.type: default`). <br><br> `instance_count` can be updated after deployment creation using `az ml online-deployment update` command. | | | | `app_insights_enabled` | boolean | Whether to enable integration with the Azure Application Insights instance associated with your workspace. | | `false` | | `scale_settings` | object | The scale settings for the deployment. The two types of scale settings supported are the `default` scale type and the `target_utilization` scale type. <br><br> With the `default` scale type (`scale_settings.type: default`), you can manually scale the instance count up and down after deployment creation by updating the `instance_count` property. <br><br> To configure the `target_utilization` scale type (`scale_settings.type: target_utilization`), see [TargetUtilizationScaleSettings](#targetutilizationscalesettings) for the set of configurable properties. | | |
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-data.md
Azure Machine Learning makes it easy to connect to your data in the cloud. It pr
## Data workflow
-When you're ready to use the data in your cloud-based storage solution, we recommend the following data delivery workflow. This workflow assumes you have an [Azure storage account](/storage/common/storage-account-create.md?tabs=azure-portal) and data in a cloud-based storage service in Azure.
+When you're ready to use the data in your cloud-based storage solution, we recommend the following data delivery workflow. This workflow assumes you have an [Azure storage account](/azure/storage/common/storage-account-create?tabs=azure-portal) and data in a cloud-based storage service in Azure.
1. Create an [Azure Machine Learning datastore](#connect-to-storage-with-datastores) to store connection information to your Azure storage.
managed-instance-apache-cassandra Configure Hybrid Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/configure-hybrid-cluster.md
Last updated 11/02/2021-+ ms.devlang: azurecli # Quickstart: Configure a hybrid cluster with Azure Managed Instance for Apache Cassandra Azure Managed Instance for Apache Cassandra provides automated deployment and scaling operations for managed open-source Apache Cassandra datacenters. This service helps you accelerate hybrid scenarios and reduce ongoing maintenance.
-This quickstart demonstrates how to use the Azure CLI commands to configure a hybrid cluster. If you have existing datacenters in an on-premise or self-hosted environment, you can use Azure Managed Instance for Apache Cassandra to add other datacenters to that cluster and maintain them.
+This quickstart demonstrates how to use the Azure CLI commands to configure a hybrid cluster. If you have existing datacenters in an on-premises or self-hosted environment, you can use Azure Managed Instance for Apache Cassandra to add other datacenters to that cluster and maintain them.
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)] * This article requires the Azure CLI version 2.30.0 or higher. If you are using Azure Cloud Shell, the latest version is already installed.
-* [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) with connectivity to your self-hosted or on-premise environment. For more information on connecting on premises environments to Azure, see the [Connect an on-premises network to Azure](/azure/architecture/reference-architectures/hybrid-networking/) article.
+* [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) with connectivity to your self-hosted or on-premises environment. For more information on connecting on-premises environments to Azure, see the [Connect an on-premises network to Azure](/azure/architecture/reference-architectures/hybrid-networking/) article.
## <a id="create-account"></a>Configure a hybrid cluster
This quickstart demonstrates how to use the Azure CLI commands to configure a hy
<!-- ![image](./media/configure-hybrid-cluster/subnet.png) --> > [!NOTE]
- > The Deployment of a Azure Managed Instance for Apache Cassandra requires internet access. Deployment fails in environments where internet access is restricted. Make sure you aren't blocking access within your VNet to the following vital Azure services that are necessary for Managed Cassandra to work properly. You can also find an extensive list of IP address and port dependencies [here](network-rules.md).
+ > The deployment of an Azure Managed Instance for Apache Cassandra requires internet access. Deployment fails in environments where internet access is restricted. Make sure you aren't blocking access within your VNet to the following vital Azure services that are necessary for Managed Cassandra to work properly. You can also find an extensive list of IP address and port dependencies [here](network-rules.md).
> - Azure Storage > - Azure KeyVault > - Azure Virtual Machine Scale Sets
This quickstart demonstrates how to use the Azure CLI commands to configure a hy
> [!NOTE] > The `assignee` and `role` values in the previous command are fixed service principle and role identifiers respectively.
-1. Next, we will configure resources for our hybrid cluster. Since you already have a cluster, the cluster name here will only be a logical resource to identify the name of your existing cluster. Make sure to use the name of your existing cluster when defining `clusterName` and `clusterNameOverride` variables in the following script.
-
+1. Next, we will configure resources for our hybrid cluster. Since you already have a cluster, the cluster name here will only be a logical resource to identify the name of your existing cluster. Make sure to use the name of your existing cluster when defining `clusterName` and `clusterNameOverride` variables in the following script.
+ You also need, at minimum, the seed nodes from your existing datacenter, and the gossip certificates required for node-to-node encryption. Azure Managed Instance for Apache Cassandra requires node-to-node encryption for communication between datacenters. If you do not have node-to-node encryption implemented in your existing cluster, you would need to implement it - see documentation [here](https://docs.datastax.com/en/cassandra-oss/3.x/cassandra/configuration/secureSSLNodeToNode.html). You should supply the path to the location of the certificates. Each certificate should be in PEM format, e.g. `--BEGIN CERTIFICATE--\n...PEM format 1...\n--END CERTIFICATE--`. In general, there are two ways of implementing certificates: 1. Self signed certs. This means a private and public (no CA) certificate for each node - in this case we need all public certificates.
This quickstart demonstrates how to use the Azure CLI commands to configure a hy
clusterNameOverride='cassandra-hybrid-cluster-illegal-name' location='eastus2' delegatedManagementSubnetId='/subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/<subnetName>'
-
+ # You can override the cluster name if the original name is not legal for an Azure resource: # overrideClusterName='ClusterNameIllegalForAzureResource' # the default cassandra version will be v3.11
-
+ az managed-cassandra cluster create \ --cluster-name $clusterName \ --resource-group $resourceGroupName \
This quickstart demonstrates how to use the Azure CLI commands to configure a hy
--external-seed-nodes 10.52.221.2 10.52.221.3 10.52.221.4 \ --external-gossip-certificates /usr/csuser/clouddrive/rootCa.pem /usr/csuser/clouddrive/gossipKeyStore.crt_signed # optional - add your existing datacenter's client-to-node certificates (if implemented):
- # --client-certificates /usr/csuser/clouddrive/rootCa.pem /usr/csuser/clouddrive/nodeKeyStore.crt_signed
+ # --client-certificates /usr/csuser/clouddrive/rootCa.pem /usr/csuser/clouddrive/nodeKeyStore.crt_signed
```
> [!NOTE]
- > If your cluster already has node-to-node and client-to-node encryption, you should know where your existing client and/or gossip SSL certificates are kept. If you are uncertain, you should be able to run `keytool -list -keystore <keystore-path> -rfc -storepass <password>` to print the certs.
+ > If your cluster already has node-to-node and client-to-node encryption, you should know where your existing client and/or gossip SSL certificates are kept. If you are uncertain, you should be able to run `keytool -list -keystore <keystore-path> -rfc -storepass <password>` to print the certs.
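If the certificates are currently held in a Java keystore, one way to produce the PEM files the service expects is to export them with `keytool`; the sketch below is illustrative only - the alias, keystore path, and output file name are placeholders you would replace with your own values.

```bash
# List the aliases in the keystore first, then export the one you need.
# The -rfc flag prints the certificate in PEM format.
keytool -list -keystore /path/to/keystore.jks -storepass <password>

keytool -exportcert \
  -alias gossip \
  -keystore /path/to/keystore.jks \
  -storepass <password> \
  -rfc \
  -file /usr/csuser/clouddrive/gossipCert.pem
```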
1. After the cluster resource is created, run the following command to get the cluster setup details:
```azurecli-interactive
resourceGroupName='MyResourceGroup'
clusterName='cassandra-hybrid-cluster'
-
+ az managed-cassandra cluster show \
+   --cluster-name $clusterName \
+   --resource-group $resourceGroupName \
This quickstart demonstrates how to use the Azure CLI commands to configure a hy
dataCenterLocation='eastus2'
virtualMachineSKU='Standard_D8s_v4'
noOfDisksPerNode=4
-
+ az managed-cassandra datacenter create \
+   --resource-group $resourceGroupName \
+   --cluster-name $clusterName \
+   --data-center-name $dataCenterName \
+   --data-center-location $dataCenterLocation \
+   --delegated-subnet-id $delegatedManagementSubnetId \
- --node-count 9
+ --node-count 9
--sku $virtualMachineSKU \
--disk-capacity $noOfDisksPerNode \
--availability-zone false
This quickstart demonstrates how to use the Azure CLI commands to configure a hy
> The value for `--sku` can be chosen from the following available SKUs:
>
> - Standard_E8s_v4
- > - Standard_E16s_v4
+ > - Standard_E16s_v4
> - Standard_E20s_v4
- > - Standard_E32s_v4
+ > - Standard_E32s_v4
> - Standard_DS13_v2
> - Standard_DS14_v2
> - Standard_D8s_v4
> - Standard_D16s_v4
- > - Standard_D32s_v4
- >
+ > - Standard_D32s_v4
+ >
> Note also that `--availability-zone` is set to `false`. To enable availability zones, set this to `true`. Availability zones increase the availability SLA of the service. For more details, review the full SLA details [here](https://azure.microsoft.com/support/legal/sla/managed-instance-apache-cassandra/v1_0/).

> [!WARNING]
- > Availability zones are not supported in all regions. Deployments will fail if you select a region where Availability zones are not supported. See [here](../availability-zones/az-overview.md#azure-regions-with-availability-zones) for supported regions. The successful deployment of availability zones is also subject to the availability of compute resources in all of the zones in the given region. Deployments may fail if the SKU you have selected, or capacity, is not available across all zones.
+ > Availability zones are not supported in all regions. Deployments will fail if you select a region where Availability zones are not supported. See [here](../availability-zones/az-overview.md#azure-regions-with-availability-zones) for supported regions. The successful deployment of availability zones is also subject to the availability of compute resources in all of the zones in the given region. Deployments may fail if the SKU you have selected, or capacity, is not available across all zones.
1. Now that the new datacenter is created, run the show datacenter command to view its details:
This quickstart demonstrates how to use the Azure CLI commands to configure a hy
resourceGroupName='MyResourceGroup'
clusterName='cassandra-hybrid-cluster'
dataCenterName='dc1'
-
+ az managed-cassandra datacenter show \
+   --resource-group $resourceGroupName \
+   --cluster-name $clusterName \
- --data-center-name $dataCenterName
+ --data-center-name $dataCenterName
```
-1. The previous command outputs the new datacenter's seed nodes:
+1. The previous command outputs the new datacenter's seed nodes:
:::image type="content" source="./media/configure-hybrid-cluster/show-datacenter.png" alt-text="Get datacenter details." lightbox="./media/configure-hybrid-cluster/show-datacenter.png" border="true"::: <!-- ![image](./media/configure-hybrid-cluster/show-datacenter.png) -->
This quickstart demonstrates how to use the Azure CLI commands to configure a hy
```
> [!NOTE]
- > If you want to add more datacenters, you can repeat the above steps, but you only need the seed nodes.
+ > If you want to add more datacenters, you can repeat the above steps, but you only need the seed nodes.
> [!IMPORTANT]
> If your existing Apache Cassandra cluster only has a single data center, and this is the first time a data center is being added, ensure that the `endpoint_snitch` parameter in `cassandra.yaml` is set to `GossipingPropertyFileSnitch`.
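As a quick sanity check before adding the new data center, you can confirm the snitch on each existing node. The sketch below assumes a typical package install where the configuration lives at `/etc/cassandra/cassandra.yaml`; adjust the path for your environment.

```bash
# Check the snitch configured on an existing node (path is an assumption for package installs)
grep -E '^endpoint_snitch' /etc/cassandra/cassandra.yaml
# Expected value after the change:
# endpoint_snitch: GossipingPropertyFileSnitch
```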
This quickstart demonstrates how to use the Azure CLI commands to configure a hy
```bash
ALTER KEYSPACE "system_auth" WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'on-premise-dc': 3, 'managed-instance-dc': 3}
```
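If you prefer to run the statement non-interactively, a minimal sketch using `cqlsh -e` might look like the following; the node IP address and password are placeholders, and the data center names are the ones used in the example above.

```bash
# Hypothetical one-liner: run the ALTER KEYSPACE statement against an existing node.
# Drop --ssl if client-to-node encryption isn't enabled on your existing cluster.
cqlsh <existing-node-ip> 9042 -u cassandra -p '<password>' --ssl \
  -e "ALTER KEYSPACE \"system_auth\" WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'on-premise-dc': 3, 'managed-instance-dc': 3};"
```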
-
+ > [!IMPORTANT]
- > If you are using hybrid cluster as a method of migrating historic data into the new Azure Managed Instance Cassandra data centers, ensure that you run `nodetool repair --full` on all the nodes in your existing cluster's data center. You should run this only after all of the above steps have been taken. This should ensure that all historical data is replicated to your new data centers in Azure Managed Instance for Apache Cassandra. If you have a very large amount of data in your existing cluster, it may be necessary to run the repairs at the keyspace or even table level - see [here](https://cassandra.apache.org/doc/latest/cassandra/operating/repair.html) for more details on running repairs in Cassandra. Prior to changing the replication settings, you should also make sure that any application code that connects to your existing Cassandra cluster is using LOCAL_QUORUM. You should leave it at this setting during the migration (it can be switched back afterwards if required).
+ > If you are using a hybrid cluster as a method of migrating historic data into the new Azure Managed Instance Cassandra data centers, ensure that you run `nodetool repair --full` on all the nodes in your existing cluster's data center. You should run this only after all of the above steps have been taken. This should ensure that all historical data is replicated to your new data centers in Azure Managed Instance for Apache Cassandra. If you have a very large amount of data in your existing cluster, it may be necessary to run the repairs at the keyspace or even table level - see [here](https://cassandra.apache.org/doc/latest/cassandra/operating/repair.html) for more details on running repairs in Cassandra. Prior to changing the replication settings, you should also make sure that any application code that connects to your existing Cassandra cluster is using LOCAL_QUORUM. You should leave it at this setting during the migration (it can be switched back afterwards if required).
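A minimal sketch of the repair commands, run on each node in the existing data center; `my_keyspace` and `my_table` are hypothetical names used only to show the keyspace- and table-scoped variants.

```bash
# Repair everything on this node (run on each node of the existing data center, one node at a time)
nodetool repair --full

# For very large clusters, scope the repair to a single keyspace, or to one table within it
nodetool repair --full my_keyspace
nodetool repair --full my_keyspace my_table
```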
## Troubleshooting
If you encounter an error when applying permissions to your Virtual Network using Azure CLI, such as *Cannot find user or service principal in graph database for 'e5007d2c-4b13-4a74-9b6a-605d99f03501'*, you can apply the same permission manually from the Azure portal. Learn how to do this [here](add-service-principal.md).
-> [!NOTE]
-> The Azure Cosmos DB role assignment is used for deployment purposes only. Azure Managed Instanced for Apache Cassandra has no backend dependencies on Azure Cosmos DB.
+> [!NOTE]
+> The Azure Cosmos DB role assignment is used for deployment purposes only. Azure Managed Instance for Apache Cassandra has no backend dependencies on Azure Cosmos DB.
## Clean up resources
managed-instance-apache-cassandra Create Cluster Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-cluster-cli.md
Last updated 11/02/2021-+ ms.devlang: azurecli
This quickstart demonstrates how to use the Azure CLI commands to create a clust
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)]
-* [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) with connectivity to your self-hosted or on-premise environment. For more information on connecting on premises environments to Azure, see the [Connect an on-premises network to Azure](/azure/architecture/reference-architectures/hybrid-networking/) article.
+* [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) with connectivity to your self-hosted or on-premises environment. For more information on connecting on-premises environments to Azure, see the [Connect an on-premises network to Azure](/azure/architecture/reference-architectures/hybrid-networking/) article.
* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
This quickstart demonstrates how to use the Azure CLI commands to create a clust
delegatedManagementSubnetId='/subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.Network/virtualNetworks/<VNet name>/subnets/<subnet name>'
initialCassandraAdminPassword='myPassword'
cassandraVersion='3.11' # set to 4.0 for a Cassandra 4.0 cluster
-
+ az managed-cassandra cluster create \
+   --cluster-name $clusterName \
+   --resource-group $resourceGroupName \
This quickstart demonstrates how to use the Azure CLI commands to create a clust
dataCenterLocation='eastus2'
virtualMachineSKU='Standard_D8s_v4'
noOfDisksPerNode=4
-
+ az managed-cassandra datacenter create \
+   --resource-group $resourceGroupName \
+   --cluster-name $clusterName \
This quickstart demonstrates how to use the Azure CLI commands to create a clust
> The value for `--sku` can be chosen from the following available SKUs:
>
> - Standard_E8s_v4
- > - Standard_E16s_v4
+ > - Standard_E16s_v4
> - Standard_E20s_v4
- > - Standard_E32s_v4
+ > - Standard_E32s_v4
> - Standard_DS13_v2
> - Standard_DS14_v2
> - Standard_D8s_v4
> - Standard_D16s_v4
- > - Standard_D32s_v4
- >
+ > - Standard_D32s_v4
+ >
> Note also that `--availability-zone` is set to `false`. To enable availability zones, set this to `true`. Availability zones increase the availability SLA of the service. For more details, review the full SLA details [here](https://azure.microsoft.com/support/legal/sla/managed-instance-apache-cassandra/v1_0/).

> [!WARNING]
- > Availability zones are not supported in all regions. Deployments will fail if you select a region where Availability zones are not supported. See [here](../availability-zones/az-overview.md#azure-regions-with-availability-zones) for supported regions. The successful deployment of availability zones is also subject to the availability of compute resources in all of the zones in the given region. Deployments may fail if the SKU you have selected, or capacity, is not available across all zones.
+ > Availability zones are not supported in all regions. Deployments will fail if you select a region where Availability zones are not supported. See [here](../availability-zones/az-overview.md#azure-regions-with-availability-zones) for supported regions. The successful deployment of availability zones is also subject to the availability of compute resources in all of the zones in the given region. Deployments may fail if the SKU you have selected, or capacity, is not available across all zones.
1. Once the datacenter is created, if you want to scale up or scale down the nodes in the datacenter, run the [az managed-cassandra datacenter update](/cli/azure/managed-cassandra/datacenter#az-managed-cassandra-datacenter-update) command. Change the value of the `node-count` parameter to the desired value:
This quickstart demonstrates how to use the Azure CLI commands to create a clust
clusterName='<Cluster Name>'
dataCenterName='dc1'
dataCenterLocation='eastus2'
-
+ az managed-cassandra datacenter update \
+   --resource-group $resourceGroupName \
+   --cluster-name $clusterName \
+   --data-center-name $dataCenterName \
- --node-count 9
+ --node-count 9
```
## Connect to your cluster
cqlsh $host 9042 -u cassandra -p $initial_admin_password --ssl
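To fill in `$host`, you need the IP address of one of the nodes (for example, a seed node) in the managed data center. A minimal sketch is shown below; the `--query` path assumes the CLI returns the resource with a `properties.seedNodes` list, so inspect the raw JSON from `az managed-cassandra datacenter show` first, as the exact output shape may differ.

```azurecli-interactive
# Hypothetical helper: pick the first seed node's IP address to use as $host
host=$(az managed-cassandra datacenter show \
  --resource-group $resourceGroupName \
  --cluster-name $clusterName \
  --data-center-name $dataCenterName \
  --query "properties.seedNodes[0].ipAddress" \
  --output tsv)
```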
If you encounter an error when applying permissions to your Virtual Network using Azure CLI, such as *Cannot find user or service principal in graph database for 'e5007d2c-4b13-4a74-9b6a-605d99f03501'*, you can apply the same permission manually from the Azure portal. Learn how to do this [here](add-service-principal.md).
-> [!NOTE]
-> The Azure Cosmos DB role assignment is used for deployment purposes only. Azure Managed Instanced for Apache Cassandra has no backend dependencies on Azure Cosmos DB.
+> [!NOTE]
+> The Azure Cosmos DB role assignment is used for deployment purposes only. Azure Managed Instance for Apache Cassandra has no backend dependencies on Azure Cosmos DB.
## Clean up resources
managed-instance-apache-cassandra Create Multi Region Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-multi-region-cluster.md
Last updated 11/02/2021
- ignite-fall-2021 - mode-other-- devx-track-azurecli
+- devx-track-azurecli
- kr2b-contr-experiment ms.devlang: azurecli
ms.devlang: azurecli
Azure Managed Instance for Apache Cassandra provides automated deployment and scaling operations for managed open-source Apache Cassandra datacenters. This service helps you accelerate hybrid scenarios and reduce ongoing maintenance.
-This quickstart demonstrates how to use the Azure CLI commands to configure a multi-region cluster in Azure.
+This quickstart demonstrates how to use the Azure CLI commands to configure a multi-region cluster in Azure.
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)] * This article requires the Azure CLI version 2.30.0 or higher. If you're using Azure Cloud Shell, the latest version is already installed.
-* [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) with connectivity to your self-hosted or on-premise environment. For more information on connecting on premises environments to Azure, see the [Connect an on-premises network to Azure](/azure/architecture/reference-architectures/hybrid-networking/) article.
+* [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) with connectivity to your self-hosted or on-premises environment. For more information on connecting on-premises environments to Azure, see the [Connect an on-premises network to Azure](/azure/architecture/reference-architectures/hybrid-networking/) article.
## Set up the network environment<a id="create-account"></a>
Because all datacenters provisioned with this service must be deployed into dedi
```
> [!NOTE]
- > We explicitly add different IP address ranges to ensure no errors when peering.
+ > We explicitly add different IP address ranges to ensure no errors when peering.
1. Peer the first VNet to the second VNet:
Because all datacenters provisioned with this service must be deployed into dedi
--vnet-name vnetEastUs \
--remote-vnet vnetEastUs2 \
--allow-vnet-access \
- --allow-forwarded-traffic
+ --allow-forwarded-traffic
```
> [!NOTE]
- > If you add more regions, each VNet requires peering from it to all other VNets, and from all other VNets to it.
+ > If you add more regions, each VNet requires peering from it to all other VNets, and from all other VNets to it.
1. Check the output of the previous command. Make sure the value of "peeringState" is now "Connected". You can also check this result by running the following command:
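One way to check the peering state from the CLI is to list the peerings on each VNet and query their state; a minimal sketch is below. The resource group name `cassandra-mi-multi-region` is taken from later steps in this article and may differ in your environment.

```azurecli-interactive
az network vnet peering list \
  --resource-group cassandra-mi-multi-region \
  --vnet-name vnetEastUs \
  --query "[].{name:name, state:peeringState}" \
  --output table
```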
If you encounter errors when you run `az role assignment create`, you might not
location='eastus2'
delegatedManagementSubnetId='/subscriptions/<SubscriptionID>/resourceGroups/cassandra-mi-multi-region/providers/Microsoft.Network/virtualNetworks/vnetEastUs2/subnets/dedicated-subnet'
initialCassandraAdminPassword='myPassword'
-
+ az managed-cassandra cluster create \
+   --cluster-name $clusterName \
+   --resource-group $resourceGroupName \
If you encounter errors when you run `az role assignment create`, you might not
dataCenterName='dc-eastus2'
dataCenterLocation='eastus2'
delegatedManagementSubnetId='/subscriptions/<SubscriptionID>/resourceGroups/cassandra-mi-multi-region/providers/Microsoft.Network/virtualNetworks/vnetEastUs2/subnets/dedicated-subnet'
-
+ az managed-cassandra datacenter create \
+   --resource-group $resourceGroupName \
+   --cluster-name $clusterName \
If you encounter errors when you run `az role assignment create`, you might not
delegatedManagementSubnetId='/subscriptions/<SubscriptionID>/resourceGroups/cassandra-mi-multi-region/providers/Microsoft.Network/virtualNetworks/vnetEastUs/subnets/dedicated-subnet'
virtualMachineSKU='Standard_D8s_v4'
noOfDisksPerNode=4
-
+ az managed-cassandra datacenter create \
+   --resource-group $resourceGroupName \
+   --cluster-name $clusterName \
+   --data-center-name $dataCenterName \
+   --data-center-location $dataCenterLocation \
+   --delegated-subnet-id $delegatedManagementSubnetId \
- --node-count 3
+ --node-count 3
--sku $virtualMachineSKU \
--disk-capacity $noOfDisksPerNode \
--availability-zone false
If you encounter errors when you run `az role assignment create`, you might not
> The value for `--sku` can be chosen from the following available SKUs:
>
> * Standard_E8s_v4
- > * Standard_E16s_v4
+ > * Standard_E16s_v4
> * Standard_E20s_v4
- > * Standard_E32s_v4
+ > * Standard_E32s_v4
> * Standard_DS13_v2
> * Standard_DS14_v2
> * Standard_D8s_v4
> * Standard_D16s_v4
- > * Standard_D32s_v4
+ > * Standard_D32s_v4
>
> Note also that `--availability-zone` is set to `false`. To enable availability zones, set this to `true`. Availability zones increase the availability SLA of the service. For more information, see [SLA for Azure Managed Instance for Apache Cassandra](https://azure.microsoft.com/support/legal/sla/managed-instance-apache-cassandra/v1_0/).
If you encounter errors when you run `az role assignment create`, you might not
```azurecli-interactive
resourceGroupName='cassandra-mi-multi-region'
clusterName='test-multi-region'
-
+ az managed-cassandra cluster node-status \
+   --cluster-name $clusterName \
+   --resource-group $resourceGroupName
If you encounter errors when you run `az role assignment create`, you might not
If you encounter an error when applying permissions to your Virtual Network using Azure CLI, you can apply the same permission manually from the Azure portal. An example error might be *Cannot find user or service principal in graph database for 'e5007d2c-4b13-4a74-9b6a-605d99f03501'*. For more information, see [Use Azure portal to add Cosmos DB service principal](add-service-principal.md).
-> [!NOTE]
-> The Azure Cosmos DB role assignment is used for deployment purposes only. Azure Managed Instanced for Apache Cassandra has no backend dependencies on Azure Cosmos DB.
+> [!NOTE]
+> The Azure Cosmos DB role assignment is used for deployment purposes only. Azure Managed Instance for Apache Cassandra has no backend dependencies on Azure Cosmos DB.
## Clean up resources
managed-instance-apache-cassandra Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/faq.md
Azure Managed Instance for Apache Cassandra is delivered by the Azure Cosmos DB
### Is Azure Managed Instance for Apache Cassandra dependent on Azure Cosmos DB?
-No, there's no architectural dependency between Azure Managed Instance for Apache Cassandra and the Azure Cosmos DB backend.
+No, there's no architectural dependency between Azure Managed Instance for Apache Cassandra and the Azure Cosmos DB backend.
### What versions of Apache Cassandra does the service support?
-The service currently supports Cassandra versions 3.11 and 4.0. By default, version 3.11 is deployed, as version 4.0 is currently in public preview. See our [Azure CLI Quickstart](create-cluster-cli.md) (step 5) for specifying Cassandra version during cluster deployment.
+The service currently supports Cassandra versions 3.11 and 4.0. By default, version 3.11 is deployed, as version 4.0 is currently in public preview. See our [Azure CLI Quickstart](create-cluster-cli.md) (step 5) for specifying Cassandra version during cluster deployment.
### Does Azure Managed Instance for Apache Cassandra have an SLA?
-Yes, the SLA is published [here](https://azure.microsoft.com/support/legal/sla/managed-instance-apache-cassandra/v1_0/).
+Yes, the SLA is published [here](https://azure.microsoft.com/support/legal/sla/managed-instance-apache-cassandra/v1_0/).
### Can I deploy Azure Managed Instance for Apache Cassandra in any region?
Azure Managed Instance for Apache Cassandra supports all of the features in Apac
### Can I pair an on-premises Apache Cassandra cluster with the Azure Managed Instance for Apache Cassandra?
-Yes, you can configure a hybrid cluster with Azure Virtual Network injected data-centers deployed by the service. Managed Instance data-centers can communicate with on-premise data-centers within the same cluster ring.
+Yes, you can configure a hybrid cluster with Azure Virtual Network injected data-centers deployed by the service. Managed Instance data-centers can communicate with on-premises data-centers within the same cluster ring.
### Where can I give feedback on Azure Managed Instance for Apache Cassandra features?
managed-instance-apache-cassandra Management Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/management-operations.md
Azure Managed Instance for Apache Cassandra provides automated deployment and sc
## Compaction
-* The system currently doesn't perform a major compaction.
+* The system currently doesn't perform a major compaction.
* Repair (see [Maintenance](#maintenance)) performs a Merkle tree compaction, which is a special kind of compaction.
* Depending on the compaction strategy on the keyspace, Cassandra automatically compacts when the keyspace reaches a specific size. We recommend that you carefully select a compaction strategy for your workload, and don't do any manual compactions outside the strategy.
Azure Managed Instance for Apache Cassandra provides automated deployment and sc
* During patching, machines are rebooted one rack at a time. You shouldn't experience any degradation at the application side as long as **quorum ALL setting is not being used**, and the replication factor is **3 or higher**.
-* The version in Apache Cassandra is in the format `X.Y.Z`. You can control the deployment of major (X) and minor (Y) versions manually via service tools. Whereas the Cassandra patches (Z) that may be required for that major/minor version combination are done automatically.
+* The version in Apache Cassandra is in the format `X.Y.Z`. You can control the deployment of major (X) and minor (Y) versions manually via service tools, whereas the Cassandra patches (Z) that may be required for that major/minor version combination are applied automatically.
>[!NOTE]
> The service currently supports Cassandra versions 3.11 and 4.0. By default, version 3.11 is deployed, as version 4.0 is currently in public preview. See our [Azure CLI Quickstart](create-cluster-cli.md) (step 5) for specifying Cassandra version during cluster deployment.
Azure Managed Instance for Apache Cassandra provides automated deployment and sc
## Support
-Azure Managed Instance for Apache Cassandra provides an [SLA](https://azure.microsoft.com/support/legal/sla/managed-instance-apache-cassandra/v1_0/) for the availability of data centers in a managed cluster. If you encounter any issues with using the service, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
+Azure Managed Instance for Apache Cassandra provides an [SLA](https://azure.microsoft.com/support/legal/sla/managed-instance-apache-cassandra/v1_0/) for the availability of data centers in a managed cluster. If you encounter any issues with using the service, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
>[!IMPORTANT]
> We will attempt to investigate and diagnose any issues reported via support case, and resolve or mitigate where possible. However, you are ultimately responsible for any Apache Cassandra configuration level usage which causes CPU, disk, or network problems.
Azure Managed Instance for Apache Cassandra provides an [SLA](https://azure.micr
> * Throughput that exceeds capacity.
> * Ingesting data that exceeds storage capacity.
> * Incorrect keyspace configuration settings.
-> * Poor data model or partition key strategy.
+> * Poor data model or partition key strategy.
>
-> In the event that we investigate a support case and discover that the root cause of the issue is at the Apache Cassandra configuration level (and not any underlying platform level aspects we maintain), the case may be closed. Where possible, we will also provide recommendations and guidance on remediation. We therefore recommend you [enable metrics](visualize-prometheus-grafana.md) and/or become familiar with our [Azure monitor integration](monitor-clusters.md ) in order to prevent common application/configuration level issues in Apache Cassandra, such as the above.
+> In the event that we investigate a support case and discover that the root cause of the issue is at the Apache Cassandra configuration level (and not any underlying platform level aspects we maintain), the case may be closed. Where possible, we will also provide recommendations and guidance on remediation. We therefore recommend you [enable metrics](visualize-prometheus-grafana.md) and/or become familiar with our [Azure Monitor integration](monitor-clusters.md) in order to prevent common application/configuration level issues in Apache Cassandra, such as the above.
>[!WARNING]
> Azure Managed Instance for Apache Cassandra also lets you run `nodetool` and `sstable` commands for routine DBA administration - see article [here](dba-commands.md). Some of these commands can destabilize the Cassandra cluster and should only be run carefully and after being tested in non-production environments. Where possible, a `--dry-run` option should be used first. Microsoft cannot offer any SLA or support on issues with running commands which alter the default database configuration and/or tables.
Azure Managed Instance for Apache Cassandra provides an [SLA](https://azure.micr
Snapshot backups are enabled by default and taken every 4 hours with [Medusa](https://github.com/thelastpickle/cassandra-medusa). Backups are stored in an internal Azure Blob Storage account and are retained for up to 2 days (48 hours). There's no cost for backups. To restore from a backup, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal. > [!WARNING]
-> Backups can be restored to the same VNet/subnet as your existing cluster, but they cannot be restored to the *same cluster*. Backups can only be restored to **new clusters**. Backups are intended for accidental deletion scenarios, and are not geo-redundant. They are therefore not recommended for use as a disaster recovery (DR) strategy in case of a total regional outage. To safeguard against region-wide outages, we recommend a multi-region deployment. Take a look at our [quickstart for multi-region deployments](create-multi-region-cluster.md).
+> Backups can be restored to the same VNet/subnet as your existing cluster, but they cannot be restored to the *same cluster*. Backups can only be restored to **new clusters**. Backups are intended for accidental deletion scenarios, and are not geo-redundant. They are therefore not recommended for use as a disaster recovery (DR) strategy in case of a total regional outage. To safeguard against region-wide outages, we recommend a multi-region deployment. Take a look at our [quickstart for multi-region deployments](create-multi-region-cluster.md).
## Security
For more information on security features, see our article [here](security.md).
## Hybrid support
-When a [hybrid](configure-hybrid-cluster.md) cluster is configured, automated reaper operations running in the service will benefit the whole cluster. This includes data centers that aren't provisioned by the service. Outside this, it is your responsibility to maintain your on-premise or externally hosted data center.
+When a [hybrid](configure-hybrid-cluster.md) cluster is configured, automated reaper operations running in the service will benefit the whole cluster. This includes data centers that aren't provisioned by the service. Outside this, it is your responsibility to maintain your on-premises or externally hosted data center.
## Next steps
marketplace Marketplace Metering Service Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-metering-service-authentication.md
description: Metering service authentication strategies supported in Azure Marke
Previously updated : 06/01/2021 Last updated : 06/13/2022
migrate Common Questions Server Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-server-migration.md
This article answers common questions about the Azure Migrate: Server Migration
The Azure Migrate: Server Migration tool offers two options for migrating your source servers and virtual machines to Azure: agentless migration and agent-based migration.
-Regardless of the migration option chosen, the first step to migrate a server using Azure Migration: Server Migration is to start replication for the server. This performs an initial replication of your VM/server data to Azure. After the initial replication is completed, an ongoing replication (ongoing delta-sync) is established to migrate incremental data to Azure. Once the operation reaches the delta-sync stage, you can choose to migrate to Azure at any time.
+Regardless of the migration option chosen, the first step to migrate a server using Azure Migrate: Server Migration is to start replication for the server. This performs an initial replication of your VM/server data to Azure. After the initial replication is completed, an ongoing replication (ongoing delta-sync) is established to migrate incremental data to Azure. Once the operation reaches the delta-sync stage, you can choose to migrate to Azure at any time.
Here are some considerations to keep in mind while deciding on the migration option.
While you can create assessments for multiple regions in an Azure Migrate projec
- For agent-based migrations (VMware, physical servers, and servers from other clouds), the target region is locked once the "Create Resources" button is clicked on the portal while setting up the replication appliance.
- For agentless Hyper-V migrations, the target region is locked once the "Create Resources" button is clicked on the portal while setting up the Hyper-V replication provider.
-### Can I use the same Azure Migrate project to migrate to multiple subscriptions?
+### Can I use the same Azure Migrate project to migrate to multiple subscriptions?
Yes, you can migrate to multiple subscriptions (same Azure tenant) in the same target region for an Azure Migrate project. You can select the target subscription while enabling replication for a machine or a set of machines. The target region is locked post first replication for agentless VMware migrations and during the replication appliance and Hyper-V provider installation for agent-based migrations and agentless Hyper-V migrations respectively.
### How is the data transmitted from on-prem environment to Azure? Is it encrypted before transmission?
-The Azure Migrate appliance in the agentless replication case compresses data and encrypts before uploading. Data is transmitted over a secure communication channel over https and uses TLS 1.2 or later. Additionally, Azure Storage automatically encrypts your data when it is persisted it to the cloud (encryption-at-rest).
+The Azure Migrate appliance in the agentless replication case compresses and encrypts data before uploading. Data is transmitted over a secure communication channel over HTTPS and uses TLS 1.2 or later. Additionally, Azure Storage automatically encrypts your data when it is persisted to the cloud (encryption-at-rest).
### Can I use the recovery services vault created by Azure Migrate for Disaster Recovery scenarios?
We do not recommend using the recovery services vault created by Azure Migrate for Disaster Recovery scenarios. Doing so can result in start replication failures in Azure Migrate.
### What is the difference between the Test Migration and Migrate operations?
-Test migration provides a way to test and validate migrations prior to the actual migration. Test migration works by letting you use a sandbox environment in Azure to test the virtual machines before actual migration. The sandbox environment is demarcated by a test virtual network you specify. The test migration operation is non-disruptive, provided the test VNet is sufficiently isolated. Isolated VNet here means the inbound and outbound connection rules are designed to avoid unwanted connections. For example ΓÇô connection to On-premise machines is restricted.
+Test migration provides a way to test and validate migrations prior to the actual migration. Test migration works by letting you use a sandbox environment in Azure to test the virtual machines before actual migration. The sandbox environment is demarcated by a test virtual network you specify. The test migration operation is non-disruptive, provided the test VNet is sufficiently isolated. Isolated VNet here means the inbound and outbound connection rules are designed to avoid unwanted connections. For example, connection to on-premises machines is restricted.
The applications can continue to run at the source while letting you perform tests on a cloned copy in an isolated sandbox environment. You can perform multiple tests, as needed, to validate the migration, perform app testing, and address any issues before the actual migration.
The applications can continue to run at the source while letting you perform tes
### Is there a Rollback option for Azure Migrate?
-You can use the Test Migration option to validate your application functionality and performance in Azure. You can perform any number of test migrations and can execute the final migration after establishing confidence through the test migration operation.
-A test migration doesnΓÇÖt impact the on-premises machine, which remains operational and continues replicating until you perform the actual migration. If there were any errors during the test migration UAT, you can choose to postpone the final migration and keep your source VM/server running and replicating to Azure. You can reattempt the final migration once you resolve the errors.
+You can use the Test Migration option to validate your application functionality and performance in Azure. You can perform any number of test migrations and can execute the final migration after establishing confidence through the test migration operation.
+A test migration doesn't impact the on-premises machine, which remains operational and continues replicating until you perform the actual migration. If there were any errors during the test migration UAT, you can choose to postpone the final migration and keep your source VM/server running and replicating to Azure. You can reattempt the final migration once you resolve the errors.
Note: Once you have performed a final migration to Azure and the on-premises source machine was shut down, you cannot perform a rollback from Azure to your on-premises environment.
### Can I select the Virtual Network and subnet to use for test migrations?
To [migrate VMware VMs](server-migrate-overview.md) using VMware agent-based or
### Can I consolidate multiple source VMs into one VM while migrating?
-Azure Migrate server migration capabilities support like for like migrations currently. We do not support consolidating servers or upgrading the operating system as part of the migration.
+Azure Migrate server migration capabilities currently support like-for-like migrations. We do not support consolidating servers or upgrading the operating system as part of the migration.
### Will Windows Server 2008 and 2008 R2 be supported in Azure after migration?
You can estimate the bandwidth or time needed for agentless VMware VM migration
Time to complete initial replication = {size of disks (or used size if available) * 0.7 (assuming a 30 percent compression average, a conservative estimate)} / bandwidth available for replication.
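As a hypothetical worked example (the numbers are illustrative, not from the article): for 1,024 GB of used disk and 100 Mbps of bandwidth available for replication, the formula gives roughly 16 hours.

```bash
# Illustrative estimate: 1,024 GB used disk, 30% compression, 0.1 Gbps available bandwidth
# data to transfer ~= 1024 GB * 0.7 = ~717 GB; converted to gigabits and divided by 0.1 Gbps
awk 'BEGIN { printf "%.1f hours\n", (1024 * 0.7 * 8) / (0.1 * 3600) }'
```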
-### How do I throttle replication in using Azure Migrate appliance for agentless VMware replication?
+### How do I throttle replication when using the Azure Migrate appliance for agentless VMware replication?
You can throttle using NetQosPolicy. Note that this throttling is applicable only to the outbound connections from the Azure Migrate appliance. For example:
$ThrottleBandwidthTrigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Monday,
$IncreaseBandwidthTrigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Monday,Tuesday,Wednesday,Thursday,Friday -At 6:00pm
#Setting the task action to execute the scripts
-$ThrottleBandwidthAction = New-ScheduledTaskAction -Execute "PowerShell.exe" -Argument "-executionpolicy bypass -noprofile -file $ThrottleBandwidthScript"
-$IncreaseBandwidthAction = New-ScheduledTaskAction -Execute "PowerShell.exe" -Argument "-executionpolicy bypass -noprofile -file $IncreaseBandwidthScript"
+$ThrottleBandwidthAction = New-ScheduledTaskAction -Execute "PowerShell.exe" -Argument "-executionpolicy bypass -noprofile -file $ThrottleBandwidthScript"
+$IncreaseBandwidthAction = New-ScheduledTaskAction -Execute "PowerShell.exe" -Argument "-executionpolicy bypass -noprofile -file $IncreaseBandwidthScript"
#Creating the Scheduled tasks
Register-ScheduledTask -TaskName $ThrottleBandwidthTask -Trigger $ThrottleBandwidthTrigger -User $User -Action $ThrottleBandwidthAction -RunLevel Highest -Force
Review the [article](./tutorial-migrate-aws-virtual-machines.md) to discover, as
In addition to agentless migration options for VMware virtual machines and Hyper-V virtual machines, the Server Migration tool provides an agent-based migration option to migrate Windows and Linux servers running on physical servers, or running as x86/x64 virtual machines on VMware, Hyper-V, AWS, Google Cloud Platform, etc.
-The agent-based migration method uses agent software installed on the server being migrated to replicate server data to Azure. The replication process uses an offload architecture in which the agent relays replication data to a dedicated replication server called the replication appliance or Configuration Server (or to a scale-out Process Server). [Learn more](./agent-based-migration-architecture.md) about how the agent-based migration option works.
+The agent-based migration method uses agent software installed on the server being migrated to replicate server data to Azure. The replication process uses an offload architecture in which the agent relays replication data to a dedicated replication server called the replication appliance or Configuration Server (or to a scale-out Process Server). [Learn more](./agent-based-migration-architecture.md) about how the agent-based migration option works.
Note: The replication appliance is different from the Azure Migrate discovery appliance and must be installed on a separate/dedicated machine.
migrate Concepts Azure Webapps Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-azure-webapps-assessment-calculation.md
Azure App Service readiness for web apps is based on feature compatibility check
### Azure App Service SKU
After the assessment determines the readiness based on configuration data, it determines the Azure App Service SKU that is suitable for running your apps in Azure App Service.
-Premium plans are for production workloads and run on dedicated Virtual Machine instances. Each instance can support multiple applications and domains. The Isolated plans host your apps in a private, dedicated Azure environment and are ideal for apps that require secure connections with your on-premise network.
+Premium plans are for production workloads and run on dedicated Virtual Machine instances. Each instance can support multiple applications and domains. The Isolated plans host your apps in a private, dedicated Azure environment and are ideal for apps that require secure connections with your on-premises network.
> [!NOTE]
> Currently, Azure Migrate only recommends I1, P1v2, and P1v3 SKUs. There are more SKUs available in Azure App service. [Learn more](https://azure.microsoft.com/pricing/details/app-service/windows/).
P1v3 | 16
> Your App Service plan can be scaled up and down at any time. [Learn more](../app-service/overview-hosting-plans.md#what-if-my-app-needs-more-capabilities-or-features).
## Next steps
-- [Review](best-practices-assessment.md) best practices for creating assessments.
+- [Review](best-practices-assessment.md) best practices for creating assessments.
- Learn how to run an [Azure App Service assessment](how-to-create-azure-app-service-assessment.md).
migrate Concepts Migration Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-migration-planning.md
Title: Build a migration plan with Azure Migrate
+ Title: Build a migration plan with Azure Migrate
description: Provides guidance on building a migration plan with Azure Migrate.
Follow this article to build your migration plan to Azure with [Azure Migrate](m
## Define cloud migration goals
-Before you start, understanding and evaluating your [motivation](/azure/cloud-adoption-framework/strategy/motivations) for moving to the cloud can contribute to a successful business outcome. As explained in the [Cloud Adoption Framework](/azure/cloud-adoption-framework), there are a number of triggers and outcomes.
+Before you start, understanding and evaluating your [motivation](/azure/cloud-adoption-framework/strategy/motivations) for moving to the cloud can contribute to a successful business outcome. As explained in the [Cloud Adoption Framework](/azure/cloud-adoption-framework), there are a number of triggers and outcomes.
**Business event** | **Migration outcome** |
-Datacenter exit | Cost
+Datacenter exit | Cost
Merger, acquisition, or divestiture | Reduction in vendor/technical complexity
Reduction in capital expenses | Optimization of internal operations
End of support for mission-critical technologies | Increase in business agility
Start by identifying your on-premises infrastructure, applications, and dependen
### Workloads in use
-Azure Migrate uses a lightweight Azure Migrate appliance to perform agentless discovery of on-premises VMware VMs, Hyper-V VMs, other virtualized servers, and physical servers. Continuous discovery collects server configuration information, and performance metadata, as well as application data. Here's what the appliance collects from on-premises servers:
+Azure Migrate uses a lightweight Azure Migrate appliance to perform agentless discovery of on-premises VMware VMs, Hyper-V VMs, other virtualized servers, and physical servers. Continuous discovery collects server configuration information, and performance metadata, as well as application data. Here's what the appliance collects from on-premises servers:
- Server, disk, and NIC metadata.
After collecting data, you can export the application inventory list to find app
![Application inventory export](./media/concepts-migration-planning/application-inventory-export.png)
-Along with data discovered with the Discovery and assessment tool, you can use your Configuration Management Database (CMDB) data to build a view of your server and database estate, and to understand how your servers are distributed across business units, application owners, geographies, etc. This helps decide which workloads to prioritize for migration.
+Along with data discovered with the Discovery and assessment tool, you can use your Configuration Management Database (CMDB) data to build a view of your server and database estate, and to understand how your servers are distributed across business units, application owners, geographies, etc. This helps decide which workloads to prioritize for migration.
### Dependencies between workloads
Azure provides flexibility to resize your cloud capacity over time, and migratio
You can export the assessment report, and filter on these categories to understand Azure readiness:
-- **Ready for Azure**: Servers can be migrated as-is to Azure, without any changes.
+- **Ready for Azure**: Servers can be migrated as-is to Azure, without any changes.
- **Conditionally ready for Azure**: Servers can be migrated to Azure, but need minor changes, in accordance with the remediation guidance provided in the assessment.
-- **Not ready for Azure**: Servers can't be migrated to Azure as-is. Issues must be fixed in accordance with remediation guidance, before migration.
+- **Not ready for Azure**: Servers can't be migrated to Azure as-is. Issues must be fixed in accordance with remediation guidance, before migration.
- **Readiness unknown**: Azure Migrate can't determine server readiness, because of insufficient metadata.

Using database assessments, you can assess the readiness of your SQL Server data estate for migration to Azure SQL Database, or Azure SQL Managed Instances. The assessment shows migration readiness status percentage for each of your SQL server instances. In addition, for each instance you can see the recommended target in Azure, potential migration blockers, a count of breaking changes, readiness for Azure SQL DB or Azure SQL VM, and a compatibility level. You can dig deeper to understand the impact of migration blockers, and recommendations for fixing them.
Using database assessments, you can assess the readiness of your SQL Server data
### Sizing Recommendations
-After a server is marked as ready for Azure, Discovery and assessment makes sizing recommendations that identify the Azure VM SKU and disk type for your servers. You can get sizing recommendations based on performance history (to optimize resources as you migrate), or based on on-premise server settings, without performance history. In a database assessment, you can see recommendations for the database SKU, pricing tier, and compute level.
+After a server is marked as ready for Azure, Discovery and assessment makes sizing recommendations that identify the Azure VM SKU and disk type for your servers. You can get sizing recommendations based on performance history (to optimize resources as you migrate), or based on on-premises server settings, without performance history. In a database assessment, you can see recommendations for the database SKU, pricing tier, and compute level.
### Get compute costs
-Performance-based sizing option in Azure Migrate assessments helps you to right-size VMs, and should be used as a best practice for optimizing workloads in Azure. In addition to right-sizing, there are a few other options to help save Azure costs:
+Performance-based sizing option in Azure Migrate assessments helps you to right-size VMs, and should be used as a best practice for optimizing workloads in Azure. In addition to right-sizing, there are a few other options to help save Azure costs:
- **Reserved Instances**: With [reserved instances(RI)](https://azure.microsoft.com/pricing/reserved-vm-instances/), you can significantly reduce costs compared to [pay-as-you-go pricing](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/).
- **Azure Hybrid Benefit**: With [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/), you can bring on-premises Windows Server licenses with active Software Assurance, or Linux subscriptions, to Azure, and combine with reserved instances options.
- **Enterprise Agreement**: Azure [Enterprise Agreements (EA)](../cost-management-billing/manage/ea-portal-agreements.md) can offer savings for Azure subscriptions and services.
- **Offers**: There are multiple [Azure Offers](https://azure.microsoft.com/support/legal/offer-details/). For example, [Pay-As-You-Go Dev/Test](https://azure.microsoft.com/pricing/dev-test/), or [Enterprise Dev/Test offer](https://azure.microsoft.com/offers/ms-azr-0148p/), to provide lower rates for dev/test VMs
- **VM uptime**: You can review days per month and hours per day in which Azure VMs run. Shutting off servers when they're not in use can reduce your costs (not applicable for RIs).
-- **Target region**: You can create assessments in different regions, to figure out whether migrating to a specific region might be more cost effective.
+- **Target region**: You can create assessments in different regions, to figure out whether migrating to a specific region might be more cost effective.
### Visualize data
-You can view Discovery and assessment reports (with Azure readiness information, and monthly cost distribution) in the portal. You can also export assessment, and enrich your migration plan with additional visualizations. You can create multiple assessments, with different combinations of properties, and choose the set of properties that work best for your business.
+You can view Discovery and assessment reports (with Azure readiness information, and monthly cost distribution) in the portal. You can also export assessment, and enrich your migration plan with additional visualizations. You can create multiple assessments, with different combinations of properties, and choose the set of properties that work best for your business.
![Assessments overview](./media/concepts-migration-planning/assessment-summary.png)

### Evaluate gaps/blockers
-As you figure out the apps and workloads you want to migrate, identify downtime constraints for them, and look for any operational dependencies between your apps and the underlying infrastructure. This analysis helps you to plan migrations that meet your recovery time objective (RTO), and ensure minimal to zero data loss. Before you migrate, we recommend that you review and mitigate any compatibility issues, or unsupported features, that may block server/SQL database migration. The Azure Migrate Discovery and assessment report, and Azure Migrate Database Assessment, can help with this.
+As you figure out the apps and workloads you want to migrate, identify downtime constraints for them, and look for any operational dependencies between your apps and the underlying infrastructure. This analysis helps you to plan migrations that meet your recovery time objective (RTO), and ensure minimal to zero data loss. Before you migrate, we recommend that you review and mitigate any compatibility issues, or unsupported features, that may block server/SQL database migration. The Azure Migrate Discovery and assessment report, and Azure Migrate Database Assessment, can help with this.
### Prioritize workloads

After you've collected information about your inventory, you can identify which apps and workloads to migrate first. Develop an "apply and learn" approach to migrate apps in a systematic and controllable way, so that you can iron out any flaws before starting a full-scale migration.
-To prioritize migration order, you can use strategic factors such as complexity, time-to-migrate, business urgency, production/non-production considerations, compliance, security requirements, application knowledge, etc.
+To prioritize migration order, you can use strategic factors such as complexity, time-to-migrate, business urgency, production/non-production considerations, compliance, security requirements, application knowledge, etc.
A few recommendations:
A few recommendations:
**Under-provisioned servers** | Export the assessment report, and filter for servers with low CPU utilization (%) and memory utilization (%). Migrate to a right-sized Azure VM, and save on costs for underutilized resources. **Over-provisioned servers** | Export the assessment report and filter for servers with high CPU utilization (%) and memory utilization (%). Solve capacity constraints, prevent overstrained servers from breaking, and increase performance by migrating these servers to Azure. In Azure, use autoscaling capabilities to meet demand.<br/><br/> Analyze assessment reports to investigate storage constraints. Analyze disk IOPS and throughput, and the recommended disk type. -- **Start small, then go big**: Start by moving apps and workloads that present minimal risk and complexity, to build confidence in your migration strategy. Analyze Azure Migrate assessment recommendations together with your CMDB repository, to find and migrate dev/test workloads that might be candidates for pilot migrations. Feedback and learnings from pilot migrations can be helpful as you begin migrating production workloads. -- **Comply**: Azure maintains the largest compliance portfolio in the industry, in terms of breadth and depth of offerings. Use compliance requirements to prioritize migrations, so that apps and workloads comply with your national, regional, and industry-specific standards and laws. This is especially true for organizations that deal with business-critical process, hold sensitive information, or are in heavily regulated industries. In these types of organizations, standards and regulations abound, and might change often, being difficult to keep up with.
+- **Start small, then go big**: Start by moving apps and workloads that present minimal risk and complexity, to build confidence in your migration strategy. Analyze Azure Migrate assessment recommendations together with your CMDB repository, to find and migrate dev/test workloads that might be candidates for pilot migrations. Feedback and learnings from pilot migrations can be helpful as you begin migrating production workloads.
+- **Comply**: Azure maintains the largest compliance portfolio in the industry, in terms of breadth and depth of offerings. Use compliance requirements to prioritize migrations, so that apps and workloads comply with your national, regional, and industry-specific standards and laws. This is especially true for organizations that deal with business-critical process, hold sensitive information, or are in heavily regulated industries. In these types of organizations, standards and regulations abound, and might change often, being difficult to keep up with.
## Finalize the migration plan
-Before finalizing your migration plan, make sure you consider and mitigate other potential blockers, as follows:
+Before finalizing your migration plan, make sure you consider and mitigate other potential blockers, as follows:
- **Network requirements**: Evaluate network bandwidth and latency constraints, which might cause unforeseen delays and disruptions to migration replication speed. - **Testing/post-migration tweaks**: Allow a time buffer to conduct performance and user acceptance testing for migrated apps, or to configure/tweak apps post-migration, such as updating database connection strings, configuring web servers, performing cut-overs/cleanup etc. - **Permissions**: Review recommended Azure permissions, and server/database access roles and permissions needed for migration.-- **Training**: Prepare your organization for the digital transformation. A solid training foundation is important for successful organizational change. Check out free training on [Microsoft Learn](/learn/azure/?ocid=CM_Discovery_Checklist_PDF), including courses on Azure fundamentals, solution architectures, and security. Encourage your team to exploreΓÇ»[Azure certifications](https://www.microsoft.com/learning/certification-overview.aspx?ocid=CM_Discovery_Checklist_PDF).ΓÇ» -- **Implementation support**: Get support for your implementation if you need it. Many organizations opt for outside help to support their cloud migration. To move to Azure quickly and confidently with personalized assistance, consider anΓÇ»[Azure Expert Managed Service Provider](https://www.microsoft.com/solution-providers/search?cacheId=9c2fed4f-f9e2-42fb-8966-4c565f08f11e&ocid=CM_Discovery_Checklist_PDF), orΓÇ»[FastTrack for Azure](https://azure.microsoft.com/programs/azure-fasttrack/?ocid=CM_Discovery_Checklist_PDF).ΓÇ»
+- **Training**: Prepare your organization for the digital transformation. A solid training foundation is important for successful organizational change. Check out free training on [Microsoft Learn](/learn/azure/?ocid=CM_Discovery_Checklist_PDF), including courses on Azure fundamentals, solution architectures, and security. Encourage your team to exploreΓÇ»[Azure certifications](https://www.microsoft.com/learning/certification-overview.aspx?ocid=CM_Discovery_Checklist_PDF).ΓÇ»
+- **Implementation support**: Get support for your implementation if you need it. Many organizations opt for outside help to support their cloud migration. To move to Azure quickly and confidently with personalized assistance, consider an [Azure Expert Managed Service Provider](https://www.microsoft.com/solution-providers/search?cacheId=9c2fed4f-f9e2-42fb-8966-4c565f08f11e&ocid=CM_Discovery_Checklist_PDF), or [FastTrack for Azure](https://azure.microsoft.com/programs/azure-fasttrack/?ocid=CM_Discovery_Checklist_PDF).
+Create an effective cloud migration plan that includes detailed information about the apps you want to migrate, app/database availability, downtime constraints, and migration milestones. The plan should consider how long the data copy will take, and include a realistic buffer for post-migration testing and cut-over activities.
+A post-migration testing plan should include functional, integration, security, and performance testing and use cases, to ensure that migrated apps work as expected, and that all database objects, and data relationships, are transferred successfully to the cloud.
+Build a migration roadmap, and declare a maintenance window to migrate your apps and databases with minimal to zero downtime, and limit the potential operational and business impact during migration.
## Migrate
We recommend that you run a test migration in Azure Migrate, before starting a f
When you're ready for migration, use the Azure Migrate: Server Migration tool, and the Azure Database Migration Service (DMS), for a seamless and integrated migration experience, with end-to-end tracking.
- With the Server Migration tool, you can migrate on-premises VMs and servers, or VMs located in other private or public clouds (including AWS and GCP), with near-zero downtime.
+- Azure DMS provides a fully managed service that's designed to enable seamless migrations from multiple database sources to Azure Data platforms, with minimal downtime.
## Next steps
migrate Migrate Replication Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-replication-appliance.md
NIC type | VMXNET3 (if the appliance is a VMware VM)
**Hardware settings** |
CPU cores | 8
RAM | 16 GB
-Number of disks | Three: The OS disk, process server cache disk, and retention drive.
+Number of disks | Two: The OS disk and the process server cache disk.
Free disk space (cache) | 600 GB
-Free disk space (retention disk) | 600 GB
**Software settings** |
Operating system | Windows Server 2016 or Windows Server 2012 R2
License | The appliance comes with a Windows Server 2016 evaluation license, which is valid for 180 days. <br>If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance, or that you activate the operating system license of the appliance VM.
mysql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-csharp.md
For this quickstart you need:
- [Create a database and non-admin user](./how-to-create-users.md)
-[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
## Create a C# project At a command prompt, run:
namespace AzureMySqlExample
} ```
-[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
-- ## Step 2: Read data Use the following code to connect and read the data by using a `SELECT` SQL statement. The code uses the `MySqlConnection` class with methods:
namespace AzureMySqlExample
} ```
-[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
- ## Step 3: Update data Use the following code to connect and update the data by using an `UPDATE` SQL statement. The code uses the `MySqlConnection` class with method: - [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.
namespace AzureMySqlExample
} } ```
-[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
## Step 4: Delete data Use the following code to connect and delete the data by using a `DELETE` SQL statement.
mysql Connect Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-php.md
if (mysqli_connect_errno())
die('Failed to connect to MySQL: '.mysqli_connect_error()); } ```
-[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
## Step 2: Create a Table Use the following code to connect. This code calls:
az group delete \
> [!div class="nextstepaction"] > [Manage Azure Database for MySQL server using CLI](./how-to-manage-single-server-cli.md)
-[Cannot find what you are looking for? Let us know.](https://aka.ms/mysql-doc-feedback)
mysql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-python.md
Install Python and the MySQL connector for Python on your computer by using the
pip install mysql-connector-python ```
-[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
- ## Get connection information Get the connection information you need to connect to Azure Database for MySQL from the Azure portal. You need the server name, database name, and login credentials.
else:
print("Done.") ```
-[Having issues? Let us know](https://aka.ms/mysql-doc-feedback)
- ## Step 2: Read data Use the following code to connect and read the data by using a **SELECT** SQL statement. The code imports the mysql.connector library, and uses the [cursor.execute()](https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html) method to execute the SQL query against the MySQL database.
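The quickstart's full listing isn't reproduced in this digest, so the following is a minimal sketch of the read step described above, using mysql-connector-python. The server name, credentials, database, and the `inventory` table are placeholders inferred from the surrounding steps, not values from the original article.

```python
# Sketch of the read step: connect to Azure Database for MySQL and run a SELECT.
# Replace the placeholder host, user, password, and database with your own values.
import mysql.connector
from mysql.connector import errorcode

config = {
    "host": "<your-server-name>.mysql.database.azure.com",
    "user": "<your-admin-user>@<your-server-name>",
    "password": "<your-password>",
    "database": "quickstartdb",
    "ssl_disabled": False,  # Azure Database for MySQL enforces SSL by default
}

try:
    conn = mysql.connector.connect(**config)
except mysql.connector.Error as err:
    if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
        print("Wrong user name or password")
    else:
        print(err)
else:
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM inventory;")  # 'inventory' is the sample table assumed here
    for row in cursor.fetchall():
        print(row)
    cursor.close()
    conn.close()
```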
az group delete \
> [!div class="nextstepaction"] > [Manage Azure Database for MySQL server using CLI](./how-to-manage-single-server-cli.md)
-[Cannot find what you are looking for? Let us know.](https://aka.ms/mysql-doc-feedback)
network-function-manager Create Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-function-manager/create-device.md
Title: 'Quickstart: Create a device resource for Azure Network Function Manager' description: In this quickstart, learn about how to create a device resource for Azure Network Function Manager.-+ Last updated 11/02/2021-+ # Quickstart: Create a Network Function Manager device resource
network-function-manager Deploy Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-function-manager/deploy-functions.md
Title: 'Tutorial: Deploy network functions on Azure Stack Edge' description: In this tutorial, learn how to deploy a network function as a managed application.-+ Last updated 11/02/2021-+ # Tutorial: Deploy network functions on Azure Stack Edge
network-function-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-function-manager/faq.md
Title: Network Function Manager FAQ description: Learn FAQs about Network Function Manager.-+ Last updated 11/02/2021-+ # Azure Network Function Manager FAQ
network-function-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-function-manager/overview.md
Title: About Azure Network Function Manager description: Learn about Azure Network Function Manager, a fully managed cloud-native orchestration service that lets you deploy and provision network functions on Azure Stack Edge Pro with GPU for a consistent hybrid experience using the Azure portal.-+ Last updated 11/02/2021-+ # What is Azure Network Function Manager?
network-function-manager Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-function-manager/partners.md
Title: Azure Network Function Manager partners description: Learn about partners offering their network functions for use with this service.-+ Last updated 11/02/2021-+ # Network Function Manager partners
network-function-manager Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-function-manager/requirements.md
Title: Prerequisites and requirements for Azure Network Function Manager description: Learn about the requirements and prerequisites for Network Function Manager.-+ Last updated 11/02/2021-+
network-function-manager Resources Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-function-manager/resources-permissions.md
Title: How to register resources description: Learn how to register resources and create user-assigned managed identities-+ Last updated 11/02/2021-+
payment-hsm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/overview.md
# What is Azure Payment HSM?
+Azure Payment HSM Service is a "BareMetal" service delivered using [Thales payShield 10K payment hardware security modules (HSM)](https://cpl.thalesgroup.com/encryption/hardware-security-modules/payment-hsms/payshield-10k) to provide cryptographic key operations for real-time, critical payment transactions in the Azure cloud. Azure Payment HSM is designed specifically to help a service provider and an individual financial institution accelerate their payment system's digital transformation strategy and adopt the public cloud. It meets the most stringent security, audit compliance, low latency, and high-performance requirements by the Payment Card Industry (PCI).
Payment HSMs are provisioned and connected directly to users' virtual network, and HSMs are under users' sole administration control. HSMs can be easily provisioned as a pair of devices and configured for high availability. Users of the service utilize [Thales payShield Manager](https://cpl.thalesgroup.com/encryption/hardware-security-modules/payment-hsms/payshield-manager) for secure remote access to the HSMs as part of their Azure-based subscription. Multiple subscription options are available to satisfy a broad range of performance and application requirements, and subscriptions can be upgraded quickly in line with end-user business growth. At its highest performance level, the Azure Payment HSM service offers 2500 CPS.
Azure Payment HSM is a highly specialized service. Therefore, we recommend that you
## Why use Azure Payment HSM?
-Momentum is building as financial institutions move some or all of their payment applications to the cloud. This entails a migration from the legacy on-premises (on-prem) applications and HSMs to a cloud-based infrastructure that isn't generally under their direct control. Often it means a subscription service rather than perpetual ownership of physical equipment and software. Corporate initiatives for efficiency and a scaled-down physical presence are the drivers for this. Conversely, with cloud-native organizations, the adoption of cloud-first without any on-premise presence is their fundamental business model. Whatever the reason, end users of a cloud-based payment infrastructure expect reduced IT complexity, streamlined security compliance, and flexibility to scale their solution seamlessly as their business grows.
+Momentum is building as financial institutions move some or all of their payment applications to the cloud. This entails a migration from the legacy on-premises (on-prem) applications and HSMs to a cloud-based infrastructure that isn't generally under their direct control. Often it means a subscription service rather than perpetual ownership of physical equipment and software. Corporate initiatives for efficiency and a scaled-down physical presence are the drivers for this. Conversely, with cloud-native organizations, the adoption of cloud-first without any on-premises presence is their fundamental business model. Whatever the reason, end users of a cloud-based payment infrastructure expect reduced IT complexity, streamlined security compliance, and flexibility to scale their solution seamlessly as their business grows.
The cloud offers significant benefits, but the challenges of migrating a legacy on-premise payment application (involving payment HSMs) to the cloud must be addressed. Some of these challenges are:
Azure Payment HSM addresses these challenges and delivers a compelling value pro
### Enhanced security and compliance
-
+End users of the service can leverage Microsoft security and compliance investments to increase their security posture. Microsoft maintains PCI DSS and PCI 3DS compliant Azure data centers, including those which house Azure Payment HSM solutions. The Azure Payment HSM solution can be deployed as part of a validated PCI P2PE / PCI PIN component or solution, helping to simplify ongoing security audit compliance. Thales payShield 10K HSMs deployed in the security infrastructure are certified to FIPS 140-2 Level 3 and PCI HSM v3.
+
### Customer-managed HSM in Azure
The Azure Payment HSM is a part of a subscription service that offers single-tenant HSMs for the service customer to have complete administrative control and exclusive access to the HSM. The customer could be a payment service provider acting on behalf of multiple financial institutions or a financial institution that wishes to directly access the Azure Payment HSM service. Once the HSM is allocated to a customer, Microsoft has no access to customer data. Likewise, when the HSM is no longer required, customer data is zeroized and erased as soon as the HSM is released to ensure complete privacy and security is maintained. The customer is responsible for ensuring sufficient HSM subscriptions are active to meet their requirements for backup, disaster recovery, and resilience to achieve the same performance available on their on-premise HSMs.

### Accelerate digital transformation and innovation in cloud
+For existing Thales payShield customers wishing to add a cloud option, the Azure Payment HSM solution offers native access to a payment HSM in Azure for "lift and shift" while still experiencing the low latency they're accustomed to via their on-premise payShield HSMs. The solution also offers high-performance transactions for mission-critical payment applications. Consequently, customers can continue their digital transformation strategy by leveraging technology innovation in the cloud. Existing Thales payShield customers can utilize their existing remote management solutions (payShield Manager and payShield TMD together with associated smart card readers and smart cards as appropriate) to work with the Azure Payment HSM service. Customers new to payShield can source the hardware accessories from Thales or one of its partners before deploying their HSM as part of the subscription service.
## Typical use cases
Sensitive data protection
## Suitable for both existing and new payment HSM users
+The solution provides clear benefits for both Payment HSM users with a legacy on-premise HSM footprint and those new payment ecosystem entrants with no legacy infrastructure to support and who may choose a cloud-native approach from the outset.
Benefits for existing on-premise HSM users
- Requires no modifications to payment applications or HSM software to migrate existing applications to the Azure solution
Benefits for existing on-premise HSM users
Benefits for new payment participants
- Avoids introduction of on-premise HSM infrastructure
+- Lowers upfront investment via the Azure subscription model
- Offers access to latest certified hardware and software on-demand

## Glossary
postgresql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-csharp.md
namespace Driver
} ```
-[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback)
- ## Step 2: Read data Use the following code to connect and read the data using a **SELECT** SQL statement. The code uses NpgsqlCommand class with method: - [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to PostgreSQL.
namespace Driver
} ```
-[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback)
- ## Step 3: Update data Use the following code to connect and update the data using an **UPDATE** SQL statement. The code uses NpgsqlCommand class with method: - [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to PostgreSQL.
namespace Driver
```
-[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback)
- ## Step 4: Delete data Use the following code to connect and delete data using a **DELETE** SQL statement.
az group delete \
> [!div class="nextstepaction"] > [Manage Azure Database for MySQL server using CLI](./how-to-manage-server-cli.md)
-[Cannot find what you are looking for? Let us know.](https://aka.ms/postgres-doc-feedback)
postgresql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-python.md
When the code runs successfully, it produces the following output:
:::image type="content" source="media/connect-python/2-example-python-output.png" alt-text="Command-line output":::
-[Having issues? Let us know](https://aka.ms/postgres-doc-feedback)
- ## Step 2: Read data The following code example connects to your Azure Database for PostgreSQL database and uses - [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) with the SQL **SELECT** statement to read data.
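The full code listing is not included in this digest; the following is a condensed sketch of that read step using psycopg2. The server name, credentials, database, and `inventory` table are placeholders inferred from the surrounding steps.

```python
# Sketch of the read step: connect to Azure Database for PostgreSQL and run a SELECT.
# Replace the placeholder host, dbname, user, and password with your own values.
import psycopg2

conn = psycopg2.connect(
    host="<your-server-name>.postgres.database.azure.com",
    dbname="<your-database>",
    user="<your-admin-user>@<your-server-name>",
    password="<your-password>",
    sslmode="require",  # Azure Database for PostgreSQL enforces SSL by default
)

cursor = conn.cursor()
cursor.execute("SELECT * FROM inventory;")  # 'inventory' is the sample table assumed here
rows = cursor.fetchall()
for row in rows:
    print("Data row = (%s, %s, %s)" % (str(row[0]), str(row[1]), str(row[2])))

cursor.close()
conn.close()
```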
for row in rows:
```
-[Having issues? Let us know](https://aka.ms/postgres-doc-feedback)
## Step 3: Update data The following code example uses [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) with the SQL **UPDATE** statement to update data.
cursor.execute("UPDATE inventory SET quantity = %s WHERE name = %s;", (200, "ban
print("Updated 1 row of data") ```
-[Having issues? Let us know](https://aka.ms/postgres-doc-feedback)
## Step 5: Delete data The following code example runs [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) with the SQL **DELETE** statement to delete an inventory item that you previously inserted.
print("Deleted 1 row of data")
```
-[Having issues? Let us know](https://aka.ms/postgres-doc-feedback)
- ## Clean up resources To clean up all resources used during this quickstart, delete the resource group using the following command:
az group delete \
> [!div class="nextstepaction"] > [Manage Azure Database for MySQL server using CLI](./how-to-manage-server-cli.md)<br/>
-[Cannot find what you are looking for? Let us know.](https://aka.ms/postgres-doc-feedback)
postgresql Overview Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/overview-single-server.md
In this article, we will provide an overview and introduction to core concepts o
## Overview
-Single Server is a fully managed database service with minimal requirements for customizations of the database. The single server platform is designed to handle most of the database management functions such as patching, backups, high availability, security with minimal user configuration and control. The architecture is optimized to provide 99.99% availability on single availability zone. It supports community version of PostgreSQL 9.6, 10, and 11. The service is generally available today in wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
+Single Server is a fully managed database service with minimal requirements for customizations of the database. The single server platform is designed to handle most of the database management functions, such as patching, backups, high availability, and security, with minimal user configuration and control. The architecture is optimized to provide 99.99% availability in a single availability zone. It supports the community versions of PostgreSQL 10 and 11. The service is generally available today in a wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
Single servers are best suited for cloud native applications designed to handle automated patching without the need for granular control on the patching schedule and custom PostgreSQL configuration settings.
purview Manage Integration Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-integration-runtimes.md
Installation of the self-hosted integration runtime on a domain controller isn't
> Any requirements will be listed in the **Prerequisites** section. - Self-hosted integration runtime requires a 64-bit Operating System with .NET Framework 4.7.2 or above. See [.NET Framework System Requirements](/dotnet/framework/get-started/system-requirements) for details.+
+- Ensure Visual C++ Redistributable for Visual Studio 2015 or higher is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://docs.microsoft.com/cpp/windows/latest-supported-vc-redist#visual-studio-2015-2017-2019-and-2022).
+
- The recommended minimum configuration for the self-hosted integration runtime machine is a 2-GHz processor with 4 cores, 8 GB of RAM, and 80 GB of available hard drive space. For the details of system requirements, see [Download](https://www.microsoft.com/download/details.aspx?id=39717).
- If the host machine hibernates, the self-hosted integration runtime doesn't respond to data requests. Configure an appropriate power plan on the computer before you install the self-hosted integration runtime. If the machine is configured to hibernate, the self-hosted integration runtime installer prompts with a message.
- You must be an administrator on the machine to successfully install and configure the self-hosted integration runtime.
remote-rendering Convert Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/quickstarts/convert-model.md
Fill out the form in the following manner:
* Create a new Resource Group from the link below the drop-down box and name this **ARR_Tutorial** * For the **Storage account name**, enter a unique name here. **This name must be globally unique**, otherwise there will be a prompt that informs you that the name is already taken. In the scope of this quickstart, we name it **arrtutorialstorage**. Accordingly, you need to replace it with your name for any occurrence in this quickstart. * Select a **location** close to you. Ideally use the same location as used for setting up the rendering in the other quickstart.
-* **Performance** set to 'Standard'
+* **Performance** set to 'Premium'. 'Standard' works as well, but results in slower loading times when a model is loaded by the runtime.
* **Account kind** set to 'StorageV2 (general purpose v2)' * **Replication** set to 'Read-access geo-redundant storage (RA-GRS)' * **Access tier** set to 'Hot'
search Cognitive Search Custom Skill Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-custom-skill-interface.md
When you create a Web API enricher, you can describe HTTP headers and parameters
"skills": [ { "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
- "name": "myCustomSkill"
+ "name": "myCustomSkill",
"description": "This skill calls an Azure function, which in turn calls TA sentiment", "uri": "https://indexer-e2e-webskill.azurewebsites.net/api/DateExtractor?language=en", "context": "/document",
This article covered the interface requirements necessary for integrating a cust
+ [Example: Creating a custom skill for AI enrichment](cognitive-search-create-custom-skill-example.md) + [How to define a skillset](cognitive-search-defining-skillset.md) + [Create Skillset (REST)](/rest/api/searchservice/create-skillset)
++ [How to map enriched fields](cognitive-search-output-field-mapping.md)
sentinel Threat Intelligence Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/threat-intelligence-integration.md
You can also connect to threat intelligence sources from playbooks, in order to
To connect to TAXII threat intelligence feeds, follow the instructions to [connect Microsoft Sentinel to STIX/TAXII threat intelligence feeds](connect-threat-intelligence-taxii.md), together with the data supplied by each vendor linked below. You may need to contact the vendor directly to obtain the necessary data to use with the connector.
+### Accenture Cyber Threat Intelligence
+
+- [Learn about Accenture CTI integration with Microsoft Sentinel](https://www.accenture.com/us-en/services/security/cyber-defense).
+ ### Anomali Limo - [See what you need to connect to Anomali Limo feed](https://www.anomali.com/resources/limo).
service-connector How To Integrate App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-app-configuration.md
Previously updated : 03/02/2022 Last updated : 06/13/2022 # Integrate Azure App Configuration with Service Connector
This page shows the supported authentication types and client types of Azure App
## Supported compute services - Azure App Service
+- Azure Container Apps
- Azure Spring Cloud ## Supported authentication types and client types
service-connector How To Integrate Confluent Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-confluent-kafka.md
Previously updated : 05/03/2022 Last updated : 06/13/2022
-# Integrate Apache kafka on Confluent Cloud with Service Connector
+# Integrate Apache Kafka on Confluent Cloud with Service Connector
This page shows the supported authentication types and client types of Apache Kafka on Confluent Cloud using Service Connector. You might still be able to connect to Apache Kafka on Confluent Cloud in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
-## Supported compute service
+## Supported compute services
- Azure App Service
+- Azure Container Apps
- Azure Spring Cloud ## Supported Authentication types and client types
-| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal |
-| | | | | |
-| .NET | | | ![yes icon](./media/green-check.png) | |
-| Java | | | ![yes icon](./media/green-check.png) | |
-| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
-| Node.js | | | ![yes icon](./media/green-check.png) | |
-| Python | | | ![yes icon](./media/green-check.png) | |
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|-|--|--|-|
+| .NET | | | ![yes icon](./media/green-check.png) | |
+| Java | | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
+| Node.js | | | ![yes icon](./media/green-check.png) | |
+| Python | | | ![yes icon](./media/green-check.png) | |
## Default environment variable names or application properties
service-connector How To Integrate Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-db.md
Previously updated : 05/03/2022 Last updated : 06/13/2022 # Integrate Azure Cosmos DB with Service Connector
This page shows the supported authentication types and client types of Azure Cos
## Supported compute service - Azure App Service
+- Azure Container Apps
- Azure Spring Cloud
-## Supported Authentication types and client types
-
-| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal |
-| | | | | |
-| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
-| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+## Supported authentication types and client types
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
## Default environment variable names or application properties
-### Dotnet, Java, Nodejs, and Go
-
-**Secret/ConnectionString**
+### Secret / Connection string
| Default environment variable name | Description | Example value | | | | | | AZURE_COSMOS_CONNECTIONSTRING | MongoDB in Cosmos DB connection string | `mongodb://{mongo-db-admin-user}:{********}@{mongo-db-server}.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@{mongo-db-server}@` |
-**System-assigned Managed Identity**
+### System-assigned managed identity
| Default environment variable name | Description | Example value | | | | |
This page shows the supported authentication types and client types of Azure Cos
| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` | | AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint| `https://{your-database-server}.documents.azure.com:443/` |
-**User-assigned Managed Identity**
+### User-assigned managed identity
| Default environment variable name | Description | Example value | | | | |
This page shows the supported authentication types and client types of Azure Cos
| AZURE_COSMOS_SUBSCRIPTIONID | Your subscription ID | `{your-subscription-id}` | | AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint| `https://{your-database-server}.documents.azure.com:443/` |
-**Service Principal**
+### Service principal
| Default environment variable name | Description | Example value | | | | |
service-connector How To Integrate Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-event-hubs.md
Previously updated : 05/03/2022 Last updated : 06/13/2022 # Integrate Azure Event Hubs with Service Connector
This page shows the supported authentication types and client types of Azure Eve
## Supported compute services - Azure App Service
+- Azure Container Apps
- Azure Spring Cloud ## Supported authentication types and client types
service-connector How To Integrate Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-key-vault.md
Previously updated : 05/03/2022 Last updated : 06/13/2022 # Integrate Azure Key Vault with Service Connector
This page shows the supported authentication types and client types of Azure Key
## Supported compute service - Azure App Service
+- Azure Container Apps
- Azure Spring Cloud
-## Supported Authentication types and client types
+## Supported authentication types and client types
-| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal |
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
| | | | | | | .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | | Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
This page shows the supported authentication types and client types of Azure Key
### .NET, Java, Node.JS, Python
-**System-assigned Managed Identity**
+#### System-assigned managed identity
| Default environment variable name | Description | Example value | | | | | | AZURE_KEYVAULT_SCOPE | Your Azure RBAC scope | `https://management.azure.com/.default` | | AZURE_KEYVAULT_RESOURCEENDPOINT | Your Key Vault endpoint | `https://{yourKeyVault}.vault.azure.net/` |
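As a rough illustration (not part of the original article) of how an application consumes these injected variables with a system-assigned managed identity, the sketch below uses `DefaultAzureCredential` with the endpoint variable; the secret name is a placeholder, and the `azure-identity` and `azure-keyvault-secrets` packages are assumed to be installed.

```python
# Illustrative sketch: authenticate with the app's system-assigned managed identity
# and read a secret from the Key Vault endpoint injected by Service Connector.
import os
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url=os.environ["AZURE_KEYVAULT_RESOURCEENDPOINT"],
    credential=DefaultAzureCredential(),
)
secret = client.get_secret("<your-secret-name>")  # placeholder secret name
print(secret.name)
```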
-**User-assigned Managed Identity**
+#### User-assigned managed identity
| Default environment variable name | Description | Example value | | | | |
This page shows the supported authentication types and client types of Azure Key
| AZURE_KEYVAULT_RESOURCEENDPOINT | Your Key Vault endpoint | `https://{yourKeyVault}.vault.azure.net/` | | AZURE_KEYVAULT_CLIENTID | Your Client ID | `{yourClientID}` |
-**Service Principal**
+#### Service principal
| Default environment variable name | Description | Example value | | | | |
This page shows the supported authentication types and client types of Azure Key
### Java - Spring Boot
-**Service Principal**
+#### Java - Spring Boot service principal
| Default environment variable name | Description | Example value | | | | |
service-connector How To Integrate Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-mysql.md
Previously updated : 05/03/2022 Last updated : 06/13/2022 # Integrate Azure Database for MySQL with Service Connector
This page shows the supported authentication types and client types of Azure Dat
## Supported compute service - Azure App Service
+- Azure Container Apps
- Azure Spring Cloud
-## Supported Authentication types and client types
+## Supported authentication types and client types
-| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal |
-| | | | | |
-| .NET (MySqlConnector) | | | ![yes icon](./media/green-check.png) | |
-| Java (JDBC) | | | ![yes icon](./media/green-check.png) | |
-| Java - Spring Boot (JDBC) | | | ![yes icon](./media/green-check.png) | |
-| Node.js (mysql) | | | ![yes icon](./media/green-check.png) | |
-| Python (mysql-connector-python) | | | ![yes icon](./media/green-check.png) | |
-| Python-Django | | | ![yes icon](./media/green-check.png) | |
-| Go (go-sql-driver for mysql) | | | ![yes icon](./media/green-check.png) | |
-| PHP (mysqli) | | | ![yes icon](./media/green-check.png) | |
-| Ruby (mysql2) | | | ![yes icon](./media/green-check.png) | |
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+||-|--|--|-|
+| .NET (MySqlConnector) | | | ![yes icon](./media/green-check.png) | |
+| Java (JDBC) | | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot (JDBC) | | | ![yes icon](./media/green-check.png) | |
+| Node.js (mysql) | | | ![yes icon](./media/green-check.png) | |
+| Python (mysql-connector-python) | | | ![yes icon](./media/green-check.png) | |
+| Python-Django | | | ![yes icon](./media/green-check.png) | |
+| Go (go-sql-driver for mysql) | | | ![yes icon](./media/green-check.png) | |
+| PHP (mysqli) | | | ![yes icon](./media/green-check.png) | |
+| Ruby (mysql2) | | | ![yes icon](./media/green-check.png) | |
## Default environment variable names or application properties
-### .NET (MySqlConnector)
-
-**Secret/ConnectionString**
+### .NET (MySqlConnector) secret / connection string
| Default environment variable name | Description | Example value | | | | | | AZURE_MYSQL_CONNECTIONSTRING | ADO.NET MySQL connection string | `Server={MySQLName}.mysql.database.azure.com;Database={MySQLDbName};Port=3306;SSL Mode=Required;User Id={MySQLUsername};Password={TestDbPassword}` |
-### Java (JDBC)
-
-**Secret/ConnectionString**
+### Java (JDBC) secret / connection string
| Default environment variable name | Description | Example value | | | | | | AZURE_MYSQL_CONNECTIONSTRING | JDBC MySQL connection string | `jdbc:mysql://{MySQLName}.mysql.database.azure.com:3306/{MySQLDbName}?sslmode=required&user={MySQLUsername}&password={Uri.EscapeDataString(TestDbPassword)}` |
-### Java - Spring Boot (JDBC)
-
-**Secret/ConnectionString**
+### Java - Spring Boot (JDBC) secret / connection string
| Application properties | Description | Example value | | | | |
This page shows the supported authentication types and client types of Azure Dat
| spring.datasource.username | Database username | `{MySQLUsername}@{MySQLName}` | | spring.datasource.password | Database password | `****` |
-### Node.js (mysql)
-
-**Secret/ConnectionString**
+### Node.js (mysql) secret / connection string
| Default environment variable name | Description | Example value | ||||
This page shows the supported authentication types and client types of Azure Dat
| AZURE_MYSQL_PORT | Port number | `3306` | | AZURE_MYSQL_SSL | SSL option | `true` |
-### Python (mysql-connector-python)
-
-**Secret/ConnectionString**
+### Python (mysql-connector-python) secret / connection string
| Default environment variable name | Description | Example value | | | | |
This page shows the supported authentication types and client types of Azure Dat
| AZURE_MYSQL_PASSWORD | Database password | `****` | | AZURE_MYSQL_USER | Database Username | `{MySQLUsername}@{MySQLName}` |
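The variable table for this client type is truncated in this digest, so the sketch below assumes `AZURE_MYSQL_HOST` and `AZURE_MYSQL_NAME` are injected alongside the password and user variables shown above; treat those two names as assumptions.

```python
# Illustrative sketch: open a mysql-connector-python connection from the
# environment variables that Service Connector injects into the compute service.
import os
import mysql.connector

conn = mysql.connector.connect(
    host=os.environ["AZURE_MYSQL_HOST"],        # assumed variable name (truncated above)
    user=os.environ["AZURE_MYSQL_USER"],
    password=os.environ["AZURE_MYSQL_PASSWORD"],
    database=os.environ["AZURE_MYSQL_NAME"],    # assumed variable name (truncated above)
)
print(conn.is_connected())
conn.close()
```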
-### Python-Django
-
-**Secret/ConnectionString**
+### Python-Django secret / connection string
| Default environment variable name | Description | Example value | | | | |
This page shows the supported authentication types and client types of Azure Dat
| AZURE_MYSQL_PASSWORD | Database password | `****` | | AZURE_MYSQL_NAME | Database name | `MySQLDbName` | -
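For Django, these variables would typically be wired into the `DATABASES` setting. The sketch below is illustrative only; `AZURE_MYSQL_HOST` and `AZURE_MYSQL_USER` are assumed to be injected alongside the variables shown above, since the table is truncated in this digest.

```python
# Illustrative settings.py fragment: map Service Connector variables to Django's
# DATABASES setting for Azure Database for MySQL.
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "HOST": os.environ["AZURE_MYSQL_HOST"],        # assumed variable name
        "NAME": os.environ["AZURE_MYSQL_NAME"],
        "USER": os.environ["AZURE_MYSQL_USER"],        # assumed variable name
        "PASSWORD": os.environ["AZURE_MYSQL_PASSWORD"],
    }
}
```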
-### Go (go-sql-driver for mysql)
-
-**Secret/ConnectionString**
+### Go (go-sql-driver for mysql) secret / connection string
| Default environment variable name | Description | Example value | | | | | | AZURE_MYSQL_CONNECTIONSTRING | Go-sql-driver connection string | `{MySQLUsername}@{MySQLName}:{Password}@tcp({ServerHost}:{Port})/{Database}?tls=true` | -
-### PHP (mysqli)
-
-**Secret/ConnectionString**
+### PHP (mysqli) secret / connection string
| Default environment variable name | Description | Example value | ||||
This page shows the supported authentication types and client types of Azure Dat
| AZURE_MYSQL_PORT | Port number | `3306` | | AZURE_MYSQL_FLAG | SSL or other flags | `MYSQLI_CLIENT_SSL` |
-### Ruby (mysql2)
-
-**Secret/ConnectionString**
+### Ruby (mysql2) secret / connection string
| Default environment variable name | Description | Example value | ||||
service-connector How To Integrate Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-postgres.md
Previously updated : 05/03/2022 Last updated : 06/13/2022 # Integrate Azure Database for PostgreSQL with Service Connector
This page shows the supported authentication types and client types of Azure Dat
## Supported compute service - Azure App Service
+- Azure Container Apps
- Azure Spring Cloud
-## Supported Authentication types and client types
+## Supported authentication types and client types
-| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal |
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
| | | | | | | .NET (ADO.NET) | | | ![yes icon](./media/green-check.png) | | | Java (JDBC) | | | ![yes icon](./media/green-check.png) | |
This page shows the supported authentication types and client types of Azure Dat
## Default environment variable names or application properties
-### .NET (ADO.NET)
-
-**Secret/ConnectionString**
+### .NET (ADO.NET) secret / connection string
| Default environment variable name | Description | Example value | | | | | | AZURE_POSTGRESQL_CONNECTIONSTRING | ADO.NET PostgreSQL connection string | `Server={your-postgres-server-name}.postgres.database.azure.com;Database={database-name};Port=5432;Ssl Mode=Require;User Id={username}@{servername};Password=****;` |
-### Java (JDBC)
-
-**Secret/ConnectionString**
+### Java (JDBC) secret / connection string
| Default environment variable name | Description | Example value | | | | | | AZURE_POSTGRESQL_CONNECTIONSTRING | JDBC PostgreSQL connection string | `jdbc:postgresql://{your-postgres-server-name}.postgres.database.azure.com:5432/{database-name}?sslmode=require&user={username}%40{servername}l&password=****` |
-### Java - Spring Boot (JDBC)
-
-**Secret/ConnectionString**
+### Java - Spring Boot (JDBC) secret / connection string
| Application properties | Description | Example value | | | | |
This page shows the supported authentication types and client types of Azure Dat
| spring.datasource.username | Database username | `{username}@{servername}` | | spring.datasource.password | Database password | `****` |
-### Node.js (pg)
-
-**Secret/ConnectionString**
+### Node.js (pg) secret / connection string
| Default environment variable name | Description | Example value | ||||
This page shows the supported authentication types and client types of Azure Dat
| AZURE_POSTGRESQL_PORT | Port number | `5432` | | AZURE_POSTGRESQL_SSL | SSL option | `true` |
-### Python (psycopg2)
-
-**Secret/ConnectionString**
+### Python (psycopg2) secret / connection string
| Default environment variable name | Description | Example value | | | | | | AZURE_POSTGRESQL_CONNECTIONSTRING | psycopg2 connection string | `dbname={database-name} host={your-postgres-server-name}.postgres.database.azure.com port=5432 sslmode=require user={username}@{servername} password=****` |
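Since the example value above is a libpq keyword/value string, it can be passed directly to `psycopg2.connect()`. The sketch below is illustrative only and assumes the connection was created with the secret/connection string option.

```python
# Illustrative sketch: connect with the libpq-style connection string that
# Service Connector injects, then run a simple query.
import os
import psycopg2

conn = psycopg2.connect(os.environ["AZURE_POSTGRESQL_CONNECTIONSTRING"])
with conn.cursor() as cursor:
    cursor.execute("SELECT version();")
    print(cursor.fetchone())
conn.close()
```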
-### Python-Django
-
-**Secret/ConnectionString**
+### Python-Django secret / connection string
| Default environment variable name | Description | Example value | | | | |
This page shows the supported authentication types and client types of Azure Dat
| AZURE_POSTGRESQL_PASSWORD | Database password | `****` | | AZURE_POSTGRESQL_NAME | Database name | `{database-name}` | -
-### Go (pg)
-
-**Secret/ConnectionString**
+### Go (pg) secret / connection string
| Default environment variable name | Description | Example value | | | | | | AZURE_POSTGRESQL_CONNECTIONSTRING | Go pg connection string | `host={your-postgres-server-name}.postgres.database.azure.com dbname={database-name} sslmode=require user={username}@{servername} password=****` | -
-### PHP (native)
-
-**Secret/ConnectionString**
+### PHP (native) secret / connection string
| Default environment variable name | Description | Example value | | | | | | AZURE_POSTGRESQL_CONNECTIONSTRING | PHP native postgres connection string | `host={your-postgres-server-name}.postgres.database.azure.com port=5432 dbname={database-name} sslmode=require user={username}@{servername} password=****` |
-### Ruby (ruby-pg)
-
-**Secret/ConnectionString**
+### Ruby (ruby-pg) secret / connection string
| Default environment variable name | Description | Example value | | | | |
service-connector How To Integrate Redis Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-redis-cache.md
Previously updated : 05/03/2022 Last updated : 06/13/2022 # Integrate Azure Cache for Redis with Service Connector
This page shows the supported authentication types and client types of Azure Cac
## Supported compute service - Azure App Service
+- Azure Container Apps
- Azure Spring Cloud ## Supported Authentication types and client types
-| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal |
-| | | | | |
-| .NET (StackExchange.Redis) | | | ![yes icon](./media/green-check.png) | |
-| Java (Jedis) | | | ![yes icon](./media/green-check.png) | |
-| Java - Spring Boot (spring-boot-starter-data-redis) | | | ![yes icon](./media/green-check.png) | |
-| Node.js (node-redis) | | | ![yes icon](./media/green-check.png) | |
-| Python (redis-py) | | | ![yes icon](./media/green-check.png) | |
-| Go (go-redis) | | | ![yes icon](./media/green-check.png) | |
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|-|--|--|-|
+| .NET (StackExchange.Redis) | | | ![yes icon](./media/green-check.png) | |
+| Java (Jedis) | | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot (spring-boot-starter-data-redis) | | | ![yes icon](./media/green-check.png) | |
+| Node.js (node-redis) | | | ![yes icon](./media/green-check.png) | |
+| Python (redis-py) | | | ![yes icon](./media/green-check.png) | |
+| Go (go-redis) | | | ![yes icon](./media/green-check.png) | |
## Default environment variable names or application properties
-### .NET (StackExchange.Redis)
-
-**Secret/ConnectionString**
+### .NET (StackExchange.Redis) secret / connection string
| Default environment variable name | Description | Example value | | | | | | AZURE_REDIS_CONNECTIONSTRING | StackExchange.Redis connection string | `{redis-server}.redis.cache.windows.net:6380,password={redis-key},ssl=True,defaultDatabase=0` |
-### Java (Jedis)
-
-**Secret/ConnectionString**
+### Java (Jedis) secret / connection string
| Default environment variable name | Description | Example value | | | | | | AZURE_REDIS_CONNECTIONSTRING | Jedis connection string | `rediss://:{redis-key}@{redis-server}.redis.cache.windows.net:6380/0` |
-### Java - Spring Boot (spring-boot-starter-data-redis)
-
-**Secret/ConnectionString**
+### Java - Spring Boot (spring-boot-starter-data-redis) secret / connection string
| Application properties | Description | Example value | | | | |
This page shows the supported authentication types and client types of Azure Cac
| spring.redis.password | Redis key | `{redis-key}` | | spring.redis.ssl | SSL setting | `true` |
-### Node.js (node-redis)
-
-**Secret/ConnectionString**
+### Node.js (node-redis) secret / connection string
| Default environment variable name | Description | Example value | |||| | AZURE_REDIS_CONNECTIONSTRING | node-redis connection string | `rediss://:{redis-key}@{redis-server}.redis.cache.windows.net:6380/0` | -
-### Python (redis-py)
-
-**Secret/ConnectionString**
+### Python (redis-py) secret / connection string
| Default environment variable name | Description | Example value | |||| | AZURE_REDIS_CONNECTIONSTRING | redis-py connection string | `rediss://:{redis-key}@{redis-server}.redis.cache.windows.net:6380/0` |
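The `rediss://` URL shown above can be handed straight to redis-py's URL helper, which resolves the TLS scheme, password, and database index. The sketch below is illustrative only and assumes the `redis` package is installed.

```python
# Illustrative sketch: create a redis-py client from the connection string that
# Service Connector injects, then set and read a key.
import os
import redis

client = redis.from_url(os.environ["AZURE_REDIS_CONNECTIONSTRING"])
client.set("greeting", "hello from Service Connector")
print(client.get("greeting"))
```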
-### Go (go-redis)
-
-**Secret/ConnectionString**
+### Go (go-redis) secret / connection string
| Default environment variable name | Description | Example value | ||||
service-connector How To Integrate Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-service-bus.md
Previously updated : 05/03/2022 Last updated : 06/13/2022 # Integrate Service Bus with Service Connector
This page shows the supported authentication types and client types of Azure Ser
## Supported compute services - Azure App Service
+- Azure Container Apps
- Azure Spring Cloud ## Supported authentication types and client types
service-connector How To Integrate Signalr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-signalr.md
Previously updated : 5/25/2022 Last updated : 6/13/2022 - ignite-fall-2021 - kr2b-contr-experiment
This article shows the supported authentication types and client types of Azure
## Supported compute service - Azure App Service
+- Azure Container Apps
## Supported authentication types and client types
-| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal |
-| | | | | |
-| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|-|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
## Default environment variable names or application properties
service-connector How To Integrate Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-sql-database.md
Previously updated : 06/02/2022 Last updated : 06/13/2022 # Integrate Azure SQL Database with Service Connector
This page shows all the supported compute services, clients, and authentication
## Supported compute services - Azure App Service
+- Azure Container Apps
- Azure Spring Cloud ## Supported authentication types and clients
service-connector How To Integrate Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-blob.md
Previously updated : 05/03/2022 Last updated : 06/13/2022 # Integrate Azure Blob Storage with Service Connector
This page shows the supported authentication types and client types of Azure Blo
## Supported compute service - Azure App Service
+- Azure Container Apps
- Azure Spring Cloud
-## Supported Authentication types and client types
-
-| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal |
-| | | | | |
-| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
-| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+## Supported authentication types and client types
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
## Default environment variable names or application properties ### .NET, Java, Node.JS, Python
-**Secret/ConnectionString**
+#### Secret / connection string
| Default environment variable name | Description | Example value | | | | | | AZURE_STORAGEBLOB_CONNECTIONSTRING | Blob storage connection string | `DefaultEndpointsProtocol=https;AccountName={accountName};AccountKey={****};EndpointSuffix=core.windows.net` |
-**System-assigned Managed Identity**
+#### System-assigned managed identity
| Default environment variable name | Description | Example value | | | | | | AZURE_STORAGEBLOB_RESOURCEENDPOINT | Blob storage endpoint | `https://{storageAccountName}.blob.core.windows.net/` |
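With a system-assigned managed identity, only the endpoint variable is injected; the application authenticates at runtime with a credential such as `DefaultAzureCredential`. The sketch below is illustrative only and assumes the `azure-identity` and `azure-storage-blob` packages are installed.

```python
# Illustrative sketch: build a BlobServiceClient from the injected endpoint and
# the compute service's system-assigned managed identity, then list containers.
import os
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url=os.environ["AZURE_STORAGEBLOB_RESOURCEENDPOINT"],
    credential=DefaultAzureCredential(),
)
for container in service.list_containers():
    print(container.name)
```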
-**User-assigned Managed Identity**
+#### User-assigned managed identity
| Default environment variable name | Description | Example value | | | | | | AZURE_STORAGEBLOB_RESOURCEENDPOINT | Blob storage endpoint | `https://{storageAccountName}.blob.core.windows.net/` | | AZURE_STORAGEBLOB_CLIENTID | Your client ID | `{yourClientID}` |
-**Service Principal**
+#### Service principal
| Default environment variable name | Description | Example value | | | | |
This page shows the supported authentication types and client types of Azure Blo
### Java - Spring Boot
-**Secret/ConnectionString**
+#### Java - Spring Boot secret / connection string
| Application properties | Description | Example value | | | | |
service-connector How To Integrate Storage File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-file.md
Previously updated : 05/03/2022 Last updated : 06/13/2022 # Integrate Azure File Storage with Service Connector
This page shows the supported authentication types and client types of Azure Fil
## Supported compute service - Azure App Service
+- Azure Container Apps
- Azure Spring Cloud
-## Supported Authentication types and client types
-
-| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal |
-| | | | | |
-| .NET | | | ![yes icon](./media/green-check.png) | |
-| Java | | | ![yes icon](./media/green-check.png) | |
-| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
-| Node.js | | | ![yes icon](./media/green-check.png) | |
-| Python | | | ![yes icon](./media/green-check.png) | |
-| PHP | | | ![yes icon](./media/green-check.png) | |
-| Ruby | | | ![yes icon](./media/green-check.png) | |
-
+## Supported authentication types and client types
+| Client Type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|-|--|--|-|
+| .NET | | | ![yes icon](./media/green-check.png) | |
+| Java | | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
+| Node.js | | | ![yes icon](./media/green-check.png) | |
+| Python | | | ![yes icon](./media/green-check.png) | |
+| PHP | | | ![yes icon](./media/green-check.png) | |
+| Ruby | | | ![yes icon](./media/green-check.png) | |
## Default environment variable names or application properties
-### .NET, Java, Node.JS, Python, PHP and Ruby
-
-**Secret/ConnectionString**
+### .NET, Java, Node.JS, Python, PHP and Ruby secret / connection string
| Default environment variable name | Description | Example value |
| | | |
| AZURE_STORAGEFILE_CONNECTIONSTRING | File storage connection string | `DefaultEndpointsProtocol=https;AccountName={accountName};AccountKey={****};EndpointSuffix=core.windows.net` |
-
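As a sketch only (assuming the `azure-storage-file-share` package), an app could consume the variable above like this; the share name is hypothetical:

```python
import os

from azure.storage.fileshare import ShareServiceClient

# Connection string injected by Service Connector (see the table above).
conn_str = os.environ["AZURE_STORAGEFILE_CONNECTIONSTRING"]
service = ShareServiceClient.from_connection_string(conn_str)

# "sample-share" is a hypothetical file share name.
share = service.get_share_client("sample-share")
print(share.get_share_properties())
```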
-### Java - Spring Boot
-
-**Secret/ConnectionString**
+### Java - Spring Boot secret / connection string
| Application properties | Description | Example value |
| | | |
This page shows the supported authentication types and client types of Azure Fil
| azure.storage.account-key | File storage account key | `{yourSecret}` |
| azure.storage.file-endpoint | File storage endpoint | `https://{storageAccountName}.file.core.windows.net/` |
-

## Next steps

Follow the tutorials listed below to learn more about Service Connector.
service-connector How To Integrate Storage Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-queue.md
Previously updated : 05/03/2022 Last updated : 06/13/2022

# Integrate Azure Queue Storage with Service Connector
This page shows the supported authentication types and client types of Azure Que
## Supported compute service
- Azure App Service
+- Azure Container Apps
- Azure Spring Cloud
-## Supported Authentication types and client types
+## Supported authentication types and client types
-| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal |
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
| | | | | |
| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
This page shows the supported authentication types and client types of Azure Que
| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-

## Default environment variable names or application properties

### .NET, Java, Node.JS, Python
-**Secret/ConnectionString**
+#### Secret / connection string
| Default environment variable name | Description | Example value |
| | | |
| AZURE_STORAGEQUEUE_CONNECTIONSTRING | Queue storage connection string | `DefaultEndpointsProtocol=https;AccountName={accountName};AccountKey={****};EndpointSuffix=core.windows.net` |
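A minimal Python sketch, assuming the `azure-storage-queue` package; the queue name and message are hypothetical:

```python
import os

from azure.storage.queue import QueueClient

# Connection string injected by Service Connector (see the table above).
conn_str = os.environ["AZURE_STORAGEQUEUE_CONNECTIONSTRING"]

# "sample-queue" is a hypothetical queue name.
queue = QueueClient.from_connection_string(conn_str, queue_name="sample-queue")
queue.send_message("hello from Service Connector")
```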
-**System-assigned Managed Identity**
+#### System-assigned managed identity
| Default environment variable name | Description | Example value |
| | | |
| AZURE_STORAGEQUEUE_RESOURCEENDPOINT | Queue storage endpoint | `https://{StorageAccountName}.queue.core.windows.net/` |
-**User-assigned Managed Identity**
+#### User-assigned managed identity
| Default environment variable name | Description | Example value |
| | | |
| AZURE_STORAGEQUEUE_RESOURCEENDPOINT | Queue storage endpoint | `https://{storageAccountName}.queue.core.windows.net/` |
| AZURE_STORAGEQUEUE_CLIENTID | Your client ID | `{yourClientID}` |
-**Service Principal**
+#### Service principal
| Default environment variable name | Description | Example value |
| | | |
This page shows the supported authentication types and client types of Azure Que
### Java - Spring Boot
-**Secret/ConnectionString**
+#### Java - Spring Boot secret / connection string
| Application properties | Description | Example value |
| | | |
service-connector How To Integrate Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-table.md
Previously updated : 05/03/2022 Last updated : 06/13/2022

# Integrate Azure Table Storage with Service Connector
This page shows the supported authentication types and client types of Azure Tab
## Supported compute service
- Azure App Service
+- Azure Container Apps
- Azure Spring Cloud
-## Supported Authentication types and client types
-
-| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal |
-| | | | | |
-| .NET | | | ![yes icon](./media/green-check.png) | |
-| Java | | | ![yes icon](./media/green-check.png) | |
-| Node.js | | | ![yes icon](./media/green-check.png) | |
-| Python | | | ![yes icon](./media/green-check.png) | |
+## Supported authentication types and client types
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|-|-|--|--|-|
+| .NET | | | ![yes icon](./media/green-check.png) | |
+| Java | | | ![yes icon](./media/green-check.png) | |
+| Node.js | | | ![yes icon](./media/green-check.png) | |
+| Python | | | ![yes icon](./media/green-check.png) | |
## Default environment variable names or application properties
-### .NET, Java, Node.JS and Python
-
-**Secret/ConnectionString**
+### .NET, Java, Node.JS and Python secret / connection string
| Default environment variable name | Description | Example value |
| | | |
| AZURE_STORAGETABLE_CONNECTIONSTRING | Table storage connection string | `DefaultEndpointsProtocol=https;AccountName={accountName};AccountKey={****};EndpointSuffix=core.windows.net` |
-

## Next steps

Follow the tutorials listed below to learn more about Service Connector.
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Azure Site Recovery allows you to perform global disaster recovery. You can repl
America | Canada East, Canada Central, South Central US, West Central US, East US, East US 2, West US, West US 2, West US 3, Central US, North Central US
Europe | UK West, UK South, North Europe, West Europe, South Africa West, South Africa North, Norway East, France Central, Switzerland North, Germany West Central, UAE North (UAE is treated as part of the Europe geo cluster)
Asia | South India, Central India, West India, Southeast Asia, East Asia, Japan East, Japan West, Korea Central, Korea South
-JIO | JIO India West
+JIO | JIO India West<br/><br/>Replication can't be done between JIO and non-JIO regions for virtual machines in JIO subscriptions, because JIO subscriptions can have resources only in JIO regions.
Australia | Australia East, Australia Southeast, Australia Central, Australia Central 2
Azure Government | US GOV Virginia, US GOV Iowa, US GOV Arizona, US GOV Texas, US DOD East, US DOD Central
Germany | Germany Central, Germany Northeast
site-recovery Deploy Vmware Azure Replication Appliance Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/deploy-vmware-azure-replication-appliance-preview.md
Last updated 09/01/2021
> Ensure you create a new and exclusive Recovery Services vault for setting up the preview appliance. Don't use an existing vault.

>[!NOTE]
-> Enabling replication for physical machines is not supported with this preview.
+> Enabling replication for physical machines is not supported with this preview.
You deploy an on-premises replication appliance when you use [Azure Site Recovery](site-recovery-overview.md) for disaster recovery of VMware VMs to Azure.
Exclude following folders from Antivirus software for smooth replication and to
C:\ProgramData\Microsoft Azure <br> C:\ProgramData\ASRLogs <br>
-C:\Windows\Temp\MicrosoftAzure
+C:\Windows\Temp\MicrosoftAzure
C:\Program Files\Microsoft Azure Appliance Auto Update <br> C:\Program Files\Microsoft Azure Appliance Configuration Manager <br> C:\Program Files\Microsoft Azure Push Install Agent <br>
C:\Program Files\Microsoft Azure Recovery Services Agent <br>
C:\Program Files\Microsoft Azure Server Discovery Service <br> C:\Program Files\Microsoft Azure Site Recovery Process Server <br> C:\Program Files\Microsoft Azure Site Recovery Provider <br>
-C:\Program Files\Microsoft Azure to On-Premise Reprotect agent <br>
+C:\Program Files\Microsoft Azure to On-Premises Reprotect agent <br>
C:\Program Files\Microsoft Azure VMware Discovery Service <br> C:\Program Files\Microsoft On-Premise to Azure Replication agent <br> E:\ <br>
The OVF template spins up a machine with the required specifications.
### Set up the appliance through PowerShell

>[!NOTE]
-> Enabling replication for physical machines is not supported with this preview.
+> Enabling replication for physical machines is not supported with this preview.
In case of any organizational restrictions, you can manually set up the Site Recovery replication appliance through PowerShell. Follow these steps:
In case of any organizational restrictions, you can manually set up the Site Rec
2. After successfully copying the zip folder, unzip it and extract its components.
3. Go to the path where the folder was extracted and run the following PowerShell script as an administrator:
- **DRInstaller.ps1**
+ **DRInstaller.ps1**
## Register appliance

Once you create the appliance, the Microsoft Azure appliance configuration manager launches automatically. Prerequisites such as internet connectivity, time sync, system configurations, and group policies (listed below) are validated.
site-recovery Hyper V Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-support-matrix.md
Hyper-V (running without Virtual Machine Manager) | Windows Server 2022 (Server
Hyper-V (running with Virtual Machine Manager) | Virtual Machine Manager 2022 (Server core not supported), Virtual Machine Manager 2019, Virtual Machine Manager 2016, Virtual Machine Manager 2012 R2 <br/><br/> **Note:** Server core installations of these operating systems are also supported. | If Virtual Machine Manager is used, Windows Server 2019 hosts should be managed in Virtual Machine Manager 2019. Similarly, Windows Server 2016 hosts should be managed in Virtual Machine Manager 2016.

> [!NOTE]
-> Ensure that .NET Framework 4.6.2 or higher is present on the on-premise server.
+> Ensure that .NET Framework 4.6.2 or higher is present on the on-premises server.
## Replicated VMs
UEFI Secure boot | No | No
| |
Availability sets | Yes | Yes
Availability zones | No | No
-HUB | Yes | Yes
+HUB | Yes | Yes
Managed disks | Yes, for failover.<br/><br/> Failback of managed disks isn't supported. | Yes, for failover.<br/><br/> Failback of managed disks isn't supported.

## Azure VM requirements
storage Data Lake Storage Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-best-practices.md
# Best practices for using Azure Data Lake Storage Gen2
-This article provides best practice guidelines that help you optimize performance, reduce costs, and secure your Data Lake Storage Gen2 enabled Azure Storage account.
+This article provides best practice guidelines that help you optimize performance, reduce costs, and secure your Data Lake Storage Gen2 enabled Azure Storage account.
For general suggestions around structuring a data lake, see these articles:
For general suggestions around structuring a data lake, see these articles:
## Find documentation
-Azure Data Lake Storage Gen2 is not a dedicated service or account type. It's a set of capabilities that support high throughput analytic workloads. The Data Lake Storage Gen2 documentation provides best practices and guidance for using these capabilities. Refer to the [Blob storage documentation](storage-blobs-introduction.md) content, for all other aspects of account management such as setting up network security, designing for high availability, and disaster recovery.
+Azure Data Lake Storage Gen2 is not a dedicated service or account type. It's a set of capabilities that support high throughput analytic workloads. The Data Lake Storage Gen2 documentation provides best practices and guidance for using these capabilities. Refer to the [Blob storage documentation](storage-blobs-introduction.md) for all other aspects of account management, such as setting up network security, designing for high availability, and disaster recovery.
#### Evaluate feature support and known issues
Use the following pattern as you configure your account to use Blob storage feat
2. Review the [Known issues with Azure Data Lake Storage Gen2](data-lake-storage-known-issues.md) article to see if there are any limitations or special guidance around the feature you intend to use.
-3. Scan feature articles for any guidance that is specific to Data Lake Storage Gen2 enabled accounts.
+3. Scan feature articles for any guidance that is specific to Data Lake Storage Gen2 enabled accounts.
#### Understand the terms used in documentation
If your storage account is going to be used for analytics, we highly recommend t
## Optimize for data ingest
-When ingesting data from a source system, the source hardware, source network hardware, or the network connectivity to your storage account can be a bottleneck.
+When ingesting data from a source system, the source hardware, source network hardware, or the network connectivity to your storage account can be a bottleneck.
![Diagram that shows the factors to consider when ingesting data from a source system to Data Lake Storage Gen2.](./media/data-lake-storage-best-practices/bottleneck.png)

### Source hardware
-Whether you are using on-premise machines or Virtual Machines (VMs) in Azure, make sure to carefully select the appropriate hardware. For disk hardware, consider using Solid State Drives (SSD) and pick disk hardware that has faster spindles. For network hardware, use the fastest Network Interface Controllers (NIC) as possible. On Azure, we recommend Azure D14 VMs, which have the appropriately powerful disk and networking hardware.
+Whether you are using on-premises machines or Virtual Machines (VMs) in Azure, make sure to carefully select the appropriate hardware. For disk hardware, consider using Solid State Drives (SSD) and pick disk hardware that has faster spindles. For network hardware, use the fastest Network Interface Controllers (NIC) possible. On Azure, we recommend Azure D14 VMs, which have the appropriately powerful disk and networking hardware.
### Network connectivity to the storage account
To achieve the best performance, use all available throughput by performing as m
![Data Lake Storage Gen2 performance](./media/data-lake-storage-best-practices/throughput.png)
-The following table summarizes the key settings for several popular ingestion tools.
+The following table summarizes the key settings for several popular ingestion tools.
-| Tool | Settings |
+| Tool | Settings |
|--|--|
| [DistCp](data-lake-storage-use-distcp.md#performance-considerations-while-using-distcp) | -m (mapper) |
-| [Azure Data Factory](../../data-factory/copy-activity-performance.md) | parallelCopies |
-| [Sqoop](/archive/blogs/shanyu/performance-tuning-for-hdinsight-storm-and-microsoft-azure-eventhubs) | fs.azure.block.size, -m (mapper) |
+| [Azure Data Factory](../../data-factory/copy-activity-performance.md) | parallelCopies |
+| [Sqoop](/archive/blogs/shanyu/performance-tuning-for-hdinsight-storm-and-microsoft-azure-eventhubs) | fs.azure.block.size, -m (mapper) |
> [!NOTE]
> The overall performance of your ingest operations depends on other factors that are specific to the tool that you're using to ingest data. For the best up-to-date guidance, see the documentation for each tool that you intend to use.
Your account can scale to provide the necessary throughput for all analytics sce
## Structure data sets
-Consider pre-planning the structure of your data. File format, file size, and directory structure can all impact performance and cost.
+Consider pre-planning the structure of your data. File format, file size, and directory structure can all impact performance and cost.
### File formats
-Data can be ingested in various formats. Data can be appear in human readable formats such as JSON, CSV, or XML or as compressed binary formats such as `.tar.gz`. Data can come in various sizes as well. Data can be composed of large files (a few terabytes) such as data from an export of a SQL table from your on-premise systems. Data can also come in the form of a large number of tiny files (a few kilobytes) such as data from real-time events from an Internet of things (IoT) solution. You can optimize efficiency and costs by choosing an appropriate file format and file size.
+Data can be ingested in various formats. Data can appear in human readable formats such as JSON, CSV, or XML, or as compressed binary formats such as `.tar.gz`. Data can come in various sizes as well. Data can be composed of large files (a few terabytes) such as data from an export of a SQL table from your on-premises systems. Data can also come in the form of a large number of tiny files (a few kilobytes) such as data from real-time events from an Internet of things (IoT) solution. You can optimize efficiency and costs by choosing an appropriate file format and file size.
Hadoop supports a set of file formats that are optimized for storing and processing structured data. Some common formats are Avro, Parquet, and Optimized Row Columnar (ORC) format. All of these formats are machine-readable binary file formats. They are compressed to help you manage file size. They have a schema embedded in each file, which makes them self-describing. The difference between these formats is in how data is stored. Avro stores data in a row-based format and the Parquet and ORC formats store data in a columnar format.
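To make the column-oriented idea concrete, here is a small hedged sketch (assuming the `pyarrow` package is available; it is not part of the original article) that writes a few sample records as Parquet:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# A tiny table of sample records; the column names are illustrative only.
table = pa.table({
    "device_id": ["a1", "a2", "a3"],
    "reading": [21.4, 19.8, 22.1],
})

# Parquet stores these values column by column and compresses them,
# which is what makes the format efficient for read-heavy analytics.
pq.write_table(table, "readings.parquet")
```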
Apache Parquet is an open source file format that is optimized for read heavy an
### File size
-Larger files lead to better performance and reduced costs.
+Larger files lead to better performance and reduced costs.
-Typically, analytics engines such as HDInsight have a per-file overhead that involves tasks such as listing, checking access, and performing various metadata operations. If you store your data as many small files, this can negatively affect performance. In general, organize your data into larger sized files for better performance (256 MB to 100 GB in size). Some engines and applications might have trouble efficiently processing files that are greater than 100 GB in size.
+Typically, analytics engines such as HDInsight have a per-file overhead that involves tasks such as listing, checking access, and performing various metadata operations. If you store your data as many small files, this can negatively affect performance. In general, organize your data into larger sized files for better performance (256 MB to 100 GB in size). Some engines and applications might have trouble efficiently processing files that are greater than 100 GB in size.
Increasing file size can also reduce transaction costs. Read and write operations are billed in 4 megabyte increments, so you're charged for the operation whether the file contains 4 megabytes or only a few kilobytes. For pricing information, see [Azure Data Lake Storage pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/).
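As a rough, hedged illustration of that billing rule, a few lines of Python show why fewer, larger files translate into fewer billed write operations; the file sizes below are illustrative, not recommendations:

```python
import math

FOUR_MB = 4 * 1024 * 1024

def billed_write_ops(file_size_bytes: int) -> int:
    # Each write is billed in 4 MB increments; even a tiny file costs one operation.
    return max(1, math.ceil(file_size_bytes / FOUR_MB))

one_tb = 1024 ** 4
small, large = 10 * 1024, 256 * 1024 * 1024  # 10 KB files vs 256 MB files

ops_small = (one_tb // small) * billed_write_ops(small)
ops_large = (one_tb // large) * billed_write_ops(large)

print(f"1 TB as 10 KB files : {ops_small:,} billed write operations")
print(f"1 TB as 256 MB files: {ops_large:,} billed write operations")
```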
-Sometimes, data pipelines have limited control over the raw data, which has lots of small files. In general, we recommend that your system have some sort of process to aggregate small files into larger ones for use by downstream applications. If you're processing data in real time, you can use a real time streaming engine (such as [Azure Stream Analytics](../../stream-analytics/stream-analytics-introduction.md) or [Spark Streaming](https://databricks.com/glossary/what-is-spark-streaming)) together with a message broker (such as [Event Hub](../../event-hubs/event-hubs-about.md) or [Apache Kafka](https://kafka.apache.org/)) to store your data as larger files. As you aggregate small files into larger ones, consider saving them in a read-optimized format such as [Apache Parquet](https://parquet.apache.org/) for downstream processing.
+Sometimes, data pipelines have limited control over the raw data, which has lots of small files. In general, we recommend that your system have some sort of process to aggregate small files into larger ones for use by downstream applications. If you're processing data in real time, you can use a real time streaming engine (such as [Azure Stream Analytics](../../stream-analytics/stream-analytics-introduction.md) or [Spark Streaming](https://databricks.com/glossary/what-is-spark-streaming)) together with a message broker (such as [Event Hub](../../event-hubs/event-hubs-about.md) or [Apache Kafka](https://kafka.apache.org/)) to store your data as larger files. As you aggregate small files into larger ones, consider saving them in a read-optimized format such as [Apache Parquet](https://parquet.apache.org/) for downstream processing.
### Directory structure
In the common case of batch data being processed directly into databases such as
#### Time series data structure
-For Hive workloads, partition pruning of time-series data can help some queries read only a subset of the data, which improves performance.
+For Hive workloads, partition pruning of time-series data can help some queries read only a subset of the data, which improves performance.
Those pipelines that ingest time-series data, often place their files with a structured naming for files and folders. Below is a common example we see for data that is structured by date:
Again, the choice you make with the folder and file organization should optimize
## Set up security
-Start by reviewing the recommendations in the [Security recommendations for Blob storage](security-recommendations.md) article. You'll find best practice guidance about how to protect your data from accidental or malicious deletion, secure data behind a firewall, and use Azure Active Directory (Azure AD) as the basis of identity management.
+Start by reviewing the recommendations in the [Security recommendations for Blob storage](security-recommendations.md) article. You'll find best practice guidance about how to protect your data from accidental or malicious deletion, secure data behind a firewall, and use Azure Active Directory (Azure AD) as the basis of identity management.
-Then, review the [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control-model.md) article for guidance that is specific to Data Lake Storage Gen2 enabled accounts. This article helps you understand how to use Azure role-based access control (Azure RBAC) roles together with access control lists (ACLs) to enforce security permissions on directories and files in your hierarchical file system.
+Then, review the [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control-model.md) article for guidance that is specific to Data Lake Storage Gen2 enabled accounts. This article helps you understand how to use Azure role-based access control (Azure RBAC) roles together with access control lists (ACLs) to enforce security permissions on directories and files in your hierarchical file system.
## Ingest, process, and analyze
-There are many different sources of data and different ways in which that data can be ingested into a Data Lake Storage Gen2 enabled account.
+There are many different sources of data and different ways in which that data can be ingested into a Data Lake Storage Gen2 enabled account.
-For example, you can ingest large sets of data from HDInsight and Hadoop clusters or smaller sets of *ad hoc* data for prototyping applications. You can ingest streamed data that is generated by various sources such as applications, devices, and sensors. For this type of data, you can use tools to capture and process the data on an event-by-event basis in real time, and then write the events in batches into your account. You can also ingest web server which contain information such as the history of page requests. For log data, consider writing custom scripts or applications to upload them so that you'll have the flexibility to include your data uploading component as part of your larger big data application.
+For example, you can ingest large sets of data from HDInsight and Hadoop clusters or smaller sets of *ad hoc* data for prototyping applications. You can ingest streamed data that is generated by various sources such as applications, devices, and sensors. For this type of data, you can use tools to capture and process the data on an event-by-event basis in real time, and then write the events in batches into your account. You can also ingest web server logs, which contain information such as the history of page requests. For log data, consider writing custom scripts or applications to upload them so that you'll have the flexibility to include your data uploading component as part of your larger big data application.
-Once the data is available in your account, you can run analysis on that data, create visualizations, and even download data to your local machine or to other repositories such as an Azure SQL database or SQL Server instance.
+Once the data is available in your account, you can run analysis on that data, create visualizations, and even download data to your local machine or to other repositories such as an Azure SQL database or SQL Server instance.
-The following table recommends tools that you can use to ingest, analyze, visualize, and download data. Use the links in this table to find guidance about how to configure and use each tool.
+The following table recommends tools that you can use to ingest, analyze, visualize, and download data. Use the links in this table to find guidance about how to configure and use each tool.
| Purpose | Tools & Tool guidance | |||
The following table recommends tools that you can use to ingest, analyze, visual
| Download data | Azure portal, [PowerShell](data-lake-storage-directory-file-acl-powershell.md), [Azure CLI](data-lake-storage-directory-file-acl-cli.md), [REST](/rest/api/storageservices/data-lake-storage-gen2), Azure SDKs ([.NET](data-lake-storage-directory-file-acl-dotnet.md), [Java](data-lake-storage-directory-file-acl-java.md), [Python](data-lake-storage-directory-file-acl-python.md), and [Node.js](data-lake-storage-directory-file-acl-javascript.md)), [Azure Storage Explorer](data-lake-storage-explorer.md), [AzCopy](../common/storage-use-azcopy-v10.md#transfer-data), [Azure Data Factory](../../data-factory/copy-activity-overview.md), [Apache DistCp](./data-lake-storage-use-distcp.md) | > [!NOTE]
-> This table doesn't reflect the complete list of Azure services that support Data Lake Storage Gen2. To see a list of supported Azure services, their level of support, see [Azure services that support Azure Data Lake Storage Gen2](data-lake-storage-supported-azure-services.md).
+> This table doesn't reflect the complete list of Azure services that support Data Lake Storage Gen2. To see a list of supported Azure services and their level of support, see [Azure services that support Azure Data Lake Storage Gen2](data-lake-storage-supported-azure-services.md).
## Monitor telemetry
-Monitoring use and performance is an important part of operationalizing your service. Examples include frequent operations, operations with high latency, or operations that cause service-side throttling.
+Monitoring use and performance is an important part of operationalizing your service. Examples include frequent operations, operations with high latency, or operations that cause service-side throttling.
All of the telemetry for your storage account is available through [Azure Storage logs in Azure Monitor](monitor-blob-storage.md). This feature integrates your storage account with Log Analytics and Event Hubs, while also enabling you to archive logs to another storage account. To see the full list of metrics and resources logs and their associated schema, see [Azure Storage monitoring data reference](monitor-blob-storage-reference.md).
storage Data Lake Storage Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-known-issues.md
The setting for retention days is not yet supported, but you can delete logs man
Currently, the WASB driver, which was designed to work with the Blob API only, encounters problems in a few common scenarios, specifically when it is a client to a hierarchical namespace enabled storage account. Multi-protocol access on Data Lake Storage won't mitigate these issues.
-Using the WASB driver as a client to a hierarchical namespace enabled storage account is not supported. Instead, we recommend that you use the [Azure Blob File System (ABFS)](data-lake-storage-abfs-driver.md) driver in your Hadoop environment. If you are trying to migrate off of an on-premise Hadoop environment with a version earlier than Hadoop branch-3, then please open an Azure Support ticket so that we can get in touch with you on the right path forward for you and your organization.
+Using the WASB driver as a client to a hierarchical namespace enabled storage account is not supported. Instead, we recommend that you use the [Azure Blob File System (ABFS)](data-lake-storage-abfs-driver.md) driver in your Hadoop environment. If you are trying to migrate off of an on-premises Hadoop environment with a version earlier than Hadoop branch-3, then please open an Azure Support ticket so that we can get in touch with you on the right path forward for you and your organization.
## Soft delete for blobs capability
storage Storage Blob Block Blob Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-block-blob-premium.md
# Premium block blob storage accounts
-Premium block blob storage accounts make data available via high-performance hardware. Data is stored on solid-state drives (SSDs) which are optimized for low latency. SSDs provide higher throughput compared to traditional hard drives. File transfer is much faster because data is stored on instantly accessible memory chips. All parts of a drive accessible at once. By contrast, the performance of a hard disk drive (HDD) depends on the proximity of data to the read/write heads.
+Premium block blob storage accounts make data available via high-performance hardware. Data is stored on solid-state drives (SSDs), which are optimized for low latency. SSDs provide higher throughput compared to traditional hard drives. File transfer is much faster because data is stored on instantly accessible memory chips. All parts of a drive are accessible at once. By contrast, the performance of a hard disk drive (HDD) depends on the proximity of data to the read/write heads.
## High performance workloads

Premium block blob storage accounts are ideal for workloads that require fast and consistent response times and/or have a high number of input output operations per second (IOPS). Example workloads include:

-- **Interactive workloads**. Highly interactive and real-time applications must write data quickly. E-commerce and mapping applications often require instant updates and user feedback. For example, in an e-commerce application, less frequently viewed items are likely not cached. However, they must be instantly displayed to the customer on demand. Interactive editing or multi-player online gaming applications maintain a quality experience by providing real-time updates.
+- **Interactive workloads**. Highly interactive and real-time applications must write data quickly. E-commerce and mapping applications often require instant updates and user feedback. For example, in an e-commerce application, less frequently viewed items are likely not cached. However, they must be instantly displayed to the customer on demand. Interactive editing or multi-player online gaming applications maintain a quality experience by providing real-time updates.
- **IoT/ streaming analytics**. In an IoT scenario, lots of smaller write operations might be pushed to the cloud every second. Large amounts of data might be taken in, aggregated for analysis purposes, and then deleted almost immediately. The high ingestion capabilities of premium block blob storage make it efficient for this type of workload.
- **Artificial intelligence/machine learning (AI/ML)**. AI/ML deals with the consumption and processing of different data types like visuals, speech, and text. This high-performance computing type of workload deals with large amounts of data that requires rapid response and efficient ingestion times for data analysis.

## Cost effectiveness
-
+ Premium block blob storage accounts have a higher storage cost but a lower transaction cost as compared to standard general-purpose v2 accounts. If your applications and workloads execute a large number of transactions, premium block blob storage can be cost-effective, especially if the workload is write-heavy.
-In most cases, workloads executing more than 35 to 40 transactions per second per terabyte (TPS/TB) are good candidates for this type of account. For example, if your workload executes 500 million read operations and 100 million write operations in a month, then you can calculate the TPS/TB as follows:
+In most cases, workloads executing more than 35 to 40 transactions per second per terabyte (TPS/TB) are good candidates for this type of account. For example, if your workload executes 500 million read operations and 100 million write operations in a month, then you can calculate the TPS/TB as follows:
-- Write transactions per second = 100,000,000 / (30 x 24 x 60 x 60) = **39** (_rounded to the nearest whole number_)
+- Write transactions per second = 100,000,000 / (30 x 24 x 60 x 60) = **39** (_rounded to the nearest whole number_)
- Read transactions per second = 500,000,000 / (30 x 24 x 60 x 60) = **193** (_rounded to the nearest whole number_)
-- Total transactions per second = **193** + **39** = **232**
+- Total transactions per second = **193** + **39** = **232**
-- Assuming your account had **5TB** data on average, then TPS/TB would be **230 / 5** = **46**.
+- Assuming your account had **5 TB** of data on average, then TPS/TB would be **232 / 5** = **46** (see the short sketch after this list).
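A short sketch that reproduces the arithmetic above; the monthly operation counts and the 5 TB figure are the example values from this section, not a general rule:

```python
# Example figures from the calculation above.
monthly_reads = 500_000_000
monthly_writes = 100_000_000
average_tb_stored = 5

seconds_per_month = 30 * 24 * 60 * 60  # 2,592,000

read_tps = round(monthly_reads / seconds_per_month)    # ~193
write_tps = round(monthly_writes / seconds_per_month)  # ~39
tps_per_tb = (read_tps + write_tps) / average_tb_stored

# Workloads above roughly 35 to 40 TPS/TB are candidates for premium block blobs.
print(f"TPS/TB ~ {tps_per_tb:.0f}")
```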
> [!NOTE]
-> Prices differ per operation and per region. Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to compare pricing between standard and premium performance tiers.
+> Prices differ per operation and per region. Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to compare pricing between standard and premium performance tiers.
The following table demonstrates the cost-effectiveness of premium block blob storage accounts. The numbers in this table are based on an Azure Data Lake Storage Gen2 enabled premium block blob storage account (also referred to as the [premium tier for Azure Data Lake Storage](premium-tier-for-data-lake-storage.md)). Each column represents the number of transactions in a month. Each row represents the percentage of transactions that are read transactions. Each cell in the table shows the percentage of cost reduction associated with a read transaction percentage and the number of transactions executed.
For example, assuming that your account is in the East US 2 region, the number o
> ![Performance table](./media/storage-blob-performance-tiers/premium-performance-data-lake-storage-cost-analysis-table.png)

> [!NOTE]
-> If you prefer to evaluate cost effectiveness based on the number of transactions per second for each TB of data, you can use the column headings that appear at the bottom of the table.
+> If you prefer to evaluate cost effectiveness based on the number of transactions per second for each TB of data, you can use the column headings that appear at the bottom of the table.
## Premium scenarios
-This section contains real-world examples of how some of our Azure Storage partners use premium block blob storage. Some of them also enable Azure Data Lake Storage Gen2 which introduces a hierarchical file structure that can further enhance transaction performance in certain scenarios.
+This section contains real-world examples of how some of our Azure Storage partners use premium block blob storage. Some of them also enable Azure Data Lake Storage Gen2 which introduces a hierarchical file structure that can further enhance transaction performance in certain scenarios.
> [!TIP]
-> If you have an analytics use case, we highly recommend that you use Azure Data Lake Storage Gen2 along with a premium block blob storage account.
+> If you have an analytics use case, we highly recommend that you use Azure Data Lake Storage Gen2 along with a premium block blob storage account.
This section contains the following examples:

-- [Fast data hydration](#fast-data-hydration)
-- [Interactive editing applications](#interactive-editing-applications)
-- [Data visualization software](#data-visualization-software)
-- [E-commerce businesses](#e-commerce-businesses)
-- [Interactive analytics](#interactive-analytics)
-- [Data processing pipelines](#data-processing-pipelines)
-- [Internet of Things (IoT)](#internet-of-things-iot)
-- [Machine Learning](#machine-learning)
-- [Real-time streaming analytics](#real-time-streaming-analytics)
+- [Premium block blob storage accounts](#premium-block-blob-storage-accounts)
+ - [High performance workloads](#high-performance-workloads)
+ - [Cost effectiveness](#cost-effectiveness)
+ - [Premium scenarios](#premium-scenarios)
+ - [Fast data hydration](#fast-data-hydration)
+ - [Interactive editing applications](#interactive-editing-applications)
+ - [Data visualization software](#data-visualization-software)
+ - [E-commerce businesses](#e-commerce-businesses)
+ - [Interactive analytics](#interactive-analytics)
+ - [Data processing pipelines](#data-processing-pipelines)
+ - [Internet of Things (IoT)](#internet-of-things-iot)
+ - [Machine Learning](#machine-learning)
+ - [Real-time streaming analytics](#real-time-streaming-analytics)
+ - [Getting started with premium](#getting-started-with-premium)
+ - [Check for Blob Storage feature compatibility](#check-for-blob-storage-feature-compatibility)
+ - [Create a new Storage account](#create-a-new-storage-account)
+ - [See also](#see-also)
### Fast data hydration
-Premium block blob storage can help you *hydrate* or bring up your environment quickly. In industries such as banking, certain regulatory requirements might require companies to regularly tear down their environments, and then bring them back up from scratch. The data used to hydrate their environment must load quickly.
+Premium block blob storage can help you *hydrate* or bring up your environment quickly. In industries such as banking, certain regulatory requirements might require companies to regularly tear down their environments, and then bring them back up from scratch. The data used to hydrate their environment must load quickly.
Some of our partners store a copy of their MongoDB instance each week to a premium block blob storage account. The system is then torn down. To get the system back online quickly again, the latest copy of the MongoDB instance is read and loaded. For audit purposes, previous copies are maintained in cloud storage for a period of time.

### Interactive editing applications
-In applications where multiple users edit the same content, the speed of updates becomes critical for a smooth user experience.
+In applications where multiple users edit the same content, the speed of updates becomes critical for a smooth user experience.
-Some of our partners develop video editing software. Any update that a user makes to a video is immediately visible to other users. Users can focus on their tasks instead of waiting for content updates to appear. The low latencies associated with premium block blob storage helps to create this seamless and collaborative experience.
+Some of our partners develop video editing software. Any update that a user makes to a video is immediately visible to other users. Users can focus on their tasks instead of waiting for content updates to appear. The low latencies associated with premium block blob storage help to create this seamless and collaborative experience.
### Data visualization software
-Users can be far more productive with data visualization software if rendering time is quick.
-
-We've seen companies in the mapping industry use mapping editors to detect issues with maps. These editors use data that is generated from customer Global Positioning System (GPS) data. To create map overlaps, the editing software renders small sections of a map by quickly performing key lookups.
+Users can be far more productive with data visualization software if rendering time is quick.
+
+We've seen companies in the mapping industry use mapping editors to detect issues with maps. These editors use data that is generated from customer Global Positioning System (GPS) data. To create map overlaps, the editing software renders small sections of a map by quickly performing key lookups.
In one case, before using premium block blob storage, a partner used HBase clusters backed by standard general-purpose v2 storage. However, it became expensive to keep large clusters running all of the time. This partner decided to move away from this architecture, and instead used premium block blob storage for fast key lookups. To create overlaps, they used REST APIs to render tiles corresponding to GPS coordinates. The premium block blob storage account provided them with a cost-effective solution, and latencies were far more predictable. ### E-commerce businesses
-In addition to supporting their customer facing stores, e-commerce businesses might also provide data warehousing and analytics solutions to internal teams. We've seen partners use premium block blob storage accounts to support the low latency requirements by these data warehousing and analytics solutions. In one case, a catalog team maintains a data warehousing application for data that pertains to offers, pricing, ship methods, suppliers, inventory, and logistics. Information is queried, scanned, extracted, and mined for multiple use cases. The team runs analytics on this data to provide various merchandising teams with relevant insights and information.
+In addition to supporting their customer facing stores, e-commerce businesses might also provide data warehousing and analytics solutions to internal teams. We've seen partners use premium block blob storage accounts to support the low latency requirements by these data warehousing and analytics solutions. In one case, a catalog team maintains a data warehousing application for data that pertains to offers, pricing, ship methods, suppliers, inventory, and logistics. Information is queried, scanned, extracted, and mined for multiple use cases. The team runs analytics on this data to provide various merchandising teams with relevant insights and information.
### Interactive analytics
-In almost every industry, there is a need for enterprises to query and analyze their data interactively.
+In almost every industry, there is a need for enterprises to query and analyze their data interactively.
-Data scientists, analysts, and developers can derive time-sensitive insights faster by running queries on data that is stored in a premium block blob storage account. Executives can load their dashboards much more quickly when the data that appears in those dashboards comes from a premium block blob storage account instead of a standard general-purpose v2 account.
+Data scientists, analysts, and developers can derive time-sensitive insights faster by running queries on data that is stored in a premium block blob storage account. Executives can load their dashboards much more quickly when the data that appears in those dashboards comes from a premium block blob storage account instead of a standard general-purpose v2 account.
In one scenario, analysts needed to analyze telemetry data from millions of devices quickly to better understand how their products are used, and to make product release decisions. Storing data in SQL databases is expensive. To reduce cost, and to increase queryable surface area, they used an Azure Data Lake Storage Gen2 enabled premium block blob storage account and performed computation in Presto and Spark to produce insights from hive tables. This way, even rarely accessed data has all of the same power of compute as frequently accessed data.
-To close the gap between SQL's subsecond performance and Presto's input output operations per second (IOPs) to external storage, consistency and speed are critical, especially when dealing with small optimized row columnar (ORC) files. A premium block blob storage account when used with Data Lake Storage Gen2, has repeatedly demonstrated a 3X performance improvement over a standard general-purpose v2 account in this scenario. Queries executed fast enough to feel local to the compute machine.
+To close the gap between SQL's subsecond performance and Presto's input output operations per second (IOPs) to external storage, consistency and speed are critical, especially when dealing with small optimized row columnar (ORC) files. A premium block blob storage account, when used with Data Lake Storage Gen2, has repeatedly demonstrated a 3X performance improvement over a standard general-purpose v2 account in this scenario. Queries executed fast enough to feel local to the compute machine.
-In another case, a partner stores and queries logs that are generated from their security solution. The logs are generated by using Databricks, and then and stored in a Data Lake Storage Gen2 enabled premium block blob storage account. End users query and search this data by using Azure Data Explorer. They chose this type of account to increase stability and increase the performance of interactive queries. They also set the life cycle management `Delete Action` policy to a few days, which helps to reduce costs. This policy prevents them from keeping the data forever. Instead, data is deleted once it is no longer needed.
+In another case, a partner stores and queries logs that are generated from their security solution. The logs are generated by using Databricks, and then stored in a Data Lake Storage Gen2 enabled premium block blob storage account. End users query and search this data by using Azure Data Explorer. They chose this type of account to increase stability and increase the performance of interactive queries. They also set the life cycle management `Delete Action` policy to a few days, which helps to reduce costs. This policy prevents them from keeping the data forever. Instead, data is deleted once it is no longer needed.
### Data processing pipelines
-In almost every industry, there is a need for enterprises to process data. Raw data from multiple sources needs to be cleansed and processed so that it becomes useful for downstream consumption in tools such as data dashboards that help users make decisions.
+In almost every industry, there is a need for enterprises to process data. Raw data from multiple sources needs to be cleansed and processed so that it becomes useful for downstream consumption in tools such as data dashboards that help users make decisions.
-While speed of processing is not always the top concern when processing data, some industries require it. For example, companies in the financial services industry often need to process data reliably and in the quickest way possible. To detect fraud, those companies must process inputs from various sources, identify risks to their customers, and take swift action.
+While speed of processing is not always the top concern when processing data, some industries require it. For example, companies in the financial services industry often need to process data reliably and in the quickest way possible. To detect fraud, those companies must process inputs from various sources, identify risks to their customers, and take swift action.
In some cases, we've seen partners use multiple standard storage accounts to store data from various sources. Some of this data is then moved to a Data Lake Storage enabled premium block blob storage account where a data processing application frequently reads newly arriving data. Directory listing calls in this account were much faster and performed much more consistently than they would otherwise perform in a standard general-purpose v2 account. The speed and consistency offered by the account ensured that new data was always made available to downstream processing systems as quickly as possible. This helped them catch and act upon potential security risks promptly.
-
+ ### Internet of Things (IoT) IoT has become a significant part of our daily lives. IoT is used to track car movements, control lights, and monitor our health. It also has industrial applications. For example, companies use IoT to enable their smart factory projects, improve agricultural output, and on oil rigs for predictive maintenance. Premium block blob storage accounts add significant value to these scenarios.
-
-We have partners in the mining industry. They use a Data Lake Storage Gen2 enable premium block blob storage account along with HDInsight (Hbase) to ingest time series sensor data from multiple mining equipment types, with a very taxing load profile. Premium block blob storage has helped to satisfy their need for high sample rate ingestion. It's also cost effective, because premium block blob storage is cost optimized for workloads that perform a large number of write transactions, and this workload generates a large number of small write transactions (in the tens of thousands per second).
+
+We have partners in the mining industry. They use a Data Lake Storage Gen2 enable premium block blob storage account along with HDInsight (Hbase) to ingest time series sensor data from multiple mining equipment types, with a very taxing load profile. Premium block blob storage has helped to satisfy their need for high sample rate ingestion. It's also cost effective, because premium block blob storage is cost optimized for workloads that perform a large number of write transactions, and this workload generates a large number of small write transactions (in the tens of thousands per second).
### Machine Learning

In many cases, a lot of data has to be processed to train a machine learning model. To complete this processing, compute machines must run for a long time. Compared to storage costs, compute costs usually account for a much larger percentage of your bill, so reducing the amount of time that your compute machines run can lead to significant savings. The low latency that you get by using premium block blob storage can significantly reduce this time and your bill.
-We have partners that deploy data processing pipelines to spark clusters where they run machine learning training and inference. They store spark tables (parquet files) and checkpoints to a premium block blob storage account. Spark checkpoints can create a huge number of nested files and folders. Their directory listing operations are fast because they combined the low latency of a premium block blob storage account with the hierarchical data structure made available with Data Lake Storage Gen2.
+We have partners that deploy data processing pipelines to spark clusters where they run machine learning training and inference. They store spark tables (parquet files) and checkpoints to a premium block blob storage account. Spark checkpoints can create a huge number of nested files and folders. Their directory listing operations are fast because they combined the low latency of a premium block blob storage account with the hierarchical data structure made available with Data Lake Storage Gen2.
-We also have partners in the semiconductor industry with use cases that intersect IoT and machine learning. IoT devices attached to machines in the manufacturing plant take images of semiconductor wafers and send those to their account. Using deep learning inference, the system can inform the on-premise machines if there is an issue with the production and if an action needs to be taken. They mush be able to load and process images quickly and reliably. Using Data Lake Storage Gen2 enabled premium block blob storage account helps to make this possible.
+We also have partners in the semiconductor industry with use cases that intersect IoT and machine learning. IoT devices attached to machines in the manufacturing plant take images of semiconductor wafers and send those to their account. Using deep learning inference, the system can inform the on-premises machines if there is an issue with the production and if an action needs to be taken. They must be able to load and process images quickly and reliably. Using a Data Lake Storage Gen2 enabled premium block blob storage account helps to make this possible.
### Real-time streaming analytics
Data is uploaded into multiple premium performance Blob Storage accounts. Each a
## Getting started with premium
-First, check to make sure your favorite Blob Storage features are compatible with premium block blob storage accounts, then create the account.
+First, check to make sure your favorite Blob Storage features are compatible with premium block blob storage accounts, then create the account.
>[!NOTE]
-> You can't convert an existing standard general-purpose v2 storage account to a premium block blob storage account. To migrate to a premium block blob storage account, you must create a premium block blob storage account, and migrate the data to the new account.
+> You can't convert an existing standard general-purpose v2 storage account to a premium block blob storage account. To migrate to a premium block blob storage account, you must create a premium block blob storage account, and migrate the data to the new account.
### Check for Blob Storage feature compatibility
To create a premium block blob storage account, make sure to choose the **Premiu
> [!NOTE]
> Some Blob Storage features aren't yet supported or have partial support in premium block blob storage accounts. Before choosing premium, review the [Blob Storage feature support in Azure Storage accounts](storage-feature-support-in-storage-accounts.md) article to determine whether the features that you intend to use are fully supported in your account. Feature support is always expanding so make sure to periodically review this article for updates.
-If your storage account is going to be used for analytics, we highly recommend that you use Azure Data Lake Storage Gen2 along with a premium block blob storage account. To unlock Azure Data Lake Storage Gen2 capabilities, enable the **Hierarchical namespace** setting in the **Advanced** tab of the **Create storage account** page.
+If your storage account is going to be used for analytics, we highly recommend that you use Azure Data Lake Storage Gen2 along with a premium block blob storage account. To unlock Azure Data Lake Storage Gen2 capabilities, enable the **Hierarchical namespace** setting in the **Advanced** tab of the **Create storage account** page.
The following image shows this setting in the **Create storage account** page.
storage Storage Use Azcopy Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-troubleshoot.md
Title: Troubleshoot problems with AzCopy (Azure Storage) | Microsoft Docs
-description: Find workarounds to common issues with AzCopy v10.
+description: Find workarounds to common issues with AzCopy v10.
This article describes common issues that you might encounter while using AzCopy
## Identifying problems
-You can determine whether a job succeeds by looking at the exit code.
+You can determine whether a job succeeds by looking at the exit code.
-If the exit code is `0-success`, then the job completed successfully.
+If the exit code is `0-success`, then the job completed successfully.
-If the exit code is `error`, then examine the log file. Once you understand the exact error message, then it becomes much easier to search for the right key words and figure out the solution. To learn more, see [Find errors and resume jobs by using log and plan files in AzCopy](storage-use-azcopy-configure.md).
+If the exit code is `1-error`, then examine the log file. Once you understand the exact error message, then it becomes much easier to search for the right key words and figure out the solution. To learn more, see [Find errors and resume jobs by using log and plan files in AzCopy](storage-use-azcopy-configure.md).
-If the exit code is `panic`, then check the log file exists. If the file doesn't exist, file a bug or reach out to support.
+If the exit code is `2-panic`, then check the log file exists. If the file doesn't exist, file a bug or reach out to support.
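To make the exit-code check concrete, here's a minimal C# sketch that launches an AzCopy job and branches on the documented exit codes. The local path and destination URL are placeholders, and the sketch assumes `azcopy` is available on the system PATH.

```csharp
using System;
using System.Diagnostics;

// Launch an AzCopy copy job (placeholder source path and destination URL).
var startInfo = new ProcessStartInfo
{
    FileName = "azcopy",
    Arguments = "copy \"C:\\local\\data\" \"https://<account>.blob.core.windows.net/<container>?<SAS>\" --recursive",
    UseShellExecute = false
};

using var process = Process.Start(startInfo);
process.WaitForExit();

// Branch on the documented exit codes: 0 = success, 1 = error, 2 = panic.
switch (process.ExitCode)
{
    case 0:
        Console.WriteLine("Job completed successfully.");
        break;
    case 1:
        Console.WriteLine("Job failed. Examine the AzCopy log file for the exact error message.");
        break;
    case 2:
        Console.WriteLine("AzCopy panicked. Check that the log file exists; if it doesn't, file a bug or contact support.");
        break;
}
```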
## 403 errors
-It's common to encounter 403 errors. Sometimes they're benign and don't result in failed transfer. For example, in AzCopy logs, you might see that a HEAD request received 403 errors. Those errors appear when AzCopy checks whether a resource is public. In most cases, you can ignore those instances.
+It's common to encounter 403 errors. Sometimes they're benign and don't result in a failed transfer. For example, in AzCopy logs, you might see that a HEAD request received a 403 error. Those errors appear when AzCopy checks whether a resource is public. In most cases, you can ignore those instances.
-In some cases 403 errors can result in a failed transfer. If this happens, other attempts to transfer files will likely fail until you resolve the issue. 403 errors can occur as a result of authentication and authorization issues. They can also occur when requests are blocked due to the storage account firewall configuration.
+In some cases 403 errors can result in a failed transfer. If this happens, other attempts to transfer files will likely fail until you resolve the issue. 403 errors can occur as a result of authentication and authorization issues. They can also occur when requests are blocked due to the storage account firewall configuration.
### Authentication / Authorization issues
In some cases 403 errors can result in a failed transfer. If this happens, other
If you're using a shared access signature (SAS) token, verify the following: -- The expiration and start times of the SAS token are appropriate.
+- The expiration and start times of the SAS token are appropriate.
- You selected all the necessary permissions for the token.
If you're using a shared access signature (SAS) token, verify the following:
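For illustration, here's a hedged C# sketch that generates a container SAS with an explicit validity window and the permissions a transfer typically needs, using the `Azure.Storage.Blobs` package. The account name, key, and container name are placeholders; adjust the permissions and expiry to match your job.

```csharp
using System;
using Azure.Storage;
using Azure.Storage.Sas;

// Placeholder credentials; use your own account name and key.
var credential = new StorageSharedKeyCredential("<account-name>", "<account-key>");

var sasBuilder = new BlobSasBuilder
{
    BlobContainerName = "<container-name>",
    Resource = "c",                                   // container-level SAS
    StartsOn = DateTimeOffset.UtcNow.AddMinutes(-5),  // allow for clock skew
    ExpiresOn = DateTimeOffset.UtcNow.AddHours(4)     // long enough for the transfer
};

// Grant only the permissions the job needs.
sasBuilder.SetPermissions(
    BlobContainerSasPermissions.Read |
    BlobContainerSasPermissions.Write |
    BlobContainerSasPermissions.List);

string sasToken = sasBuilder.ToSasQueryParameters(credential).ToString();
Console.WriteLine($"https://<account-name>.blob.core.windows.net/<container-name>?{sasToken}");
```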
##### Azure RBAC
-If you're using role based access control (Azure RBAC) roles via the `azcopy login` command, verify that you have the appropriate Azure roles assigned to your identity (For example: the Storage Blob Data Contributor role).
+If you're using role-based access control (Azure RBAC) roles via the `azcopy login` command, verify that you have the appropriate Azure roles assigned to your identity (for example, the Storage Blob Data Contributor role).
To learn more about Azure roles, see [Assign an Azure role for access to blob data](../blobs/assign-azure-role-data-access.md). ##### ACLs
-If you're using access control lists (ACLs), verify that your identity appears in an ACL entry for each file or directory that you intend to access. Also, make sure that each ACL entry reflects the appropriate permission level.
+If you're using access control lists (ACLs), verify that your identity appears in an ACL entry for each file or directory that you intend to access. Also, make sure that each ACL entry reflects the appropriate permission level.
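As a quick way to check, the following C# sketch lists the ACL entries on a Data Lake Storage Gen2 directory by using the `Azure.Storage.Files.DataLake` package. The account, file system, and directory names are placeholders.

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Files.DataLake;

// Connect to the Data Lake Storage Gen2 endpoint with the signed-in identity.
var serviceClient = new DataLakeServiceClient(
    new Uri("https://<account-name>.dfs.core.windows.net"),
    new DefaultAzureCredential());

DataLakeFileSystemClient fileSystem = serviceClient.GetFileSystemClient("<file-system-name>");
DataLakeDirectoryClient directory = fileSystem.GetDirectoryClient("<directory-path>");

// Print each ACL entry so you can confirm your identity appears with the expected permissions.
var accessControl = directory.GetAccessControl();
foreach (var entry in accessControl.Value.AccessControlList)
{
    Console.WriteLine($"{entry.AccessControlType} {entry.EntityId}: {entry.Permissions}");
}
```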
-To learn more about ACLs and ACL entries, see [Access control lists (ACLs) in Azure Data Lake Storage Gen2](../blobs/data-lake-storage-access-control.md).
+To learn more about ACLs and ACL entries, see [Access control lists (ACLs) in Azure Data Lake Storage Gen2](../blobs/data-lake-storage-access-control.md).
To learn about how to incorporate Azure roles together with ACLs, and how the system evaluates them to make authorization decisions, see [Access control model in Azure Data Lake Storage Gen2](../blobs/data-lake-storage-access-control-model.md).
To learn about how to incorporate Azure roles together with ACLs, and how system
If the storage firewall configuration isn't configured to allow access from the machine where AzCopy is running, AzCopy operations will return an HTTP 403 error.
-##### Transferring data from or to a local machine
+##### Transferring data from or to a local machine
-If you're uploading or downloading data between a storage account and an on-premise machine, make sure that the machine that runs AzCopy is able to access either the source or destination storage account. You might have to use IP network rules in the firewall settings of either the source **or** destination accounts to allow access from the public IP address of the machine.
+If you're uploading or downloading data between a storage account and an on-premises machine, make sure that the machine that runs AzCopy is able to access either the source or destination storage account. You might have to use IP network rules in the firewall settings of either the source **or** destination accounts to allow access from the public IP address of the machine.
##### Transferring data between storage accounts
Here are the endpoints that AzCopy needs to use:
This error is often related to the use of a proxy that uses a Secure Sockets Layer (SSL) certificate the operating system doesn't trust. Verify your settings and make sure that the certificate is trusted at the operating system level.
-We recommend adding the certificate to your machine's root certificate store as that's where the trusted authorities are kept.
+We recommend adding the certificate to your machine's root certificate store as that's where the trusted authorities are kept.
## Unrecognized Parameters
To help you understand commands, we provide an education tool located [here](htt
## Conditional access policy error
-You can receive the following error when you invoke the `azcopy login` command.
+You can receive the following error when you invoke the `azcopy login` command.
"Failed to perform login command: failed to login with tenantID "common", Azure directory endpoint "https://login.microsoftonline.com", autorest/adal/devicetoken: -REDACTED- AADSTS50005: User tried to log in to a device from a platform (Unknown) that's currently not supported through Conditional Access policy. Supported device platforms are: iOS, Android, Mac, and Windows flavors.
Trace ID: -REDACTED-
Correlation ID: -REDACTED- Timestamp: 2021-01-05 01:58:28Z"
-This error means that your administrator has configured a conditional access policy that specifies what type of device you can log in from. AzCopy uses the device code flow, which can't guarantee that the machine where you're using the AzCopy tool is also where you're logging in.
+This error means that your administrator has configured a conditional access policy that specifies what type of device you can log in from. AzCopy uses the device code flow, which can't guarantee that the machine where you're using the AzCopy tool is also where you're logging in.
If your device is among the list of supported platforms, then you might be able to use Storage Explorer, which integrates AzCopy for all data transfers (it passes tokens to AzCopy via the secret store) but provides a login workflow that supports passing device information. AzCopy itself also supports managed identities and service principals, which could be used as an alternative.
If your device isn't among the list of supported platforms, contact your adminis
If you see a large number of failed requests with the `503 Server Busy` status, then your requests are being throttled by the storage service. If you're seeing network errors or timeouts, you might be attempting to push through too much data across your infrastructure and that infrastructure is having difficulty handling it. In all cases, the workaround is similar.
-If you see a large file failing over and over again due to certain chunks failing each time, then try to limit the concurrent network connections or throughput limit depending on your specific case. We suggest that you first lower the performance drastically at first, observe whether it solved the initial problem, then ramp up performance again until an overall balance is achieved.
+If you see a large file failing over and over again because certain chunks fail each time, try limiting the number of concurrent network connections or the throughput, depending on your specific case. We suggest that you lower the performance drastically at first, observe whether that solves the initial problem, and then ramp up performance again until an overall balance is achieved.
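As one way to apply this, here's a hedged C# sketch that lowers AzCopy's concurrency and caps its throughput before launching a job. The values, paths, and URL are placeholders; `AZCOPY_CONCURRENCY_VALUE` and `--cap-mbps` are AzCopy's standard tuning controls.

```csharp
using System.Diagnostics;

// Start with conservative settings, confirm the failures stop, then raise them gradually.
var startInfo = new ProcessStartInfo
{
    FileName = "azcopy",
    Arguments = "copy \"C:\\local\\data\" \"https://<account>.blob.core.windows.net/<container>?<SAS>\" --recursive --cap-mbps 100",
    UseShellExecute = false
};

// Limit the number of concurrent connections AzCopy opens.
startInfo.EnvironmentVariables["AZCOPY_CONCURRENCY_VALUE"] = "16";

using var process = Process.Start(startInfo);
process.WaitForExit();
```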
For more information, see [Optimize the performance of AzCopy with Azure Storage](storage-use-azcopy-optimize.md)
If you're copying data between accounts by using AzCopy, the quality and reliabi
## Known constraints with AzCopy -- Copying data from government clouds to commercial clouds isn't supported. However, copying data from commercial clouds to government clouds is supported.
+- Copying data from government clouds to commercial clouds isn't supported. However, copying data from commercial clouds to government clouds is supported.
- Asynchronous service-side copy isn't supported. AzCopy performs synchronous copy only. In other words, by the time the job finishes, the data has been moved. -- If when copying to an Azure File share you forgot to specify the flag `--preserve-smb-permissions`, and you do not want to transfer the data again, then consider using Robocopy to bring over the permissions. --- If you're copying to Azure Files and you forgot to specify the `--preserve-smb-permissions` flag, and you don't want to transfer the data again, consider using Robocopy to bring over the only the permissions.
+- When copying to an Azure File share, if you forgot to specify the flag `--preserve-smb-permissions`, and you do not want to transfer the data again, then consider using Robocopy to bring over the permissions.
- Azure Functions has a different endpoint for MSI authentication, which AzCopy doesn't yet support.
There's a service issue impacting AzCopy 10.11+ which are using the [PutBlobFrom
## See also - [Get started with AzCopy](storage-use-azcopy-v10.md)-- [Find errors and resume jobs by using log and plan files in AzCopy](storage-use-azcopy-configure.md)
+- [Find errors and resume jobs by using log and plan files in AzCopy](storage-use-azcopy-configure.md)
storage File Sync Server Endpoint Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-server-endpoint-delete.md
Title: Deprovision your Azure File Sync server endpoint | Microsoft Docs description: Guidance on how to deprovision your Azure File Sync server endpoint based on your use case-+ Last updated 6/01/2021-+
storage File Sync Server Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-server-recovery.md
Title: Recover an Azure File Sync equipped server from a server-level failure description: Learn how to recover an Azure File Sync equipped server from a server-level failure-+ Last updated 12/07/2021-+
storage Storage Files Identity Auth Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-enable.md
Title: Overview - On-premises AD DS authentication to Azure file shares
-description: Learn about Active Directory Domain Services (AD DS) authentication to Azure file shares. This article goes over support scenarios, availability, and explains how the permissions work between your AD DS and Azure active directory.
+description: Learn about Active Directory Domain Services (AD DS) authentication to Azure file shares. This article goes over support scenarios, availability, and explains how the permissions work between your AD DS and Azure Active Directory.
# Overview - on-premises Active Directory Domain Services authentication over SMB for Azure file shares
-[Azure Files](storage-files-introduction.md) supports identity-based authentication over Server Message Block (SMB) through two types of Domain
+[Azure Files](storage-files-introduction.md) supports identity-based authentication over Server Message Block (SMB) through two types of Domain
-If you are new to Azure file shares, we recommend reading our [planning guide](storage-files-planning.md) before reading the following series of articles.
+If you're new to Azure file shares, we recommend reading our [planning guide](storage-files-planning.md) before reading the following series of articles.
## Applies to | File share type | SMB | NFS |
If you are new to Azure file shares, we recommend reading our [planning guide](s
- Supports Azure file shares managed by Azure File Sync. - Supports Kerberos authentication with AD with [AES 256 encryption](./storage-troubleshoot-windows-file-connection-problems.md#azure-files-on-premises-ad-ds-authentication-support-for-aes-256-kerberos-encryption) (recommended) and RC4-HMAC. AES 128 Kerberos encryption is not yet supported. - Supports single sign-on experience.-- Only supported on clients running on OS versions newer than Windows 7 or Windows Server 2008 R2.
+- Only supported on clients running OS versions Windows 8/Windows Server 2012 or newer.
- Only supported against the AD forest that the storage account is registered to. You can only access Azure file shares with the AD DS credentials from a single forest by default. If you need to access your Azure file share from a different forest, make sure that you have the proper forest trust configured, see the [FAQ](storage-files-faq.md#ad-ds--azure-ad-ds-authentication) for details. - Does not support authentication against computer accounts created in AD DS. - Does not support authentication against Network File System (NFS) file shares.
-When you enable AD DS for Azure file shares over SMB, your AD DS-joined machines can mount Azure file shares using your existing AD DS credentials. This capability can be enabled with an AD DS environment hosted either in on-prem machines or hosted in Azure.
+When you enable AD DS for Azure file shares over SMB, your AD DS-joined machines can mount Azure file shares using your existing AD DS credentials. This capability can be enabled with an AD DS environment hosted either in on-premises machines or hosted in Azure.
## Videos
To help you setup Azure Files AD authentication for some common use cases, we pu
## Prerequisites
-Before you enable AD DS authentication for Azure file shares, make sure you have completed the following prerequisites:
+Before you enable AD DS authentication for Azure file shares, make sure you've completed the following prerequisites:
- Select or create your [AD DS environment](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) and [sync it to Azure AD](../../active-directory/hybrid/how-to-connect-install-roadmap.md) with Azure AD Connect.
- You can enable the feature on a new or existing on-premises AD DS environment. Identities used for access must be synced to Azure AD or use a default share-level permission. The Azure AD tenant and the file share that you are accessing must be associated with the same subscription.
+ You can enable the feature on a new or existing on-premises AD DS environment. Identities used for access must be synced to Azure AD or use a default share-level permission. The Azure AD tenant and the file share that you're accessing must be associated with the same subscription.
- Domain-join an on-premises machine or an Azure VM to on-premises AD DS. For information about how to domain-join, refer to [Join a Computer to a Domain](/windows-server/identity/ad-fs/deployment/join-a-computer-to-a-domain).
- If your machine is not domain joined to an AD DS, you may still be able to leverage AD credentials for authentication if your machine has line of sight of the AD domain controller.
+ If your machine is not domain joined to an AD DS, you may still be able to leverage AD credentials for authentication if your machine has line of sight to the AD domain controller.
- Select or create an Azure storage account. For optimal performance, we recommend that you deploy the storage account in the same region as the client from which you plan to access the share. Then, [mount the Azure file share](storage-how-to-use-files-windows.md) with your storage account key. Mounting with the storage account key verifies connectivity.
- Make sure that the storage account containing your file shares is not already configured for Azure AD DS Authentication. If Azure Files Azure AD DS authentication is enabled on the storage account, it needs to be disabled before changing to use on-premises AD DS. This implies that existing ACLs configured in Azure AD DS environment will need to be reconfigured for proper permission enforcement.
+ Make sure that the storage account containing your file shares isn't already configured for Azure AD DS Authentication. If Azure Files Azure AD DS authentication is enabled on the storage account, it needs to be disabled before changing to use on-premises AD DS. This implies that existing ACLs configured in Azure AD DS environment will need to be reconfigured for proper permission enforcement.
If you experience issues in connecting to Azure Files, refer to [the troubleshooting tool we published for Azure Files mounting errors on Windows](https://azure.microsoft.com/blog/new-troubleshooting-diagnostics-for-azure-files-mounting-errors-on-windows/).
Azure Files authentication with AD DS is available in [all Azure Public, China a
## Overview
-If you plan to enable any networking configurations on your file share, we recommend you to read the [networking considerations](./storage-files-networking-overview.md) article and complete the related configuration before enabling AD DS authentication.
+If you plan to enable any networking configurations on your file share, we recommend you read the [networking considerations](./storage-files-networking-overview.md) article and complete the related configuration before enabling AD DS authentication.
-Enabling AD DS authentication for your Azure file shares allows you to authenticate to your Azure file shares with your on-prem AD DS credentials. Further, it allows you to better manage your permissions to allow granular access control. Doing this requires synching identities from on-prem AD DS to Azure AD with AD connect. You control the share level access with identities synced to Azure AD while managing file/share level access with on-prem AD DS credentials.
+Enabling AD DS authentication for your Azure file shares allows you to authenticate to your Azure file shares with your on-premises AD DS credentials. Further, it allows you to better manage your permissions to allow granular access control. Doing this requires synching identities from on-premises AD DS to Azure AD with AD Connect. You control the share level access with identities synced to Azure AD while managing file/share level access with on-premises AD DS credentials.
Next, follow the steps below to set up Azure Files for AD DS Authentication:
The following diagram illustrates the end-to-end workflow for enabling Azure AD
![Files AD workflow diagram](media/storage-files-active-directory-domain-services-enable/diagram-files-ad.png)
-Identities used to access Azure file shares must be synced to Azure AD to enforce share level file permissions through the [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) model. Alternatively, you can use a default share-level permission. [Windows-style DACLs](/previous-versions/technet-magazine/cc161041(v=msdn.10)) on files/directories carried over from existing file servers will be preserved and enforced. This offers seamless integration with your enterprise AD DS environment. As you replace on-prem file servers with Azure file shares, existing users can access Azure file shares from their current clients with a single sign-on experience, without any change to the credentials in use.
+Identities used to access Azure file shares must be synced to Azure AD to enforce share-level file permissions through the [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) model. Alternatively, you can use a default share-level permission. [Windows-style DACLs](/previous-versions/technet-magazine/cc161041(v=msdn.10)) on files/directories carried over from existing file servers will be preserved and enforced. This offers seamless integration with your enterprise AD DS environment. As you replace on-premises file servers with Azure file shares, existing users can access Azure file shares from their current clients with a single sign-on experience, without any change to the credentials in use.
## Next steps
storage Authorize Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/authorize-managed-identity.md
The following code example shows how to get the authenticated token credential a
public static void CreateQueue(string accountName, string queueName) { // Construct the queue endpoint from the arguments.
- string queueEndpoint = string.Format("https://{0}.queue.core.windows.net/{1}",
- accountName,
- queueName);
+ string queueEndpoint = $"https://{accountName}.queue.core.windows.net/{queueName}";
// Get a token credential and create a service client object for the queue.
- QueueClient queueClient = new QueueClient(new Uri(queueEndpoint),
- new DefaultAzureCredential());
+ QueueClient queueClient = new QueueClient(
+ new Uri(queueEndpoint),
+ new DefaultAzureCredential());
try {
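For reference, here's a self-contained sketch of the same pattern, assuming the `Azure.Storage.Queues` and `Azure.Identity` packages; the error handling shown is illustrative rather than the article's exact code.

```csharp
using System;
using Azure;
using Azure.Identity;
using Azure.Storage.Queues;

public static class QueueExample
{
    public static void CreateQueue(string accountName, string queueName)
    {
        // Construct the queue endpoint from the arguments.
        string queueEndpoint = $"https://{accountName}.queue.core.windows.net/{queueName}";

        // Get a token credential and create a service client object for the queue.
        QueueClient queueClient = new QueueClient(
            new Uri(queueEndpoint),
            new DefaultAzureCredential());

        try
        {
            // Create the queue if it doesn't already exist.
            queueClient.CreateIfNotExists();
            Console.WriteLine($"Queue '{queueName}' is ready.");
        }
        catch (RequestFailedException ex)
        {
            Console.WriteLine($"Queue creation failed: {ex.ErrorCode} - {ex.Message}");
        }
    }
}
```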
storage Cirrus Data Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/data-management/cirrus-data-migration-guide.md
Title: Migrate your block data to Azure with Cirrus Data
-description: Provides quick start guide to implement Cirrus Migrate Cloud, and migrate your data to Azure
+description: Learn how Cirrus Migrate Cloud enables disk migration from an existing storage system or cloud to Azure. The original system operates during migration.
Previously updated : 09/06/2021- Last updated : 06/10/2022++ # Migrate your block data to Azure with Cirrus Migrate Cloud
-Cirrus Migrate Cloud (CMC) enables disk migration from an existing storage system, or cloud to Azure. Migration is performed while the original system is still in operation. This document will present the methodology to successfully configure and execute the migration.
+Cirrus Migrate Cloud (CMC) enables disk migration from an existing storage system or cloud to Azure. Migration proceeds while the original system is still in operation. This article presents the methodology to successfully configure and execute the migration.
-## Overview
+The solution uses distributed Migration Agents that run on every host. The agents allow direct Host-to-Host connections. Each Host-to-Host migration is independent, which makes the solution infinitely scalable. There are no central bottlenecks for the dataflow. The migration uses cMotion™ technology to ensure no effect on production.
-The solution uses distributed Migration Agents running on every host that allows direct Host-to-Host connections. Each Host-to-Host migration is independent making the solution infinitely scalable, without central bottlenecks for the dataflow. The migration is using cMotionΓäó technology to ensure no impact on production.
+## Migration use cases
-## Use cases
-
-This document covers a generic migration case for moving the application from one virtual machine (running on-premises or in another cloud provider) to a virtual machine in Azure. For deeper step-by-step guides in various use cases, you can learn more on the following links:
+This document covers a generic migration case for moving an application from one virtual machine to a virtual machine in Azure. The virtual machine can be on-premises or in another cloud provider. For step-by-step guides in various use cases, see the following links:
- [Moving the workload to Azure with cMotion](https://support.cirrusdata.cloud/en/article/howto-cirrus-migrate-cloud-on-premises-to-azure-1xo3nuf/) - [Moving from Premium Disks to Ultra Disks](https://support.cirrusdata.cloud/en/article/howto-cirrus-migrate-cloud-migration-between-azure-tiers-sxhppt/) - [Moving from AWS to Azure](https://support.cirrusdata.cloud/en/article/howto-cirrus-migrate-cloud-migration-from-aws-to-azure-weegd9/.)
-## Components
+## Cirrus Migrate Cloud components
Cirrus Migrate Cloud consists of multiple components: -- **cMotion™** feature of CMC does a storage-level cut-over from a source to the target cloud without downtime to the source host. cMotion™ is used to swing the workload over from the original FC or iSCSI source disk to the new destination Azure Managed Disk.-- **Web-based Management Portal** is web-based management as a service. It allows users to manage migration and protect any block storage. Web-based Management Portal provides interfaces for all CMC application configurations, management, and administrative tasks.
+- The *cMotion™ feature* of CMC does a storage-level cut-over from a source to the target cloud without downtime to the source host. cMotion™ is used to swing the workload over from the original FC or iSCSI source disk to the new destination Azure Managed Disk.
+- *Web-based Management Portal* is web-based management as a service. It allows users to manage migration and protect any block storage. Web-based Management Portal provides interfaces for all CMC application configurations, management, and administrative tasks.
- :::image type="content" source="./media/cirrus-data-migration-guide/cirrus-web-portal.jpg" alt-text="Screenshot of CMC Portal":::
+ :::image type="content" source="./media/cirrus-data-migration-guide/cirrus-web-portal.jpg" alt-text="Screenshot of C M C Portal with the menu tabs, fields for the tab, and migration project owner called out.":::
## Implementation guide
-User should follow the Azure best practices to implement a new virtual machine. If not familiar with the process, learn more from [quick start guide](../../../../virtual-machines/windows/quick-create-portal.md).
+Follow the Azure best practices to implement a new virtual machine. For more information, see [quick start guide](../../../../virtual-machines/windows/quick-create-portal.md).
Before starting the migration, make sure the following prerequisites have been met: -- Verify that the OS in Azure is properly licensed-- Verify access to the Azure Virtual Machine-- Check that the application / database license is available to run in Azure-- Check the permission to auto-allocate the destination disk size-- Ensure that managed disk is the same size or larger than the source disk -- Ensure that either the source, or the destination virtual machine has a port open to allow our H2H connection.
+- Verify that the OS in Azure is properly licensed.
+- Verify access to the Azure Virtual Machine.
+- Check that the application / database license is available to run in Azure.
+- Check the permission to auto-allocate the destination disk size.
+- Ensure that managed disk is the same size or larger than the source disk.
+- Ensure that either the source or the destination virtual machine has a port open to allow our H2H connection.
+
+Follow these implementation steps:
-1. **Prepare the Azure virtual machine**. Document is assuming that virtual machine is fully implemented. So, once the data disks are migrated, the destination host can immediately start up the application, and bring it online. State of the data will be the same as the source when it was shut down seconds ago. CMC does not migrate the OS disk from source to destination.
+1. **Prepare the Azure virtual machine**. The virtual machine must be fully implemented. Once the data disks are migrated, the destination host can immediately start the application and bring it online. The state of the data is the same as the source when it was shut down seconds ago. CMC doesn't migrate the OS disk from source to destination.
-1. **Prepare the application in the Azure virtual machine**. In this example, the source is Linux host. It can run any user application accessing the respective BSD storage. We will use a database application running at the source using a 1 GiB disk as a source storage device. However, any application can be used instead. Set up a virtual machine in Azure ready to be used as the destination virtual machine. Make sure that resource configuration and operating system are compatible with the application, and ready to receive the migration from the source using CMC portal. The destination block storage device/s will be automatically allocated and created during the migration process.
+1. **Prepare the application in the Azure virtual machine**. In this example, the source is a Linux host. It can run any user application accessing the respective BSD storage. This example uses a database application running at the source using a 1-GiB disk as a source storage device. However, any application can be used instead. Set up a virtual machine in Azure ready to be used as the destination virtual machine. Make sure that the resource configuration and operating system are compatible with the application, and ready to receive the migration from the source using the CMC portal. The destination block storage devices are automatically allocated and created during the migration process.
-1. **Sign up for CMC account**. To obtain a CMC account, follow the support page for the exact instructions on how to get an account. More details can be read [here](https://support.cirrusdata.cloud/en/article/licensing-m4lhll/).
+1. **Sign up for CMC account**. To obtain a CMC account, follow the support page for instructions on how to get an account. For more information, see [Licensing Model](https://support.cirrusdata.cloud/en/article/licensing-m4lhll/).
-1. **Create a Migration Project** reflecting the specific migration characteristics, type, owner of the migration, and any details needed to define the operations.
+1. **Create a Migration Project**. The project reflects the specific migration characteristics, type, owner of the migration, and any details needed to define the operations.
- :::image type="content" source="./media/cirrus-data-migration-guide/cirrus-create-project.jpg" alt-text="Screenshot for creating a new project":::
+ :::image type="content" source="./media/cirrus-data-migration-guide/cirrus-create-project.jpg" alt-text="Screenshot shows the Create New Project dialog.":::
1. **Define the migration project parameters**. Use the CMC web-based portal to configure the migration by defining the parameters: source, destination, and other parameters.
-1. **Install the migration CMC agents on source and destination hosts**. Using the CMC web-based management portal, select **Deploy Cirrus Migrate Cloud** to get the curl command for **New Installation**. Run the command on the source and destination command-line interface.
+1. **Install the migration CMC agents on source and destination hosts**. Using the CMC web-based management portal, select **Deploy Cirrus Migrate Cloud** to get the `curl` command for **New Installation**. Run the command on the source and destination command-line interface.
-1. **Create a bidirectional connection between source and destination hosts**. Use **H2H** tab in the CMC web-based management portal, and **Create New Connection** button. Select the device used by the application, not the device used by the Linux operating system.
+1. **Create a bidirectional connection between source and destination hosts**. Use the **H2H** tab in the CMC web-based management portal. Select **Create New Connection**. Select the device used by the application, not the device used by the Linux operating system.
- :::image type="content" source="./media/cirrus-data-migration-guide/cirrus-migration-1.jpg" alt-text="Screenshot that shows list of deployed hosts":::
+ :::image type="content" source="./media/cirrus-data-migration-guide/cirrus-migration-1.jpg" alt-text="Screenshot that shows list of deployed hosts.":::
- :::image type="content" source="./media/cirrus-data-migration-guide/cirrus-migration-2.jpg" alt-text="Screenshot that shows list of host-to-host connections":::
+ :::image type="content" source="./media/cirrus-data-migration-guide/cirrus-migration-2.jpg" alt-text="Screenshot that shows list of host-to-host connections.":::
- :::image type="content" source="./media/cirrus-data-migration-guide/cirrus-migration-3.jpg" alt-text="Screenshot that shows list of migrated devices":::
+ :::image type="content" source="./media/cirrus-data-migration-guide/cirrus-migration-3.jpg" alt-text="Screenshot that shows list of migrated devices.":::
-1. **Start the migration to the destination virtual machine** using **Migrate Host Volumes** from the CMC web-based management portal. Follow the instructions for remote location. Use the CMC portal to **Auto allocate destination volumes** on the right of the screen.
-
-1. Next, we need to add Azure Credentials to allow connectivity and disk provisioning using the **Integrations** tab on the CMC portal. Fill in the required fields using your private companyΓÇÖs values for Azure: **Integration Name**, **Tenant ID**, **Client/Application ID**, and **Secret**. Press **Save**.
+1. **Start the migration to the destination virtual machine** using **Migrate Host Volumes** from the CMC web-based management portal. Follow the instructions for remote location. Use the CMC portal to **Auto allocate destination volumes** on the right of the screen.
- :::image type="content" source="./media/cirrus-data-migration-guide/cirrus-migration-4.jpg" alt-text="Screenshot that shows entering Azure credentials":::
+1. Add Azure Credentials to allow connectivity and disk provisioning using the **Integrations** tab on the CMC portal. Fill in the required fields using your private company's values for Azure: **Integration Name**, **Tenant ID**, **Client/Application ID**, and **Secret**. Select **Save**.
- For details on creating Azure AD application, view our [step-by-step instructions](https://support.cirrusdata.cloud/en/article/creating-an-azure-service-account-for-cirrus-data-cloud-tw2c9n/). By creating and registering Azure AD application for CMC, you enable automatic creation of Azure Managed Disks on the target virtual machine.
+ :::image type="content" source="./media/cirrus-data-migration-guide/cirrus-migration-4.jpg" alt-text="Screenshot that shows entering Azure credentials.":::
+
+ For details on creating Azure AD application, see the [step-by-step instructions](https://support.cirrusdata.cloud/en/article/creating-an-azure-service-account-for-cirrus-data-cloud-tw2c9n/). By creating and registering Azure AD application for CMC, you enable automatic creation of Azure Managed Disks on the target virtual machine.
>[!NOTE]
- >Since you selected **Auto allocate destination volumes** on the previous step, don't press it again for a new allocation. If you do, it will output and error. Instead press **Continue**.
+ >Since you selected **Auto allocate destination volumes** on the previous step, don't select it again for a new allocation. Instead, select **Continue**.
## Migration guide
-After pressing **Save** in the previous step, **New Migration Session** window appears. Fill in the fields:
- - **Session description**: provide meaningful description
- - **Auto Resync Interval**: enable migration schedule
- - Use iQoS to select the impact migration will have on the production:
- - **Minimum** throttles migration rate to 25% of the available bandwidth
- - **Moderate** throttles migration rate to 50% of the available bandwidth
- - **Aggressive** throttles migration rate to 75% of the available bandwidth
- - **Relentless** doesn't throttle the migration.
+After selecting **Save** in the previous step, the **New Migration Session** window appears. Fill in the fields:
+
+- **Session description**: Provide meaningful description.
+- **Auto Resync Interval**: Enable migration schedule.
+- Use iQoS to select the effect migration has on the production:
+ - **Minimum** throttles migration rate to 25% of the available bandwidth.
+ - **Moderate** throttles migration rate to 50% of the available bandwidth.
+ - **Aggressive** throttles migration rate to 75% of the available bandwidth.
+ - **Relentless** doesn't throttle the migration.
- :::image type="content" source="./media/cirrus-data-migration-guide/cirrus-iqos.jpg" alt-text="Screenshot that shows options for iQoS settings":::
+ :::image type="content" source="./media/cirrus-data-migration-guide/cirrus-iqos.jpg" alt-text="Screenshot that shows options for iQoS settings.":::
-Press **Create Session** to start the migration.
+Select **Create Session** to start the migration.
-From the start of the migration initial sync until cMotion starts, there is no need for a user interaction with CMC. Only exception is monitoring the progress. You can monitor current status, session volumes and track the changes using the dashboard.
+From the start of the initial migration sync until cMotion™ starts, there's no need for you to interact with CMC. You can monitor current status, session volumes, and track the changes using the dashboard.
-During the migration you can observe the blocks changed on the source device by pressing the Changed Data Map.
+During the migration, you can observe the blocks changed on the source device by selecting the **Changed Data Map**.
-Details on iQoS will show synchronized blocks, and migration status. It also shows that there is no impact to production IO.
+Details on iQoS show synchronized blocks and migration status. It also shows that there's no effect on production IO.
## Moving the workload to Azure with cMotion
-After the initial synchronization finishes, we will prepare to move the workload from the source disk to the destination Azure Managed Disk using cMotionΓäó.
+After the initial synchronization finishes, prepare to move the workload from the source disk to the destination Azure Managed Disk using cMotion™.
### Start cMotion™
-At this point, the systems are ready for cMotionΓäó migration cut-over.
+At this point, the systems are ready for cMotion™ migration cut-over.
-1. In the CMS portal select **Trigger cMotionΓäó** using Session to switch the workload from the source to the destination disk. To check if the process was done, you can use iostat, or equivalent command. Go to the terminal in the Azure virtual machine, and run *iostat /dev/<device_name>* (for example /dev/sdc), and observe that the IOs are written by the application on the destination disk in Azure cloud.
+In the CMC portal, select **Trigger cMotion™** using Session to switch the workload from the source to the destination disk. To check if the process finished, you can use `iostat`, or an equivalent command. Go to the terminal in the Azure virtual machine, and run `iostat /dev/<device_name>`, for example `/dev/sdc`. Observe that the IOs are written by the application on the destination disk in the Azure cloud.
-In this state, the workload can be swung, or moved back to the source disk at any time. If you want to revert the production virtual machine, use the **Session Actions** button, and select the **Revert cMotionΓäó** option. We can swing back, and forth as many times we want while the application is running at source host/VM.
+In this state, the workload can be moved back to the source disk at any time. If you want to revert the production virtual machine, select **Session Actions** and select the **Revert cMotion™** option. You can swing back and forth as many times as you want while the application is running at the source host/VM.
-When the final cut-over to the destination virtual machine is required, follow the steps:
-1. Select **Session Actions**
-2. Click the **Finalize Cutover** option to "lock-in" the cut-over to the new Azure virtual machine, and disable the option for source disk to be removed. Stop any other application running in the source host for final host cut-over.
+When the final cut-over to the destination virtual machine is required, follow these steps:
+
+1. Select **Session Actions**.
+1. Select the **Finalize Cutover** option to lock in the cut-over to the new Azure virtual machine and disable the option for the source disk to be removed.
+1. Stop any other application running in the source host for final host cut-over.
### Move the application to the destination virtual machine
-Once the cut-over has been done, application needs to be switched over to the new virtual machine. To do that, perform the following steps:
+Once the cut-over has been done, the application needs to be switched over to the new virtual machine. To do that, follow these steps:
+
+1. Stop the application.
+1. Unmount the migrated device.
+1. Mount the new migrated device in the Azure virtual machine.
+1. Start the same application in the Azure virtual machine on the new migrated disk.
-1. Stop the application
-2. Unmount the migrated device
-3. Mount the new migrated device in Azure virtual machine.
-4. Start the same application in Azure virtual machine on the new migrated disk.
-
-Observe that there are no IOs going to source hosts devices by running the iostat command in the source host. Running iostat in Azure virtual machine will show that IO is executing on the Azure virtual machine terminal.
+Verify that there are no IOs going to the source host's devices by running the `iostat` command in the source host. Running `iostat` in the Azure virtual machine shows that IO is executing there.
-### Complete the migration session in CMC GUI
+### Complete the migration session in CMC GUI
-The migration step completed when all the IOs were redirected to the destination devices after triggering cMotionΓäó. You can now close the session using **Session Actions**. Click on **Delete Session** to close the migration session.
-As a last step, you will remove the **Cirrus Migrate Cloud Agents** from both source host and Azure virtual machine. To perform uninstall, get the **Uninstall curl command** from **Deploy Cirrus Migrate Cloud** button. Option is in the **Hosts** section of the portal.
+The migration step is complete when all the IOs have been redirected to the destination devices after triggering cMotion™. You can now close the session using **Session Actions**. Select **Delete Session** to close the migration session.
+As a last step, remove the **Cirrus Migrate Cloud Agents** from both the source host and the Azure virtual machine. To uninstall, get the **Uninstall curl command** from the **Deploy Cirrus Migrate Cloud** button. The option is in the **Hosts** section of the portal.
-After the agents are removed, migration is fully completed. Now the source application is running in production on the destination Azure virtual machine with locally mounted disks.
+After the agents are removed, the migration is fully complete. Now the source application is running in production on the destination Azure virtual machine with locally mounted disks.
## Support ### How to open a case with Azure
-In the [Azure portal](https://portal.azure.com) search for support in the search bar at the top. Select **Help + support** -> **New Support Request**.
+In the [Azure portal](https://portal.azure.com) search for support in the search bar at the top. Select **Help + support** > **New Support Request**.
### Engaging Cirrus Support In the CMC portal, select **Help Center** tab on the CMC portal to contact Cirrus Data Solutions support, or go to [CDSI website](https://support.cirrusdata.cloud/en/), and file a support request. ## Next steps-- Learn more on [Azure virtual machines](../../../../virtual-machines/windows/overview.md)-- Learn more on [Azure Managed Disks](../../../../virtual-machines/managed-disks-overview.md)-- Learn more on [storage migration](../../../common/storage-migration-overview.md)+
+- Learn more about [Azure virtual machines](../../../../virtual-machines/windows/overview.md)
+- Learn more about [Azure Managed Disks](../../../../virtual-machines/managed-disks-overview.md)
+- Learn more about [storage migration](../../../common/storage-migration-overview.md)
- [Cirrus Data website](https://www.cirrusdata.com/)-- Step-by-step guides for [cMotion](https://support.cirrusdata.cloud/en/category/howtos-1un623w/)
+- Step-by-step guides for [cMotion](https://support.cirrusdata.cloud/en/category/howtos-1un623w/)
synapse-analytics Implementation Success Evaluate Dedicated Sql Pool Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-dedicated-sql-pool-design.md
During the [assessment stage](implementation-success-assess-environment.md), you
## Review the target architecture
-To successfully deploy a dedicated SQL pool, it's important to adopt an architecture that's aligned with business requirements. For more information, see [Data warehousing in Microsoft Azure](/azure/architecture/data-guide/relational-dat).
+To successfully deploy a dedicated SQL pool, it's important to adopt an architecture that's aligned with business requirements. For more information, see [Data warehousing in Microsoft Azure](/azure/architecture/data-guide/relational-data/data-warehousing).
## Migration path
synapse-analytics Implementation Success Evaluate Serverless Sql Pool Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-serverless-sql-pool-design.md
Unlike traditional database engines, SQL serverless doesn't rely on its own opti
For reliability, evaluate the following points. - **Availability:** Validate any availability requirements that were identified during the [assessment stage](implementation-success-assess-environment.md). While there aren't any specific SLAs for SQL serverless, there's a 30-minute timeout for query execution. Identify the longest running queries from your assessment and validate them against your serverless SQL design. A 30-minute timeout could break the expectations for your workload and appear as a service problem.-- **Consistency:** SQL serverless is designed primarily for read workloads. So, validate whether all consistency checks have been performed during the data lake data provisioning and formation process. Keep abreast of new capabilities, like [Delta Lake](/spark/apache-spark-what-is-delta-lake.md) open-source storage layer, which provides support for ACID (atomicity, consistency, isolation, and durability) guarantees for transactions. This capability allows you to implement effective [lambda or kappa architectures](/azure/architecture/data-guide/big-data/) to support both streaming and batch use cases. Be sure to evaluate your design for opportunities to apply new capabilities but not at the expense of your project's timeline or cost.
+- **Consistency:** SQL serverless is designed primarily for read workloads. So, validate whether all consistency checks have been performed during the data lake data provisioning and formation process. Keep abreast of new capabilities, like [Delta Lake](/azure/synapse-analytics/spark/apache-spark-what-is-delta-lake) open-source storage layer, which provides support for ACID (atomicity, consistency, isolation, and durability) guarantees for transactions. This capability allows you to implement effective [lambda or kappa architectures](/azure/architecture/data-guide/big-data/) to support both streaming and batch use cases. Be sure to evaluate your design for opportunities to apply new capabilities but not at the expense of your project's timeline or cost.
- **Backup:** Review any disaster recovery requirements that were identified during the assessment. Validate them against your SQL serverless design for recovery. SQL serverless itself doesn't have its own storage layer and that would require handling snapshots and backup copies of your data. The data store accessed by serverless SQL is external (ADLS Gen2). Review the recovery design in your project for these datasets. ### Security
synapse-analytics Implementation Success Evaluate Solution Development Environment Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-solution-development-environment-design.md
Promoting a workspace to another workspace is a two-part process:
Ensure that integration with Azure DevOps or GitHub is properly set up. Design a repeatable process that releases changes across development, Test/QA/UAT, and production environments.  >[!IMPORTANT]
-> We recommend that sensitive configuration data always be stored securely in [Azure Key Vault](/azure/key-vault/general/basic-concepts.md). Use Azure Key Vault to maintain a central, secure location for sensitive configuration data, like database connection strings. That way, appropriate services can access configuration data from within each environment.
+> We recommend that sensitive configuration data always be stored securely in [Azure Key Vault](/azure/key-vault/general/basic-concepts). Use Azure Key Vault to maintain a central, secure location for sensitive configuration data, like database connection strings. That way, appropriate services can access configuration data from within each environment.
## Next steps
synapse-analytics Proof Of Concept Playbook Dedicated Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-dedicated-sql-pool.md
You can set up a POC on Azure Synapse by following these steps:
1. Use [this quickstart](../sql-data-warehouse/create-data-warehouse-portal.md) to provision a Synapse workspace and set up storage and permissions according to the POC test plan. 1. Use [this quickstart](../quickstart-create-sql-pool-portal.md) to add a dedicated SQL pool to the Synapse workspace. 1. Set up [networking and security](security-white-paper-introduction.md) according to your requirements.
-1. Grant appropriate access to POC team members. See [this article](/azure-sql/database/logins-create-manage) about authentication and authorization for accessing dedicated SQL pools.
+1. Grant appropriate access to POC team members. See [this article](/azure/azure-sql/database/logins-create-manage) about authentication and authorization for accessing dedicated SQL pools.
> [!TIP] > We recommend that you *develop code and unit testing* by using the DW500c service level (or below). We recommend that you *run load and performance tests* by using the DW1000c service level (or above). You can [pause compute of the dedicated SQL pool](../sql-data-warehouse/pause-and-resume-compute-portal.md) at any time to cease compute billing, which will save on costs.
synapse-analytics Proof Of Concept Playbook Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-spark-pool.md
Before you begin planning your Spark POC project:
> - Identify executive or business sponsors for a big data and advanced analytics platform project. Secure their support for migration to the cloud. > - Identify availability of technical experts and business users to support you during the POC execution.
-Before you start preparing for the POC project, we recommend you first read the [Apache Spark documentation](/hdinsight/spark/apache-spark-overview.md).
+Before you start preparing for the POC project, we recommend you first read the [Apache Spark documentation](/azure/hdinsight/spark/apache-spark-overview).
> [!TIP] > If you're new to Spark pools, we recommend you work through the [Perform data engineering with Azure Synapse Apache Spark Pools](/learn/paths/perform-data-engineering-with-azure-synapse-apache-spark-pools/) learning path.
Here's an example of the needed level of specificity in planning:
- Estimate the effort for our initial historical data migration to data lake and/or the Spark pool. - Plan an approach to migrate historical data. - **Output C:** We will have tested and determined the data ingestion rate achievable in our environment and can determine whether our data ingestion rate is sufficient to migrate historical data during the available time window.
- - **Test C1:** Test different approaches of historical data migration. For more information, see [Transfer data to and from Azure](/architecture/data-guide/scenarios/data-transfer.md).
- - **Test C2:** Identify allocated bandwidth of ExpressRoute and if there is any throttling setup by the infra team. For more information, see [What is Azure ExpressRoute? (Bandwidth options)](/expressroute/expressroute-introduction#bandwidth-options.md).
- - **Test C3:** Test data transfer rate for both online and offline data migration. For more information, see [Copy activity performance and scalability guide](/data-factory/copy-activity-performance#copy-performance-and-scalability-achievable-using-azure-data-factory-and-synapse-pipelines).
- - **Test C4:** Test data transfer from the data lake to the SQL pool by using either ADF, Polybase, or the COPY command. For more information, see [Data loading strategies for dedicated SQL pool in Azure Synapse Analytics](/sql-data-warehouse/design-elt-data-loading.md).
+ - **Test C1:** Test different approaches of historical data migration. For more information, see [Transfer data to and from Azure](/azure/architecture/data-guide/scenarios/data-transfer).
+ - **Test C2:** Identify allocated bandwidth of ExpressRoute and whether there's any throttling set up by the infra team. For more information, see [What is Azure ExpressRoute? (Bandwidth options)](/azure/expressroute/expressroute-introduction#bandwidth-options).
+ - **Test C3:** Test data transfer rate for both online and offline data migration. For more information, see [Copy activity performance and scalability guide](/azure/data-factory/copy-activity-performance#copy-performance-and-scalability-achievable-using-azure-data-factory-and-synapse-pipelines).
+ - **Test C4:** Test data transfer from the data lake to the SQL pool by using either ADF, Polybase, or the COPY command. For more information, see [Data loading strategies for dedicated SQL pool in Azure Synapse Analytics](/azure/synapse-analytics/sql-data-warehouse/design-elt-data-loading).
- **Goal D:** We will have tested the data ingestion rate of incremental data loading and will have the data points to estimate the data ingestion and processing time window to the data lake and/or the dedicated SQL pool. - **Output D:** We will have tested the data ingestion rate and can determine whether our data ingestion and processing requirements can be met with the identified approach. - **Test D1:** Test the daily update data ingestion and processing.
Based upon the high-level architecture of your proposed future state architectur
If you're already using Azure, identify any resources you already have in place (Azure Active Directory, ExpressRoute, and others) that you can use during the POC. Also identify the Azure regions your organization uses. Now is a great time to identify the throughput of your ExpressRoute connection and to check with other business users that your POC can consume some of that throughput without adverse impact on production systems.
-For more information, see [Big data architectures](/architecture/data-guide/big-data.md).
+For more information, see [Big data architectures](/azure/architecture/data-guide/big-data/).
### Identify POC resources
Here are some examples of high-level tasks:
resources identified in the POC plan. 1. Load POC dataset: - Make data available in Azure by extracting from the source or by creating sample data in Azure. For more information, see:
- - [Transferring data to and from Azure](/architecture/databox/data-box-overview.md#use-cases)
+ - [Transferring data to and from Azure](/azure/databox/data-box-overview#use-cases)
- [Azure Data Box](https://azure.microsoft.com/services/databox/)
- - [Copy activity performance and scalability guide](/data-factory/copy-activity-performance#copy-performance-and-scalability-achievable-using-adf.md)
- - [Data loading strategies for dedicated SQL pool in Azure Synapse Analytics](../sql-data-warehouse/design-elt-data-loading.md)
+ - [Copy activity performance and scalability guide](/azure/data-factory/copy-activity-performance#copy-performance-and-scalability-achievable-using-adf)
+ - [Data loading strategies for dedicated SQL pool in Azure Synapse Analytics](/azure/synapse-analytics/sql-data-warehouse/design-elt-data-loading)
- [Bulk load data using the COPY statement](../sql-data-warehouse/quickstart-bulk-load-copy-tsql.md?view=azure-sqldw-latest&preserve-view=true) - Test the dedicated connector for the Spark pool and the dedicated SQL pool. 1. Migrate existing code to the Spark pool:
virtual-desktop Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/authentication.md
In this article, we'll give you a brief overview of what kinds of identities and
Azure Virtual desktop supports different types of identities depending on which configuration you choose. This section explains which identities you can use for each configuration.
-### On-premise identity
+### On-premises identity
Since users must be discoverable through Azure Active Directory (Azure AD) to access Azure Virtual Desktop, user identities that exist only in Active Directory Domain Services (AD DS) are not supported. This includes standalone Active Directory deployments with Active Directory Federation Services (AD FS).
virtual-desktop Create Profile Container Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-profile-container-azure-ad.md
Previously updated : 06/03/2022 Last updated : 06/13/2022 # Create a profile container with Azure Files and Azure Active Directory (preview)
The user accounts must be [hybrid user identities](../active-directory/hybrid/wh
To assign Azure Role-Based Access Control (RBAC) permissions for the Azure file share to a user group, you must create the group in Active Directory and sync it to Azure AD.
+You must disable multi-factor authentication (MFA) on the Azure AD app representing the storage account.
+ > [!IMPORTANT] > This feature is currently only supported in the Azure Public cloud.
You can configure the API permissions from the [Azure portal](https://portal.azu
11. Select **Add permissions** at the bottom of the page. 12. Select **Grant admin consent for "DirectoryName"**.
+### Disable multi-factor authentication on the storage account
+
+Azure AD Kerberos doesn't support using MFA to access Azure Files shares configured with Azure AD Kerberos. You must exclude the Azure AD app representing your storage account from your MFA conditional access policies if they apply to all apps. The storage account app should have the same name as the storage account in the conditional access exclusion list.
+
+> [!IMPORTANT]
+> If you don't exclude MFA policies from the storage account app, the FSLogix profiles won't be able to attach. Trying to map the file share using *net use* will result in an error message that says "System error 1327: Account restrictions are preventing this user from signing in. For example: blank passwords aren't allowed, sign-in times are limited, or a policy restriction has been enforced."
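If you script the exclusion, one way to confirm that the storage account app exists and to capture its application ID is an Azure CLI lookup by display name. This is a minimal sketch, assuming the storage account is named `mystorageacct`; the app in your tenant should carry the same name as your storage account.

```bash
# Look up the Azure AD app (service principal) that represents the storage account.
# Replace mystorageacct with your storage account name.
az ad sp list --display-name "mystorageacct" \
  --query "[].{displayName:displayName, appId:appId}" --output table
```

Add the app returned here to the exclusion list of the conditional access policies that enforce MFA.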
+ ## Configure your Azure Files share To get started, [create an Azure Files share](../storage/files/storage-how-to-create-file-share.md#create-a-file-share) under your storage account to store your FSLogix profiles if you haven't already.
virtual-machines Dcv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dcv3-series.md
Dcsv3-series instances run on a 3rd Generation Intel&reg; Xeon Scalable Processo
- [Memory Preserving Updates](maintenance-and-updates.md): Not supported - [VM Generation Support](generation-2.md): Generation 2 - [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported-- [Ephemeral OS Disks](ephemeral-os-disks.md): Supported
+- [Ephemeral OS Disks](ephemeral-os-disks.md): Supported for DCdsv3-series
- [Ultra-Disk Storage](disks-enable-ultra-ssd.md): Supported - [Azure Kubernetes Service](../aks/intro-kubernetes.md): Supported (CLI provisioning only) - [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported
virtual-machines Ebdsv5 Ebsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ebdsv5-ebsv5-series.md
Ebsv5-series sizes run on the Intel® Xeon® Platinum 8272CL (Ice Lake). These V
- [Ephemeral OS Disks](ephemeral-os-disks.md): Not supported - Nested virtualization: Supported
-| Size | vCPU | Memory: GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached storage throughput: IOPS / MBps | Max burst uncached disk throughput: IOPS/MBp | Max NICs | Network bandwidth |
-| | | | | | | | | |
-| Standard_E2bs_v5 | 2 | 16 | 4 | 9000/125 | 5500/156 | 10000/1200 | 2 | 10000 |
-| Standard_E4bs_v5 | 4 | 32 | 8 | 19000/250 | 11000/350 | 20000/1200 | 2 | 10000 |
-| Standard_E8bs_v5 | 8 | 64 | 16 | 38000/500 | 22000/625 | 40000/1200 | 4 | 10000 |
-| Standard_E16bs_v5 | 16 | 128 | 32 | 75000/1000 | 44000/1250 | 64000/2000 | 8 | 12500
-| Standard_E32bs_v5 | 32 | 256 | 32 | 150000/1250 | 88000/2500 | 120000/4000 | 8 | 16000 |
-| Standard_E48bs_v5 | 48 | 384 | 32 | 225000/2000 | 120000/4000 | 120000/4000 | 8 | 16000 |
-| Standard_E64bs_v5 | 64 | 512 | 32 | 300000/4000 | 120000/4000 | 120000/4000 | 8 | 20000 |
+| Size | vCPU | Memory: GiB | Max data disks | Max uncached storage throughput: IOPS / MBps | Max burst uncached disk throughput: IOPS / MBps | Max NICs | Network bandwidth |
+| | | | | | | | |
+| Standard_E2bs_v5 | 2 | 16 | 4 | 5500/156 | 10000/1200 | 2 | 10000 |
+| Standard_E4bs_v5 | 4 | 32 | 8 | 11000/350 | 20000/1200 | 2 | 10000 |
+| Standard_E8bs_v5 | 8 | 64 | 16 | 22000/625 | 40000/1200 | 4 | 10000 |
+| Standard_E16bs_v5 | 16 | 128 | 32 | 44000/1250 | 64000/2000 | 8 | 12500 |
+| Standard_E32bs_v5 | 32 | 256 | 32 | 88000/2500 | 120000/4000 | 8 | 16000 |
+| Standard_E48bs_v5 | 48 | 384 | 32 | 120000/4000 | 120000/4000 | 8 | 16000 |
+| Standard_E64bs_v5 | 64 | 512 | 32 | 120000/4000 | 120000/4000 | 8 | 20000 |
> [!NOTE] > Accelerated networking is required and turned on by default on all Ebsv5 VMs.
virtual-machines Prepare For Upload Vhd Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/prepare-for-upload-vhd-image.md
# Prepare a Windows VHD or VHDX to upload to Azure
-**Applies to:** :heavy_check_mark: Windows VMs
+**Applies to:** :heavy_check_mark: Windows VMs
Before you upload a Windows virtual machine (VM) from on-premises to Azure, you must prepare the virtual hard disk (VHD or VHDX). Azure supports both generation 1 and generation 2 VMs that are in
After the SFC scan completes, install Windows Updates and restart the computer.
1. For VMs with legacy operating systems (Windows Server 2012 R2 or Windows 8.1 and below), make sure the latest Hyper-V Integration Component Services are installed. For more information, see [Hyper-V integration components update for Windows VM](https://support.microsoft.com/topic/hyper-v-integration-components-update-for-windows-virtual-machines-8a74ffad-576e-d5a0-5a2f-d6fb2594f990). > [!NOTE]
-> In a scenario where VMs are to be set up with a disaster recovery solution between the on-premise VMware server and Azure, the Hyper-V Integration Component Services can't be used. If thatΓÇÖs the case, please contact the VMware support to migrate the VM to Azure and make it co-reside in VMware server.
+> In a scenario where VMs are to be set up with a disaster recovery solution between the on-premises VMware server and Azure, the Hyper-V Integration Component Services can't be used. If that's the case, contact VMware support to migrate the VM to Azure and make it co-reside on the VMware server.
## Check the Windows services
Make sure the VM is healthy, secure, and RDP accessible:
```powershell netstat.exe -anob ```
-
+ The following is an example. ```powershell
In particular, Sysprep requires the drives to be fully decrypted before executio
### Generalize a VHD >[!NOTE]
-> If you're creating a generalized image from an existing Azure VM, we recommend to remove the VM extensions
+> If you're creating a generalized image from an existing Azure VM, we recommend removing the VM extensions
> before running the sysprep. >[!NOTE]
Use one of the methods in this section to convert and resize your virtual disk t
1. Resize the virtual disk to meet Azure requirements: 1. Disks in Azure must have a virtual size aligned to 1 MiB. If your VHD is a fraction of 1 MiB, you'll need to resize the disk to a multiple of 1 MiB. Disks that are fractions of a MiB cause errors when creating images from the uploaded VHD. To verify the size you can use the PowerShell [Get-VHD](/powershell/module/hyper-v/get-vhd) cmdlet to show "Size", which must be a multiple of 1 MiB in Azure, and "FileSize", which will be equal to "Size" plus 512 bytes for the VHD footer.
-
+ ```powershell $vhd = Get-VHD -Path C:\test\MyNewVM.vhd $vhd.Size % 1MB
Use one of the methods in this section to convert and resize your virtual disk t
$vhd.FileSize - $vhd.Size 512 ```
-
- 1. The maximum size allowed for the OS VHD with a generation 1 VM is 2,048 GiB (2 TiB),
+
+   1. The maximum size allowed for the OS VHD with a generation 1 VM is 2,048 GiB (2 TiB).
1. The maximum size for a data disk is 32,767 GiB (32 TiB). > [!NOTE]
-> - If you are preparing a Windows OS disk after you convert to a fixed disk and resize if needed, create a VM that uses the disk. Start and sign in to the VM and continue with the sections in this article to finish preparing it for uploading.
+> - If you are preparing a Windows OS disk after you convert to a fixed disk and resize if needed, create a VM that uses the disk. Start and sign in to the VM and continue with the sections in this article to finish preparing it for uploading.
> - If you are preparing a data disk you may stop with this section and proceed to uploading your disk. ### Use Hyper-V Manager to convert the disk
virtual-machines Configure Azure Oci Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/configure-azure-oci-networking.md
To create an [integrated multi-cloud experience](oracle-oci-overview.md), Microsoft and Oracle offer direct interconnection between Azure and Oracle Cloud Infrastructure (OCI) through [ExpressRoute](../../../expressroute/expressroute-introduction.md) and [FastConnect](https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/fastconnectoverview.htm). Through the ExpressRoute and FastConnect interconnection, customers can experience low latency, high throughput, private direct connectivity between the two clouds. > [!IMPORTANT]
-> Oracle will certify these applications to run in Azure when using the Azure / Oracle Cloud interconnect solution by May 2020.
+> Oracle has certified these applications to run in Azure when using the Azure / Oracle Cloud interconnect solution:
> * E-Business Suite > * JD Edwards EnterpriseOne > * PeopleSoft
virtual-machines Businessobjects Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/businessobjects-deployment-guide.md
In below figure, architecture of large-scale deployment of SAP BOBI Platform on
> SMB Protocol for Azure Files is generally available, but NFS Protocol support for Azure Files is currently in preview. For more information, see [NFS 4.1 support for Azure Files is now in preview](https://azure.microsoft.com/blog/nfs-41-support-for-azure-files-is-now-in-preview/) - CMS & audit database
-
+ SAP BOBI Platform requires a database to store its system data, which is referred to as the CMS database. It's used to store BI platform information such as user, server, folder, document, configuration, and authentication details. Azure offers [MySQL Database](https://azure.microsoft.com/services/mysql/) and [Azure SQL database](https://azure.microsoft.com/services/sql-database/) Database-as-a-Service (DBaaS) offerings that can be used for the CMS database and the Audit database. Because these are PaaS offerings, customers don't have to worry about operation, availability, and maintenance of the databases. Customers can also choose their own database for the CMS and Audit repositories based on their business needs.
Azure SQL Database offers the following three purchasing models:
- Serverless The serverless model automatically scales compute based on workload demand, and bills for the amount of compute used per second. The serverless compute tier automatically pauses databases during inactive periods when only storage is billed, and automatically resumes databases when activity returns. For more information, refer [Resource options and limits](/azure/azure-sql/database/resource-limits-vcore-single-databases#general-purposeserverless-computegen5).
-
+ It's more suitable for intermittent, unpredictable usage with low average compute utilization over time. This model can therefore be used for non-production SAP BOBI deployments. > [!Note]
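As a rough illustration of the serverless model, the following Azure CLI sketch creates a General Purpose serverless database that auto-pauses after 60 minutes of inactivity. The resource names are placeholders and the sizing isn't an SAP BOBI recommendation.

```bash
# Create a serverless Azure SQL database (Gen5, 2 vCores max) for a non-production CMS repository.
az sql db create \
  --resource-group rg-bobi-nonprod \
  --server sql-bobi-nonprod \
  --name bobi-cms \
  --edition GeneralPurpose \
  --compute-model Serverless \
  --family Gen5 \
  --capacity 2 \
  --auto-pause-delay 60
```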
Azure Storage has different Storage types available for customers and details fo
### Networking
-SAP BOBI is a reporting and analytics BI platform that doesnΓÇÖt hold any business data. So the system is connected to other database servers from where it fetches all the data and provide insight to users. Azure provides a network infrastructure, which allows the mapping of all scenarios that can be realized with SAP BI Platform like connecting to on-premise system, systems in different virtual network and others. For more information check [Microsoft Azure Networking for SAP Workload](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/virtual-machines/workloads/sap/planning-guide.md#microsoft-azure-networking).
+SAP BOBI is a reporting and analytics BI platform that doesn't hold any business data. The system connects to other database servers, from which it fetches all the data and provides insights to users. Azure provides a network infrastructure that supports all scenarios that can be realized with SAP BI Platform, such as connecting to on-premises systems, to systems in different virtual networks, and others. For more information, see [Microsoft Azure Networking for SAP Workload](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/virtual-machines/workloads/sap/planning-guide.md#microsoft-azure-networking).
For Database-as-a-Service offering, any newly created database (Azure SQL Database or Azure Database for MySQL) has a firewall that blocks all external connections. To allow access to the DBaaS service from BI Platform virtual machines, you need to specify one or more server-level firewall rules to enable access to your DBaaS server. For more information, see [Firewall rules](../../../mysql/concepts-firewall-rules.md) for Azure Database for MySQL and [Network Access Controls](/azure/azure-sql/database/network-access-controls-overview) section for Azure SQL database.
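For illustration, here's a minimal Azure CLI sketch of a server-level firewall rule on an Azure SQL logical server; the resource names and the public IP range of the BI Platform virtual machines are placeholders.

```bash
# Allow the public IP range used by the BI Platform VMs to reach the Azure SQL logical server.
az sql server firewall-rule create \
  --resource-group rg-bobi \
  --server sql-bobi-cms \
  --name allow-bobi-vms \
  --start-ip-address 203.0.113.10 \
  --end-ip-address 203.0.113.20
```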
virtual-machines High Availability Guide Rhel Multi Sid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-multi-sid.md
Title: Azure VMs high availability for SAP NW on RHEL multi-SID guide | Microsoft Docs
-description: Establish high availability for SAP NW on Azure virtual machines (VMs) RHEL multi-SID.
+ Title: Azure VMs high availability for SAP NW on RHEL multi-SID
+description: Learn how to deploy SAP NetWeaver highly available systems in a two node cluster on Azure VMs with Red Hat Enterprise Linux for SAP applications.
documentationcenter: saponazure
editor: ''
tags: azure-resource-manager keywords: '' -++ vm-windows Previously updated : 03/28/2022 Last updated : 06/13/2022
-# High availability for SAP NetWeaver on Azure VMs on Red Hat Enterprise Linux for SAP applications multi-SID guide
+# High availability for SAP NetWeaver on Azure VMs on Red Hat Enterprise Linux for SAP applications multi-SID
[dbms-guide]:dbms-guide.md [deployment-guide]:deployment-guide.md
[sap-hana-ha]:sap-hana-high-availability-rhel.md [glusterfs-ha]:high-availability-guide-rhel-glusterfs.md
-This article describes how to deploy multiple SAP NetWeaver highly available systems(that is, multi-SID) in a two node cluster on Azure VMs with Red Hat Enterprise Linux for SAP applications.
+This article describes how to deploy multiple SAP NetWeaver highly available systems (multi-SID) in a two node cluster on Azure VMs with Red Hat Enterprise Linux for SAP applications.
-In the example configurations, installation commands etc. three SAP NetWeaver 7.50 systems are deployed in a single, two node high availability cluster. The SAP systems SIDs are:
-* **NW1**: ASCS instance number **00** and virtual host name **msnw1ascs**; ERS instance number **02** and virtual host name **msnw1ers**.
-* **NW2**: ASCS instance number **10** and virtual hostname **msnw2ascs**; ERS instance number **12** and virtual host name **msnw2ers**.
-* **NW3**: ASCS instance number **20** and virtual hostname **msnw3ascs**; ERS instance number **22** and virtual host name **msnw3ers**.
+In the example configurations, three SAP NetWeaver 7.50 systems are deployed in a single, two node high availability cluster. The SAP systems SIDs are:
-The article doesn't cover the database layer and the deployment of the SAP NFS shares.
-In the examples in this article, we are using [Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-create-volumes.md) volume **sapMSID** for the NFS shares, assuming that the volume is already deployed. We are also assuming, that the Azure NetApp Files volume is deployed with NFSv3 protocol and that the following file paths exist for the cluster resources for the ASCS and ERS instances of SAP systems NW1, NW2 and NW3:
+* `NW1`: ASCS instance number 00 and virtual host name `msnw1ascs`. ERS instance number 02 and virtual host name `msnw1ers`.
+* `NW2`: ASCS instance number 10 and virtual hostname `msnw2ascs`. ERS instance number 12 and virtual host name `msnw2ers`.
+* `NW3`: ASCS instance number 20 and virtual hostname `msnw3ascs`. ERS instance number 22 and virtual host name `msnw3ers`.
-* volume sapMSID (nfs://10.42.0.4/sapmnt<b>NW1</b>)
-* volume sapMSID (nfs://10.42.0.4/usrsap<b>NW1</b>ascs)
-* volume sapMSID (nfs://10.42.0.4/usrsap<b>NW1</b>sys)
-* volume sapMSID (nfs://10.42.0.4/usrsap<b>NW1</b>ers)
-* volume sapMSID (nfs://10.42.0.4/sapmnt<b>NW2</b>)
-* volume sapMSID (nfs://10.42.0.4/usrsap<b>NW2</b>ascs)
-* volume sapMSID (nfs://10.42.0.4/usrsap<b>NW2</b>sys)
-* volume sapMSID (nfs://10.42.0.4/usrsap<b>NW2</b>ers)
-* volume sapMSID (nfs://10.42.0.4/sapmnt<b>NW3</b>)
-* volume sapMSID (nfs://10.42.0.4/usrsap<b>NW3</b>ascs)
-* volume sapMSID (nfs://10.42.0.4/usrsap<b>NW3</b>sys)
-* volume sapMSID (nfs://10.42.0.4/usrsap<b>NW3</b>ers)
+The article doesn't cover the database layer and the deployment of the SAP NFS shares.
-Before you begin, refer to the following SAP Notes and papers first:
+The examples in this article use the [Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-create-volumes.md) volume `sapMSID` for the NFS shares, assuming that the volume is already deployed. The examples assume that the Azure NetApp Files volume is deployed with NFSv3 protocol. They use the following file paths for the cluster resources for the ASCS and ERS instances of SAP systems `NW1`, `NW2`, and `NW3`:
+
+* volume sapMSID (nfs://10.42.0.4/sapmnt*NW1*)
+* volume sapMSID (nfs://10.42.0.4/usrsap*NW1*ascs)
+* volume sapMSID (nfs://10.42.0.4/usrsap*NW1*sys)
+* volume sapMSID (nfs://10.42.0.4/usrsap*NW1*ers)
+* volume sapMSID (nfs://10.42.0.4/sapmnt*NW2*)
+* volume sapMSID (nfs://10.42.0.4/usrsap*NW2*ascs)
+* volume sapMSID (nfs://10.42.0.4/usrsap*NW2*sys)
+* volume sapMSID (nfs://10.42.0.4/usrsap*NW2*ers)
+* volume sapMSID (nfs://10.42.0.4/sapmnt*NW3*)
+* volume sapMSID (nfs://10.42.0.4/usrsap*NW3*ascs)
+* volume sapMSID (nfs://10.42.0.4/usrsap*NW3*sys)
+* volume sapMSID (nfs://10.42.0.4/usrsap*NW3*ers)
+
+Before you begin, refer to the following SAP Notes and papers:
* SAP Note [1928533], which has:
- * List of Azure VM sizes that are supported for the deployment of SAP software
- * Important capacity information for Azure VM sizes
- * Supported SAP software, and operating system (OS) and database combinations
- * Required SAP kernel version for Windows and Linux on Microsoft Azure
-* [Azure NetApp Files documentation][anf-azure-doc]
-* SAP Note [2015553] lists prerequisites for SAP-supported SAP software deployments in Azure.
-* SAP Note [2002167] has recommended OS settings for Red Hat Enterprise Linux
-* SAP Note [2009879] has SAP HANA Guidelines for Red Hat Enterprise Linux
+ * List of Azure VM sizes that are supported for the deployment of SAP software.
+ * Important capacity information for Azure VM sizes.
+ * Supported SAP software, and operating system (OS) and database combinations.
+ * Required SAP kernel version for Windows and Linux on Microsoft Azure.
+* [Azure NetApp Files documentation][anf-azure-doc].
+* SAP Note [2015553] has prerequisites for SAP-supported SAP software deployments in Azure.
+* SAP Note [2002167] has recommended OS settings for Red Hat Enterprise Linux.
+* SAP Note [2009879] has SAP HANA Guidelines for Red Hat Enterprise Linux.
* SAP Note [2178632] has detailed information about all monitoring metrics reported for SAP in Azure. * SAP Note [2191498] has the required SAP Host Agent version for Linux in Azure. * SAP Note [2243692] has information about SAP licensing on Linux in Azure.
-* SAP Note [1999351] has additional troubleshooting information for the Azure Enhanced Monitoring Extension for SAP.
+* SAP Note [1999351] has more troubleshooting information for the Azure Enhanced Monitoring Extension for SAP.
* [SAP Community WIKI](https://wiki.scn.sap.com/wiki/display/HOME/SAPonLinuxNotes) has all required SAP Notes for Linux.
-* [Azure Virtual Machines planning and implementation for SAP on Linux][planning-guide]
-* [Azure Virtual Machines deployment for SAP on Linux][deployment-guide]
-* [Azure Virtual Machines DBMS deployment for SAP on Linux][dbms-guide]
-* [SAP Netweaver in pacemaker cluster](https://access.redhat.com/articles/3150081)
-* General RHEL documentation
+* [Azure Virtual Machines planning and implementation for SAP on Linux][planning-guide].
+* [Azure Virtual Machines deployment for SAP on Linux][deployment-guide].
+* [Azure Virtual Machines DBMS deployment for SAP on Linux][dbms-guide].
+* [SAP Netweaver in pacemaker cluster](https://access.redhat.com/articles/3150081).
+* General RHEL documentation:
* [High Availability Add-On Overview](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/index) * [High Availability Add-On Administration](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/index) * [High Availability Add-On Reference](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/index)
Before you begin, refer to the following SAP Notes and papers first:
## Overview
-The virtual machines, that participate in the cluster must be sized to be able to run all resources, in case failover occurs. Each SAP SID can fail over independent from each other in the multi-SID high availability cluster.
+The virtual machines that participate in the cluster must be sized to be able to run all resources in case failover occurs. Each SAP SID can fail over independently from each other in the multi-SID high availability cluster.
-To achieve high availability, SAP NetWeaver requires highly available shares. In this documentation, we present the examples with the SAP shares deployed on [Azure NetApp Files NFS volumes](../../../azure-netapp-files/azure-netapp-files-create-volumes.md). It is also possible to host the shares on highly available [GlusterFS cluster](./high-availability-guide-rhel-glusterfs.md), which can be used by multiple SAP systems.
+To achieve high availability, SAP NetWeaver requires highly available shares. This article shows examples with the SAP shares deployed on [Azure NetApp Files NFS volumes](../../../azure-netapp-files/azure-netapp-files-create-volumes.md). You could instead host the shares on a highly available [GlusterFS cluster](./high-availability-guide-rhel-glusterfs.md), which can be used by multiple SAP systems.
-![SAP NetWeaver High Availability overview](./media/high-availability-guide-rhel/ha-rhel-multi-sid.png)
+![Diagram shows S A P NetWeaver High Availability overview with Pacemaker cluster and S A P N F S shares.](./media/high-availability-guide-rhel/ha-rhel-multi-sid.png)
> [!IMPORTANT]
-> The support for multi-SID clustering of SAP ASCS/ERS with Red Hat Linux as guest operating system in Azure VMs is limited to **five** SAP SIDs on the same cluster. Each new SID increases the complexity. A mix of SAP Enqueue Replication Server 1 and Enqueue Replication Server 2 on the same cluster is **not supported**. Multi-SID clustering describes the installation of multiple SAP ASCS/ERS instances with different SIDs in one Pacemaker cluster. Currently multi-SID clustering is only supported for ASCS/ERS.
+> The support for multi-SID clustering of SAP ASCS/ERS with Red Hat Linux as guest operating system in Azure VMs is limited to *five* SAP SIDs on the same cluster. Each new SID increases the complexity. A mix of SAP Enqueue Replication Server 1 and Enqueue Replication Server 2 on the same cluster is not supported. Multi-SID clustering describes the installation of multiple SAP ASCS/ERS instances with different SIDs in one Pacemaker cluster. Currently multi-SID clustering is only supported for ASCS/ERS.
> [!TIP]
-> The multi-SID clustering of SAP ASCS/ERS is a solution with higher complexity. It is more complex to implement. It also involves higher administrative effort, when executing maintenance activities (like OS patching). Before you start the actual implementation, take time to carefully plan out the deployment and all involved components like VMs, NFS mounts, VIPs, load balancer configurations and so on.
+> The multi-SID clustering of SAP ASCS/ERS is a solution with higher complexity. It is more complex to implement. It also involves higher administrative effort, when executing maintenance activities, like OS patching. Before you start the actual implementation, take time to carefully plan out the deployment and all involved components like VMs, NFS mounts, VIPs, load balancer configurations and so on.
-SAP NetWeaver ASCS, SAP NetWeaver SCS and SAP NetWeaver ERS use virtual hostname and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. We recommend using [Standard load balancer](../../../load-balancer/quickstart-load-balancer-standard-public-portal.md).
+SAP NetWeaver ASCS, SAP NetWeaver SCS, and SAP NetWeaver ERS use virtual hostname and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. We recommend using [Standard load balancer](../../../load-balancer/quickstart-load-balancer-standard-public-portal.md).
-* Frontend IP addresses for ASCS: 10.3.1.50 (NW1), 10.3.1.52 (NW2) and 10.3.1.54 (NW3)
-* Frontend IP addresses for ERS: 10.3.1.51 (NW1), 10.3.1.53 (NW2) and 10.3.1.55 (NW3)
-* Probe port 62000 for NW1 ASCS, 62010 for NW2 ASCS and 62020 for NW3 ASCS
-* Probe port 62102 for NW1 ASCS, 62112 for NW2 ASCS and 62122 for NW3 ASCS
+* Frontend IP addresses for ASCS: 10.3.1.50 (NW1), 10.3.1.52 (NW2), and 10.3.1.54 (NW3)
+* Frontend IP addresses for ERS: 10.3.1.51 (NW1), 10.3.1.53 (NW2), and 10.3.1.55 (NW3)
+* Probe port 62000 for NW1 ASCS, 62010 for NW2 ASCS, and 62020 for NW3 ASCS
+* Probe port 62102 for NW1 ERS, 62112 for NW2 ERS, and 62122 for NW3 ERS
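If you create these entries with the Azure CLI instead of the portal, the per-SID pattern looks roughly like the following sketch for the NW1 ASCS frontend IP and health probe. The resource group, load balancer, virtual network, and subnet names are placeholders, and the matching load-balancing rule is omitted.

```bash
# Add a frontend IP and a TCP health probe for the NW1 ASCS virtual IP on the internal load balancer.
az network lb frontend-ip create \
  --resource-group rg-sap-multisid \
  --lb-name lb-sap-multisid \
  --name nw1-ascs-frontend \
  --vnet-name vnet-sap \
  --subnet subnet-sap \
  --private-ip-address 10.3.1.50

az network lb probe create \
  --resource-group rg-sap-multisid \
  --lb-name lb-sap-multisid \
  --name nw1-ascs-probe \
  --protocol Tcp \
  --port 62000
```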
> [!IMPORTANT] > Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC.
-> [!Note]
-> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
+> [!NOTE]
+> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there is no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
> [!IMPORTANT]
-> Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter **net.ipv4.tcp_timestamps** to **0**. For details see [Load Balancer health probes](../../../load-balancer/load-balancer-custom-probe-overview.md).
+> Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps causes the health probes to fail. Set parameter `net.ipv4.tcp_timestamps` to 0. For more information, see [Load Balancer health probes](../../../load-balancer/load-balancer-custom-probe-overview.md).
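On the cluster nodes, a minimal sketch for applying and persisting this setting; the drop-in file name is arbitrary.

```bash
# Disable TCP timestamps on the running system.
sudo sysctl -w net.ipv4.tcp_timestamps=0

# Persist the setting across reboots (the file name is an example).
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/95-sap-lb.conf
sudo sysctl -p /etc/sysctl.d/95-sap-lb.conf
```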
## SAP shares
-SAP NetWeaver requires shared storage for the transport, profile directory, and so on. For highly available SAP system, it is important to have highly available shares. You will need to decide on the architecture for your SAP shares. One option is to deploy the shares on [Azure NetApp Files NFS volumes](../../../azure-netapp-files/azure-netapp-files-create-volumes.md). With Azure NetApp Files, you will get built-in high availability for the SAP NFS shares.
+SAP NetWeaver requires shared storage for the transport, profile directory, and so on. For highly available SAP system, it's important to have highly available shares. You need to decide on the architecture for your SAP shares. One option is to deploy the shares on [Azure NetApp Files NFS volumes](../../../azure-netapp-files/azure-netapp-files-create-volumes.md). With Azure NetApp Files, you get built-in high availability for the SAP NFS shares.
Another option is to build [GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver](./high-availability-guide-rhel-glusterfs.md), which can be shared between multiple SAP systems. ## Deploy the first SAP system in the cluster
-Now that you have decided on the architecture for the SAP shares, deploy the first SAP system in the cluster, following the corresponding documentation.
+After you decide on the architecture for the SAP shares, deploy the first SAP system in the cluster, following the corresponding documentation.
-* If using Azure NetApp Files NFS volumes, follow [Azure VMs high availability for SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files for SAP applications](./high-availability-guide-rhel-netapp-files.md)
-* If using GlusterFS cluster, follow [GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver](./high-availability-guide-rhel-glusterfs.md).
+* If you use Azure NetApp Files NFS volumes, follow [Azure VMs high availability for SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files for SAP applications](./high-availability-guide-rhel-netapp-files.md).
+* If you use GlusterFS cluster, follow [GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver](./high-availability-guide-rhel-glusterfs.md).
-The documents listed above will guide you through the steps to prepare the necessary infrastructure, build the cluster, prepare the OS for running the SAP application.
+These articles guide you through the steps to prepare the necessary infrastructure, build the cluster, and prepare the OS for running the SAP application.
> [!TIP]
-> Always test the fail over functionality of the cluster, after the first system is deployed, before adding the additional SAP SIDs to the cluster. That way you will know that the cluster functionality works, before adding the complexity of additional SAP systems to the cluster.
+> Always test the failover functionality of the cluster after the first system is deployed, before adding the additional SAP SIDs to the cluster. That way, you know that the cluster functionality works, before adding the complexity of additional SAP systems to the cluster.
+
+## Deploy more SAP systems in the cluster
-## Deploy additional SAP systems in the cluster
+This example assumes that system `NW1` was already deployed in the cluster. This example shows how to deploy SAP systems `NW2` and `NW3` in the cluster.
-In this example, we assume that system **NW1** was already deployed in the cluster. We will show how to deploy in the cluster SAP systems **NW2** and **NW3**.
+The following items are prefixed with:
-The following items are prefixed with either **[A]** - applicable to all nodes, **[1]** - only applicable to node 1 or **[2]** - only applicable to node 2.
+* **[A]** Applicable to all nodes
+* **[1]** Only applicable to node 1
+* **[2]** Only applicable to node 2
-### Prerequisites
+### Prerequisites
> [!IMPORTANT]
-> Before following the instructions to deploy additional SAP systems in the cluster, follow the instructions to deploy the first SAP system in the cluster, as there are steps which are only necessary during the first system deployment.
+> Before following the instructions to deploy additional SAP systems in the cluster, deploy the first SAP system in the cluster. There are steps which are only necessary during the first system deployment.
+
+This article assumes that:
-This documentation assumes that:
* The Pacemaker cluster is already configured and running. * At least one SAP system (ASCS / ERS instance) is already deployed and is running in the cluster. * The cluster failover functionality has been tested.
This documentation assumes that:
### Prepare for SAP NetWeaver Installation
-1. Add configuration for the newly deployed system (that is, **NW2**, **NW3**) to the existing Azure Load Balancer, following the instructions [Deploy Azure Load Balancer manually via Azure portal](./high-availability-guide-rhel-netapp-files.md#deploy-linux-manually-via-azure-portal). Adjust the IP addresses, health probe ports, load-balancing rules for your configuration.
+1. Add configuration for the newly deployed system (that is, `NW2` and `NW3`) to the existing Azure Load Balancer, following the instructions [Deploy Azure Load Balancer manually via Azure portal](./high-availability-guide-rhel-netapp-files.md#deploy-linux-manually-via-azure-portal). Adjust the IP addresses, health probe ports, and load-balancing rules for your configuration.
-2. **[A]** Setup name resolution for the additional SAP systems. You can either use DNS server or modify `/etc/hosts` on all nodes. This example shows how to use the `/etc/hosts` file. Adapt the IP addresses and the host names to your environment.
+2. **[A]** Set up name resolution for the other SAP systems. You can either use a DNS server or modify */etc/hosts* on all nodes. This example shows how to use the */etc/hosts* file. Adapt the IP addresses and the host names to your environment.
- ```
+ ```cmd
sudo vi /etc/hosts # IP address of the load balancer frontend configuration for NW2 ASCS 10.3.1.52 msnw2ascs
This documentation assumes that:
10.3.1.53 msnw2ers # IP address of the load balancer frontend configuration for NW3 ERS 10.3.1.55 msnw3ers
- ```
+ ```
-3. **[A]** Create the shared directories for the additional **NW2** and **NW3** SAP systems that you are deploying to the cluster.
+3. **[A]** Create the shared directories for the `NW2` and `NW3` SAP systems that you're deploying to the cluster.
- ```
+ ```cmd
sudo mkdir -p /sapmnt/NW2 sudo mkdir -p /usr/sap/NW2/SYS sudo mkdir -p /usr/sap/NW2/ASCS10
This documentation assumes that:
sudo chattr +i /usr/sap/NW3/ERS22 ```
-4. **[A]** Add the mount entries for the /sapmnt/SID and /usr/sap/SID/SYS file systems for the additional SAP systems that you are deploying to the cluster. In this example **NW2** and **NW3**.
+4. **[A]** Add the mount entries for the */sapmnt/SID* and */usr/sap/SID/SYS* file systems for the other SAP systems that you're deploying to the cluster. In this example, it's `NW2` and `NW3`.
- Update file `/etc/fstab` with the file systems for the additional SAP systems that you are deploying to the cluster.
+ Update file `/etc/fstab` with the file systems for the other SAP systems that you're deploying to the cluster.
- * If using Azure NetApp Files, follow the instructions on the [Azure VMs high availability for SAP NW on RHEL with Azure NetApp Files](./high-availability-guide-rhel-netapp-files.md#prepare-for-sap-netweaver-installation) page
- * If using GlusterFS cluster, follow the instructions on the [Azure VMs high availability for SAP NW on RHEL](./high-availability-guide-rhel.md#prepare-for-sap-netweaver-installation) page
+ * If using Azure NetApp Files, follow the instructions in [Azure VMs high availability for SAP NW on RHEL with Azure NetApp Files](./high-availability-guide-rhel-netapp-files.md#prepare-for-sap-netweaver-installation).
+ * If using GlusterFS cluster, follow the instructions in [Azure VMs high availability for SAP NW on RHEL](./high-availability-guide-rhel.md#prepare-for-sap-netweaver-installation).
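For example, the resulting */etc/fstab* entries for the `NW2` shares on the Azure NetApp Files volume might look like the following sketch. The export path placeholder and the NFSv3 mount options are assumptions; use the exact options from the linked guide.

```bash
# Sketch: append NFSv3 entries for the NW2 shares to /etc/fstab, then mount them.
# Replace <volume-path> with your Azure NetApp Files volume export path.
sudo tee -a /etc/fstab <<'EOF'
10.42.0.4:/<volume-path>/sapmntNW2     /sapmnt/NW2       nfs  rw,hard,vers=3  0  0
10.42.0.4:/<volume-path>/usrsapNW2sys  /usr/sap/NW2/SYS  nfs  rw,hard,vers=3  0  0
EOF
sudo mount -a
```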
### Install ASCS / ERS
-1. Create the virtual IP and health probe cluster resources for the ASCS instances of the additional SAP systems you are deploying to the cluster. The example shown here is for **NW2** and **NW3** ASCS, using NFS on Azure NetApp Files volumes with NFSv3 protocol.
+1. Create the virtual IP and health probe cluster resources for the ASCS instances of the other SAP systems you're deploying to the cluster. This example uses `NW2` and `NW3` ASCS, using NFS on Azure NetApp Files volumes with NFSv3 protocol.
- ```
+ ```cmd
sudo pcs resource create fs_NW2_ASCS Filesystem device='10.42.0.4:/sapMSIDR/usrsapNW2ascs' \ directory='/usr/sap/NW2/ASCS10' fstype='nfs' force_unmount=safe \ op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
This documentation assumes that:
--group g-NW3_ASCS ```
- Make sure the cluster status is ok and that all resources are started. It is not important on which node the resources are running.
+ Make sure the cluster status is ok and that all resources are started. It's not important on which node the resources are running.
-2. **[1]** Install SAP NetWeaver ASCS
+2. **[1]** Install SAP NetWeaver ASCS.
- Install SAP NetWeaver ASCS as root, using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the ASCS. For example, for system **NW2**, the virtual hostname is <b>msnw2ascs</b>, <b>10.3.1.52</b> and the instance number that you used for the probe of the load balancer, for example <b>10</b>. For system **NW3**, the virtual hostname is <b>msnw3ascs</b>, <b>10.3.1.54</b> and the instance number that you used for the probe of the load balancer, for example <b>20</b>. Note down on which cluster node you installed ASCS for each SAP SID.
+   Install SAP NetWeaver ASCS as root, using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the ASCS. For example, for system `NW2`, the virtual hostname is `msnw2ascs`, the IP address is `10.3.1.52`, and the instance number used for the load balancer probe is `10`. For system `NW3`, the virtual hostname is `msnw3ascs`, the IP address is `10.3.1.54`, and the instance number used for the load balancer probe is `20`. Note down on which cluster node you installed ASCS for each SAP SID.
- You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst. You can use parameter SAPINST_USE_HOSTNAME to install SAP, using virtual host name.
+ You can use the `sapinst` parameter `SAPINST_REMOTE_ACCESS_USER` to allow a non-root user to connect to sapinst. You can use parameter `SAPINST_USE_HOSTNAME` to install SAP, using virtual host name.
- ```
+ ```cmd
# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the command again sudo firewall-cmd --zone=public --add-port=4237/tcp sudo swpm/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname ```
- If the installation fails to create a subfolder in /usr/sap/**SID**/ASCS**Instance#**, try setting the owner to **sid**adm and group to sapsys of the ASCS**Instance#** and retry.
+   If the installation fails to create a subfolder in */usr/sap/\<SID>/ASCS\<Instance#>*, try setting the owner of the ASCS\<Instance#> folder to \<sid>adm and the group to sapsys, and then retry.
-3. **[1]** Create a virtual IP and health-probe cluster resources for the ERS instance of the additional SAP system you are deploying to the cluster. The example shown here is for **NW2** and **NW3** ERS, using NFS on Azure NetApp Files volumes with NFSv3 protocol.
+3. **[1]** Create a virtual IP and health-probe cluster resources for the ERS instance of the other SAP system you're deploying to the cluster. This example is for `NW2` and `NW3` ERS, using NFS on Azure NetApp Files volumes with NFSv3 protocol.
- ```
+ ```cmd
sudo pcs resource create fs_NW2_AERS Filesystem device='10.42.0.4:/sapMSIDR/usrsapNW2ers' \ directory='/usr/sap/NW2/ERS12' fstype='nfs' force_unmount=safe \ op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
This documentation assumes that:
Make sure the cluster status is ok and that all resources are started.
- Next, make sure that the resources of the newly created ERS group, are running on the cluster node, opposite to the cluster node where the ASCS instance for the same SAP system was installed. For example, if NW2 ASCS was installed on `rhelmsscl1`, then make sure the NW2 ERS group is running on `rhelmsscl2`. You can migrate the NW2 ERS group to `rhelmsscl2` by running the following command for one of the cluster resources in the group:
+ Next, make sure that the resources of the newly created ERS group are running on the cluster node, opposite to the cluster node where the ASCS instance for the same SAP system was installed. For example, if NW2 ASCS was installed on `rhelmsscl1`, then make sure the NW2 ERS group is running on `rhelmsscl2`. You can migrate the NW2 ERS group to `rhelmsscl2` by running the following command for one of the cluster resources in the group:
- ```
- pcs resource move fs_NW2_AERS rhelmsscl2
+ ```cmd
+ pcs resource move fs_NW2_AERS rhelmsscl2
```
-4. **[2]** Install SAP NetWeaver ERS
+4. **[2]** Install SAP NetWeaver ERS.
- Install SAP NetWeaver ERS as root on the other node, using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the ERS. For example for system **NW2**, the virtual host name will be <b>msnw2ers</b>, <b>10.3.1.53</b> and the instance number that you used for the probe of the load balancer, for example <b>12</b>. For system **NW3**, the virtual host name <b>msnw3ers</b>, <b>10.3.1.55</b> and the instance number that you used for the probe of the load balancer, for example <b>22</b>.
+   Install SAP NetWeaver ERS as root on the other node, using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the ERS. For example, for system `NW2`, the virtual host name is `msnw2ers`, the IP address is `10.3.1.53`, and the instance number used for the load balancer probe is `12`. For system `NW3`, the virtual host name is `msnw3ers`, the IP address is `10.3.1.55`, and the instance number used for the load balancer probe is `22`.
- You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst. You can use parameter SAPINST_USE_HOSTNAME to install SAP, using virtual host name.
+ You can use the `sapinst` parameter `SAPINST_REMOTE_ACCESS_USER` to allow a non-root user to connect to sapinst. You can use parameter `SAPINST_USE_HOSTNAME` to install SAP, using virtual host name.
- ```
+ ```cmd
# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the command again sudo firewall-cmd --zone=public --add-port=4237/tcp sudo swpm/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname ``` > [!NOTE]
- > Use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions correctly and the installation will fail.
+ > Use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions correctly and the installation fails.
- If the installation fails to create a subfolder in /usr/sap/**NW2**/ERS**Instance#**, try setting the owner to **sid**adm and the group to sapsys of the ERS**Instance#** folder and retry.
+   If the installation fails to create a subfolder in */usr/sap/NW2/ERS\<Instance#>*, try setting the owner of the ERS\<Instance#> folder to \<sid>adm and the group to sapsys, and then retry.
- If it was necessary for you to migrate the ERS group of the newly deployed SAP system to a different cluster node, don't forget to remove the location constraint for the ERS group. You can remove the constraint by running the following command (the example is given for SAP systems **NW2** and **NW3**). Make sure to remove the temporary constraints for the same resource you used in the command to move the ERS cluster group.
+ If it was necessary for you to migrate the ERS group of the newly deployed SAP system to a different cluster node, don't forget to remove the location constraint for the ERS group. You can remove the constraint by running the following command. This example is given for SAP systems `NW2` and `NW3`. Make sure to remove the temporary constraints for the same resource you used in the command to move the ERS cluster group.
- ```
- pcs resource clear fs_NW2_AERS
- pcs resource clear fs_NW3_AERS
+ ```cmd
+ pcs resource clear fs_NW2_AERS
+ pcs resource clear fs_NW3_AERS
```
-5. **[1]** Adapt the ASCS/SCS and ERS instance profiles for the newly installed SAP system(s). The example shown below is for NW2. You will need to adapt the ASCS/SCS and ERS profiles for all SAP instances added to the cluster.
-
+5. **[1]** Adapt the ASCS/SCS and ERS instance profiles for the newly installed SAP systems. The example shown below is for `NW2`. You need to adapt the ASCS/SCS and ERS profiles for all SAP instances added to the cluster.
+ * ASCS/SCS profile
- ```
- sudo vi /sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs
+ ```cmd
+ sudo vi /sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs
- # Change the restart command to a start command
- #Restart_Program_01 = local $(_EN) pf=$(_PF)
- Start_Program_01 = local $(_EN) pf=$(_PF)
+ # Change the restart command to a start command
+ #Restart_Program_01 = local $(_EN) pf=$(_PF)
+ Start_Program_01 = local $(_EN) pf=$(_PF)
- # Add the keep alive parameter, if using ENSA1
- enque/encni/set_so_keepalive = true
- ```
+ # Add the keep alive parameter, if using ENSA1
+ enque/encni/set_so_keepalive = true
+ ```
- For both ENSA1 and ENSA2, make sure that the `keepalive` OS parameters are set as described in SAP note [1410736](https://launchpad.support.sap.com/#/notes/1410736).
+ For both ENSA1 and ENSA2, make sure that the `keepalive` OS parameters are set as described in SAP note [1410736](https://launchpad.support.sap.com/#/notes/1410736).
* ERS profile
- ```
- sudo vi /sapmnt/NW2/profile/NW2_ERS12_msnw2ers
+ ```cmd
+ sudo vi /sapmnt/NW2/profile/NW2_ERS12_msnw2ers
- # Change the restart command to a start command
- #Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
- Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
+ # Change the restart command to a start command
+ #Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
+ Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
- # remove Autostart from ERS profile
- # Autostart = 1
- ```
+ # remove Autostart from ERS profile
+ # Autostart = 1
+ ```
-6. **[A]** Update the /usr/sap/sapservices file
+6. **[A]** Update the */usr/sap/sapservices* file.
- To prevent the start of the instances by the sapinit startup script, all instances managed by Pacemaker must be commented out from `/usr/sap/sapservices` file. The example shown below is for SAP systems **NW2** and **NW3**.
+   To prevent the instances from being started by the *sapinit* startup script, all instances managed by Pacemaker must be commented out of the */usr/sap/sapservices* file. The example shown below is for SAP systems `NW2` and `NW3`.
- ```
- # On the node where ASCS was installed, comment out the line for the ASCS instacnes
- #LD_LIBRARY_PATH=/usr/sap/NW2/ASCS10/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW2/ASCS10/exe/sapstartsrv pf=/usr/sap/NW2/SYS/profile/NW2_ASCS10_msnw2ascs -D -u nw2adm
- #LD_LIBRARY_PATH=/usr/sap/NW3/ASCS20/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW3/ASCS20/exe/sapstartsrv pf=/usr/sap/NW3/SYS/profile/NW3_ASCS20_msnw3ascs -D -u nw3adm
+ ```cmd
+   # On the node where ASCS was installed, comment out the line for the ASCS instances
+ #LD_LIBRARY_PATH=/usr/sap/NW2/ASCS10/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW2/ASCS10/exe/sapstartsrv pf=/usr/sap/NW2/SYS/profile/NW2_ASCS10_msnw2ascs -D -u nw2adm
+ #LD_LIBRARY_PATH=/usr/sap/NW3/ASCS20/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW3/ASCS20/exe/sapstartsrv pf=/usr/sap/NW3/SYS/profile/NW3_ASCS20_msnw3ascs -D -u nw3adm
- # On the node where ERS was installed, comment out the line for the ERS instacnes
- #LD_LIBRARY_PATH=/usr/sap/NW2/ERS12/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW2/ERS12/exe/sapstartsrv pf=/usr/sap/NW2/ERS12/profile/NW2_ERS12_msnw2ers -D -u nw2adm
- #LD_LIBRARY_PATH=/usr/sap/NW3/ERS22/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW3/ERS22/exe/sapstartsrv pf=/usr/sap/NW3/ERS22/profile/NW3_ERS22_msnw3ers -D -u nw3adm
+   # On the node where ERS was installed, comment out the line for the ERS instances
+ #LD_LIBRARY_PATH=/usr/sap/NW2/ERS12/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW2/ERS12/exe/sapstartsrv pf=/usr/sap/NW2/ERS12/profile/NW2_ERS12_msnw2ers -D -u nw2adm
+ #LD_LIBRARY_PATH=/usr/sap/NW3/ERS22/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW3/ERS22/exe/sapstartsrv pf=/usr/sap/NW3/ERS22/profile/NW3_ERS22_msnw3ers -D -u nw3adm
``` 7. **[1]** Create the SAP cluster resources for the newly installed SAP system.
- If using enqueue server 1 architecture (ENSA1), define the resources for SAP systems **NW2** and **NW3** as follows:
+ If using enqueue server 1 architecture (ENSA1), define the resources for SAP systems `NW2` and `NW3` as follows:
- ```
- sudo pcs property set maintenance-mode=true
-
- sudo pcs resource create rsc_sap_NW2_ASCS10 SAPInstance \
- InstanceName=NW2_ASCS10_msnw2ascs START_PROFILE="/sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs" \
- AUTOMATIC_RECOVER=false \
- meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
- op monitor interval=20 on-fail=restart timeout=60 \
- op start interval=0 timeout=600 op stop interval=0 timeout=600 \
- --group g-NW2_ASCS
+ ```cmd
+ sudo pcs property set maintenance-mode=true
- sudo pcs resource meta g-NW2_ASCS resource-stickiness=3000
+ sudo pcs resource create rsc_sap_NW2_ASCS10 SAPInstance \
+ InstanceName=NW2_ASCS10_msnw2ascs START_PROFILE="/sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs" \
+ AUTOMATIC_RECOVER=false \
+ meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
+ op monitor interval=20 on-fail=restart timeout=60 \
+ op start interval=0 timeout=600 op stop interval=0 timeout=600 \
+ --group g-NW2_ASCS
- sudo pcs resource create rsc_sap_NW2_ERS12 SAPInstance \
- InstanceName=NW2_ERS12_msnw2ers START_PROFILE="/sapmnt/NW2/profile/NW2_ERS12_msnw2ers" \
- AUTOMATIC_RECOVER=false IS_ERS=true \
- op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
- --group g-NW2_AERS
+ sudo pcs resource meta g-NW2_ASCS resource-stickiness=3000
- sudo pcs constraint colocation add g-NW2_AERS with g-NW2_ASCS -5000
- sudo pcs constraint location rsc_sap_NW2_ASCS10 rule score=2000 runs_ers_NW2 eq 1
- sudo pcs constraint order start g-NW2_ASCS then stop g-NW2_AERS kind=Optional symmetrical=false
+ sudo pcs resource create rsc_sap_NW2_ERS12 SAPInstance \
+ InstanceName=NW2_ERS12_msnw2ers START_PROFILE="/sapmnt/NW2/profile/NW2_ERS12_msnw2ers" \
+ AUTOMATIC_RECOVER=false IS_ERS=true \
+ op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
+ --group g-NW2_AERS
- sudo pcs resource create rsc_sap_NW3_ASCS20 SAPInstance \
- InstanceName=NW3_ASCS20_msnw3ascs START_PROFILE="/sapmnt/NW3/profile/NW3_ASCS20_msnw3ascs" \
- AUTOMATIC_RECOVER=false \
- meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
- op monitor interval=20 on-fail=restart timeout=60 \
- op start interval=0 timeout=600 op stop interval=0 timeout=600 \
- --group g-NW3_ASCS
+ sudo pcs constraint colocation add g-NW2_AERS with g-NW2_ASCS -5000
+ sudo pcs constraint location rsc_sap_NW2_ASCS10 rule score=2000 runs_ers_NW2 eq 1
+ sudo pcs constraint order start g-NW2_ASCS then stop g-NW2_AERS kind=Optional symmetrical=false
- sudo pcs resource meta g-NW3_ASCS resource-stickiness=3000
+ sudo pcs resource create rsc_sap_NW3_ASCS20 SAPInstance \
+ InstanceName=NW3_ASCS20_msnw3ascs START_PROFILE="/sapmnt/NW3/profile/NW3_ASCS20_msnw3ascs" \
+ AUTOMATIC_RECOVER=false \
+ meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
+ op monitor interval=20 on-fail=restart timeout=60 \
+ op start interval=0 timeout=600 op stop interval=0 timeout=600 \
+ --group g-NW3_ASCS
- sudo pcs resource create rsc_sap_NW3_ERS22 SAPInstance \
- InstanceName=NW3_ERS22_msnw3ers START_PROFILE="/sapmnt/NW3/profile/NW2_ERS22_msnw3ers" \
- AUTOMATIC_RECOVER=false IS_ERS=true \
- op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
- --group g-NW3_AERS
+ sudo pcs resource meta g-NW3_ASCS resource-stickiness=3000
- sudo pcs constraint colocation add g-NW3_AERS with g-NW3_ASCS -5000
- sudo pcs constraint location rsc_sap_NW3_ASCS20 rule score=2000 runs_ers_NW3 eq 1
- sudo pcs constraint order start g-NW3_ASCS then stop g-NW3_AERS kind=Optional symmetrical=false
+ sudo pcs resource create rsc_sap_NW3_ERS22 SAPInstance \
+   InstanceName=NW3_ERS22_msnw3ers START_PROFILE="/sapmnt/NW3/profile/NW3_ERS22_msnw3ers" \
+ AUTOMATIC_RECOVER=false IS_ERS=true \
+ op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
+ --group g-NW3_AERS
- sudo pcs property set maintenance-mode=false
- ```
+ sudo pcs constraint colocation add g-NW3_AERS with g-NW3_ASCS -5000
+ sudo pcs constraint location rsc_sap_NW3_ASCS20 rule score=2000 runs_ers_NW3 eq 1
+ sudo pcs constraint order start g-NW3_ASCS then stop g-NW3_AERS kind=Optional symmetrical=false
- SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note [2630416](https://launchpad.support.sap.com/#/notes/2630416) for enqueue server 2 support.
- If using enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), define the resources for SAP systems **NW2** and **NW3** as follows:
+ sudo pcs property set maintenance-mode=false
+ ```
- ```
- sudo pcs property set maintenance-mode=true
-
- sudo pcs resource create rsc_sap_NW2_ASCS10 SAPInstance \
- InstanceName=NW2_ASCS10_msnw2ascs START_PROFILE="/sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs" \
- AUTOMATIC_RECOVER=false \
- meta resource-stickiness=5000 \
- op monitor interval=20 on-fail=restart timeout=60 \
- op start interval=0 timeout=600 op stop interval=0 timeout=600 \
- --group g-NW2_ASCS
+ SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Beginning with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note [2630416](https://launchpad.support.sap.com/#/notes/2630416) for enqueue server 2 support.
+ If using enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), define the resources for SAP systems `NW2` and `NW3` as follows:
- sudo pcs resource meta g-NW2_ASCS resource-stickiness=3000
+ ```bash
+ sudo pcs property set maintenance-mode=true
- sudo pcs resource create rsc_sap_NW2_ERS12 SAPInstance \
- InstanceName=NW2_ERS12_msnw2ers START_PROFILE="/sapmnt/NW2/profile/NW2_ERS12_msnw2ers" \
- AUTOMATIC_RECOVER=false IS_ERS=true \
- op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
- --group g-NW2_AERS
+ sudo pcs resource create rsc_sap_NW2_ASCS10 SAPInstance \
+ InstanceName=NW2_ASCS10_msnw2ascs START_PROFILE="/sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs" \
+ AUTOMATIC_RECOVER=false \
+ meta resource-stickiness=5000 \
+ op monitor interval=20 on-fail=restart timeout=60 \
+ op start interval=0 timeout=600 op stop interval=0 timeout=600 \
+ --group g-NW2_ASCS
- sudo pcs resource meta rsc_sap_NW2_ERS12 resource-stickiness=3000
+ sudo pcs resource meta g-NW2_ASCS resource-stickiness=3000
- sudo pcs constraint colocation add g-NW2_AERS with g-NW2_ASCS -5000
- sudo pcs constraint order start g-NW2_ASCS then start g-NW2_AERS kind=Optional symmetrical=false
- sudo pcs constraint order start g-NW2_ASCS then stop g-NW2_AERS kind=Optional symmetrical=false
+ sudo pcs resource create rsc_sap_NW2_ERS12 SAPInstance \
+ InstanceName=NW2_ERS12_msnw2ers START_PROFILE="/sapmnt/NW2/profile/NW2_ERS12_msnw2ers" \
+ AUTOMATIC_RECOVER=false IS_ERS=true \
+ op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
+ --group g-NW2_AERS
- sudo pcs resource create rsc_sap_NW3_ASCS20 SAPInstance \
- InstanceName=NW3_ASCS20_msnw3ascs START_PROFILE="/sapmnt/NW3/profile/NW3_ASCS20_msnw3ascs" \
- AUTOMATIC_RECOVER=false \
- meta resource-stickiness=5000 \
- op monitor interval=20 on-fail=restart timeout=60 \
- op start interval=0 timeout=600 op stop interval=0 timeout=600 \
- --group g-NW3_ASCS
+ sudo pcs resource meta rsc_sap_NW2_ERS12 resource-stickiness=3000
- sudo pcs resource meta g-NW3_ASCS resource-stickiness=3000
+ sudo pcs constraint colocation add g-NW2_AERS with g-NW2_ASCS -5000
+ sudo pcs constraint order start g-NW2_ASCS then start g-NW2_AERS kind=Optional symmetrical=false
+ sudo pcs constraint order start g-NW2_ASCS then stop g-NW2_AERS kind=Optional symmetrical=false
- sudo pcs resource create rsc_sap_NW3_ERS22 SAPInstance \
- InstanceName=NW3_ERS22_msnw3ers START_PROFILE="/sapmnt/NW3/profile/NW2_ERS22_msnw3ers" \
- AUTOMATIC_RECOVER=false IS_ERS=true \
- op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
- --group g-NW3_AERS
+ sudo pcs resource create rsc_sap_NW3_ASCS20 SAPInstance \
+ InstanceName=NW3_ASCS20_msnw3ascs START_PROFILE="/sapmnt/NW3/profile/NW3_ASCS20_msnw3ascs" \
+ AUTOMATIC_RECOVER=false \
+ meta resource-stickiness=5000 \
+ op monitor interval=20 on-fail=restart timeout=60 \
+ op start interval=0 timeout=600 op stop interval=0 timeout=600 \
+ --group g-NW3_ASCS
- sudo pcs resource meta rsc_sap_NW3_ERS22 resource-stickiness=3000
+ sudo pcs resource meta g-NW3_ASCS resource-stickiness=3000
- sudo pcs constraint colocation add g-NW3_AERS with g-NW3_ASCS -5000
- sudo pcs constraint order start g-NW3_ASCS then start g-NW3_AERS kind=Optional symmetrical=false
- sudo pcs constraint order start g-NW3_ASCS then stop g-NW3_AERS kind=Optional symmetrical=false
+ sudo pcs resource create rsc_sap_NW3_ERS22 SAPInstance \
+ InstanceName=NW3_ERS22_msnw3ers START_PROFILE="/sapmnt/NW3/profile/NW3_ERS22_msnw3ers" \
+ AUTOMATIC_RECOVER=false IS_ERS=true \
+ op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
+ --group g-NW3_AERS
- sudo pcs property set maintenance-mode=false
- ```
+ sudo pcs resource meta rsc_sap_NW3_ERS22 resource-stickiness=3000
- If you are upgrading from an older version and switching to enqueue server 2, see SAP note [2641019](https://launchpad.support.sap.com/#/notes/2641019).
+ sudo pcs constraint colocation add g-NW3_AERS with g-NW3_ASCS -5000
+ sudo pcs constraint order start g-NW3_ASCS then start g-NW3_AERS kind=Optional symmetrical=false
+ sudo pcs constraint order start g-NW3_ASCS then stop g-NW3_AERS kind=Optional symmetrical=false
+
+ sudo pcs property set maintenance-mode=false
+ ```
+
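After the cluster leaves maintenance mode, you may want to confirm that the new groups and constraints are in place. The following is only a minimal verification sketch; the resource and group names match the examples above.

```bash
# List the colocation, order, and location constraints created for NW2 and NW3
sudo pcs constraint

# Show the resource groups and the node each resource is running on
sudo pcs status resources
```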
+ If you're upgrading from an older version and switching to enqueue server 2, see SAP note [2641019](https://launchpad.support.sap.com/#/notes/2641019).
> [!NOTE]
- > The timeouts in the above configuration are just examples and may need to be adapted to the specific SAP setup.
+ > The timeouts in the above configuration are just examples and might need to be adapted to the specific SAP setup.
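If you do need to adapt a timeout later, you can update the operation on the existing resource instead of re-creating it. The following is a hedged sketch only; the 900-second value is purely illustrative, not a recommendation.

```bash
# Example only: raise the start timeout of the NW2 ASCS instance resource to 900 seconds
sudo pcs resource update rsc_sap_NW2_ASCS10 op start interval=0 timeout=900
```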
- Make sure that the cluster status is ok and that all resources are started. It is not important on which node the resources are running.
- The following example shows the cluster resources status, after SAP systems **NW2** and **NW3** were added to the cluster.
+ Make sure that the cluster status is ok and that all resources are started. It's not important on which node the resources are running.
+ The following example shows the cluster resources status, after SAP systems `NW2` and `NW3` were added to the cluster.
- ```
- sudo pcs status
+ ```bash
+ sudo pcs status
Online: [ rhelmsscl1 rhelmsscl2 ]
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
```
-8. **[A]** Add firewall rules for ASCS and ERS on both nodes. The example below shows the firewall rules for both SAP systems **NW2** and **NW3**.
-
- ```
- # NW2 - ASCS
- sudo firewall-cmd --zone=public --add-port=62010/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=62010/tcp
- sudo firewall-cmd --zone=public --add-port=3210/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3210/tcp
- sudo firewall-cmd --zone=public --add-port=3610/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3610/tcp
- sudo firewall-cmd --zone=public --add-port=3910/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3910/tcp
- sudo firewall-cmd --zone=public --add-port=8110/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=8110/tcp
- sudo firewall-cmd --zone=public --add-port=51013/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=51013/tcp
- sudo firewall-cmd --zone=public --add-port=51014/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=51014/tcp
- sudo firewall-cmd --zone=public --add-port=51016/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=51016/tcp
- # NW2 - ERS
- sudo firewall-cmd --zone=public --add-port=62112/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=62112/tcp
- sudo firewall-cmd --zone=public --add-port=3212/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3212/tcp
- sudo firewall-cmd --zone=public --add-port=3312/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3312/tcp
- sudo firewall-cmd --zone=public --add-port=51213/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=51213/tcp
- sudo firewall-cmd --zone=public --add-port=51214/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=51214/tcp
- sudo firewall-cmd --zone=public --add-port=51216/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=51216/tcp
- # NW3 - ASCS
- sudo firewall-cmd --zone=public --add-port=62020/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=62020/tcp
- sudo firewall-cmd --zone=public --add-port=3220/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3220/tcp
- sudo firewall-cmd --zone=public --add-port=3620/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3620/tcp
- sudo firewall-cmd --zone=public --add-port=3920/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3920/tcp
- sudo firewall-cmd --zone=public --add-port=8120/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=8120/tcp
- sudo firewall-cmd --zone=public --add-port=52013/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=52013/tcp
- sudo firewall-cmd --zone=public --add-port=52014/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=52014/tcp
- sudo firewall-cmd --zone=public --add-port=52016/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=52016/tcp
- # NW3 - ERS
- sudo firewall-cmd --zone=public --add-port=62122/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=62122/tcp
- sudo firewall-cmd --zone=public --add-port=3222/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3222/tcp
- sudo firewall-cmd --zone=public --add-port=3322/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3322/tcp
- sudo firewall-cmd --zone=public --add-port=52213/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=52213/tcp
- sudo firewall-cmd --zone=public --add-port=52214/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=52214/tcp
- sudo firewall-cmd --zone=public --add-port=52216/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=52216/tcp
+8. **[A]** Add firewall rules for ASCS and ERS on both nodes. The example below shows the firewall rules for both SAP systems `NW2` and `NW3`.
+
+ ```bash
+ # NW2 - ASCS
+ sudo firewall-cmd --zone=public --add-port=62010/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=62010/tcp
+ sudo firewall-cmd --zone=public --add-port=3210/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=3210/tcp
+ sudo firewall-cmd --zone=public --add-port=3610/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=3610/tcp
+ sudo firewall-cmd --zone=public --add-port=3910/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=3910/tcp
+ sudo firewall-cmd --zone=public --add-port=8110/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=8110/tcp
+ sudo firewall-cmd --zone=public --add-port=51013/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=51013/tcp
+ sudo firewall-cmd --zone=public --add-port=51014/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=51014/tcp
+ sudo firewall-cmd --zone=public --add-port=51016/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=51016/tcp
+ # NW2 - ERS
+ sudo firewall-cmd --zone=public --add-port=62112/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=62112/tcp
+ sudo firewall-cmd --zone=public --add-port=3212/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=3212/tcp
+ sudo firewall-cmd --zone=public --add-port=3312/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=3312/tcp
+ sudo firewall-cmd --zone=public --add-port=51213/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=51213/tcp
+ sudo firewall-cmd --zone=public --add-port=51214/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=51214/tcp
+ sudo firewall-cmd --zone=public --add-port=51216/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=51216/tcp
+ # NW3 - ASCS
+ sudo firewall-cmd --zone=public --add-port=62020/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=62020/tcp
+ sudo firewall-cmd --zone=public --add-port=3220/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=3220/tcp
+ sudo firewall-cmd --zone=public --add-port=3620/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=3620/tcp
+ sudo firewall-cmd --zone=public --add-port=3920/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=3920/tcp
+ sudo firewall-cmd --zone=public --add-port=8120/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=8120/tcp
+ sudo firewall-cmd --zone=public --add-port=52013/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=52013/tcp
+ sudo firewall-cmd --zone=public --add-port=52014/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=52014/tcp
+ sudo firewall-cmd --zone=public --add-port=52016/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=52016/tcp
+ # NW3 - ERS
+ sudo firewall-cmd --zone=public --add-port=62122/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=62122/tcp
+ sudo firewall-cmd --zone=public --add-port=3222/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=3222/tcp
+ sudo firewall-cmd --zone=public --add-port=3322/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=3322/tcp
+ sudo firewall-cmd --zone=public --add-port=52213/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=52213/tcp
+ sudo firewall-cmd --zone=public --add-port=52214/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=52214/tcp
+ sudo firewall-cmd --zone=public --add-port=52216/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=52216/tcp
```

### Proceed with the SAP installation

Complete your SAP installation by:
-* [Preparing your SAP NetWeaver application servers](./high-availability-guide-rhel-netapp-files.md#2d6008b0-685d-426c-b59e-6cd281fd45d7)
-* [Installing a DBMS instance](./high-availability-guide-rhel-netapp-files.md#install-database)
-* [Installing A primary SAP application server](./high-availability-guide-rhel-netapp-files.md#sap-netweaver-application-server-installation)
-* Installing one or more additional SAP application instances
+* [Preparing your SAP NetWeaver application servers](./high-availability-guide-rhel-netapp-files.md#2d6008b0-685d-426c-b59e-6cd281fd45d7).
+* [Installing a DBMS instance](./high-availability-guide-rhel-netapp-files.md#install-database).
+* [Installing A primary SAP application server](./high-availability-guide-rhel-netapp-files.md#sap-netweaver-application-server-installation).
+* Installing one or more other SAP application instances.
## Test the multi-SID cluster setup
-The following tests are a subset of the test cases in the best practices guides of Red Hat. They are included for your convenience. For the full list of cluster tests, reference the following documentation:
+The following tests are a subset of the test cases in the best practices guides of Red Hat. They're included for your convenience. For the full list of cluster tests, reference the following documentation:
-* If using Azure NetApp Files NFS volumes, follow [Azure VMs high availability for SAP NetWeaver on RHEL with Azure NetApp Files for SAP applications](./high-availability-guide-rhel-netapp-files.md)
-* If using highly available `GlusterFS`, follow [Azure VMs high availability for SAP NetWeaver on RHEL for SAP applications](./high-availability-guide-rhel.md).
+* If you use Azure NetApp Files NFS volumes, follow [Azure VMs high availability for SAP NetWeaver on RHEL with Azure NetApp Files for SAP applications](./high-availability-guide-rhel-netapp-files.md)
+* If you use highly available `GlusterFS`, follow [Azure VMs high availability for SAP NetWeaver on RHEL for SAP applications](./high-availability-guide-rhel.md).
-Always read the Red Hat best practices guides and perform all additional tests that might have been added.
-The tests that are presented are in a two node, multi-SID cluster with three SAP systems installed.
+Always read the Red Hat best practices guides and perform all other tests that might have been added. The tests that are presented are in a two-node, multi-SID cluster with three SAP systems installed.
1. Manually migrate the ASCS instance. The example shows migrating the ASCS instance for SAP system NW3. Resource state before starting the test:
- ```
- Online: [ rhelmsscl1 rhelmsscl2 ]
-
- Full list of resources:
-
- rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl1
- Resource Group: g-NW1_ASCS
- fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
- vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
- nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
- rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
- Resource Group: g-NW1_AERS
- fs_NW1_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
- vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
- nc_NW1_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
- rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
- Resource Group: g-NW2_ASCS
- fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl2
- vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
- nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl2
- rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
- Resource Group: g-NW2_AERS
- fs_NW2_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
- vip_NW2_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
- nc_NW2_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
- rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
- Resource Group: g-NW3_ASCS
- fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl2
- vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
- nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl2
- rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
- Resource Group: g-NW3_AERS
- fs_NW3_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
- vip_NW3_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
- nc_NW3_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
- rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
+ ```text
+ Online: [ rhelmsscl1 rhelmsscl2 ]
+
+ Full list of resources:
+
+ rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl1
+ Resource Group: g-NW1_ASCS
+ fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
+ vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
+ nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
+ rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
+ Resource Group: g-NW1_AERS
+ fs_NW1_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
+ vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
+ nc_NW1_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
+ rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
+ Resource Group: g-NW2_ASCS
+ fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl2
+ vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
+ nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl2
+ rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
+ Resource Group: g-NW2_AERS
+ fs_NW2_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
+ vip_NW2_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
+ nc_NW2_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
+ rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
+ Resource Group: g-NW3_ASCS
+ fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl2
+ vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
+ nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl2
+ rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
+ Resource Group: g-NW3_AERS
+ fs_NW3_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
+ vip_NW3_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
+ nc_NW3_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
+ rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
```

Run the following commands as root to migrate the NW3 ASCS instance.
- ```
- pcs resource move rsc_sap_NW3_ASCS200
- # Clear temporary migration constraints
- pcs resource clear rsc_sap_NW3_ASCS20
+ ```bash
+ pcs resource move rsc_sap_NW3_ASCS20
+ # Clear temporary migration constraints
+ pcs resource clear rsc_sap_NW3_ASCS20
- # Remove failed actions for the ERS that occurred as part of the migration
- pcs resource cleanup rsc_sap_NW3_ERS22
+ # Remove failed actions for the ERS that occurred as part of the migration
+ pcs resource cleanup rsc_sap_NW3_ERS22
```

Resource state after the test:
+ ```text
+ Online: [ rhelmsscl1 rhelmsscl2 ]
+
+ Full list of resources:
+
+ rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl1
+ Resource Group: g-NW1_ASCS
+ fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
+ vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
+ nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
+ rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
+ Resource Group: g-NW1_AERS
+ fs_NW1_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
+ vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
+ nc_NW1_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
+ rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
+ Resource Group: g-NW2_ASCS
+ fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl2
+ vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
+ nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl2
+ rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
+ Resource Group: g-NW2_AERS
+ fs_NW2_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
+ vip_NW2_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
+ nc_NW2_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
+ rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
+ Resource Group: g-NW3_ASCS
+ fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
+ vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
+ nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
+ rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
+ Resource Group: g-NW3_AERS
+ fs_NW3_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
+ vip_NW3_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
+ nc_NW3_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
+ rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
```
- Online: [ rhelmsscl1 rhelmsscl2 ]
- Full list of resources:
-
- rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl1
- Resource Group: g-NW1_ASCS
- fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
- vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
- nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
- rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
- Resource Group: g-NW1_AERS
- fs_NW1_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
- vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
- nc_NW1_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
- rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
- Resource Group: g-NW2_ASCS
- fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl2
- vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
- nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl2
- rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
- Resource Group: g-NW2_AERS
- fs_NW2_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
- vip_NW2_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
- nc_NW2_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
- rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
- Resource Group: g-NW3_ASCS
- fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
- vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
- nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
- rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
- Resource Group: g-NW3_AERS
- fs_NW3_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
- vip_NW3_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
- nc_NW3_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
- rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
- ```
-
-1. Simulate node crash
+1. Simulate node crash.
Resource state before starting the test:
+ ```text
+ Online: [ rhelmsscl1 rhelmsscl2 ]
+
+ Full list of resources:
+
+ rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl1
+ Resource Group: g-NW1_ASCS
+ fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
+ vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
+ nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
+ rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
+ Resource Group: g-NW1_AERS
+ fs_NW1_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
+ vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
+ nc_NW1_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
+ rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
+ Resource Group: g-NW2_ASCS
+ fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
+ vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
+ nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
+ rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
+ Resource Group: g-NW2_AERS
+ fs_NW2_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
+ vip_NW2_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
+ nc_NW2_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
+ rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
+ Resource Group: g-NW3_ASCS
+ fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
+ vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
+ nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
+ rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
+ Resource Group: g-NW3_AERS
+ fs_NW3_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
+ vip_NW3_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
+ nc_NW3_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
+ rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
```
- Online: [ rhelmsscl1 rhelmsscl2 ]
- Full list of resources:
+ Run the following command as root on a node where at least one ASCS instance is running. This example runs the command on `rhelmsscl1`, where the ASCS instances for `NW1`, `NW2`, and `NW3` are running.
- rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl1
- Resource Group: g-NW1_ASCS
- fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
- vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
- nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
- rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
- Resource Group: g-NW1_AERS
- fs_NW1_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
- vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
- nc_NW1_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
- rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
- Resource Group: g-NW2_ASCS
- fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
- vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
- nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
- rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
- Resource Group: g-NW2_AERS
- fs_NW2_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
- vip_NW2_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
- nc_NW2_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
- rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
- Resource Group: g-NW3_ASCS
- fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
- vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
- nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
- rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
- Resource Group: g-NW3_AERS
- fs_NW3_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
- vip_NW3_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
- nc_NW3_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
- rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
- ```
-
- Run the following command as root on a node, where at least one ASCS instance is running. In this example, we executed the command on `rhelmsscl1`, where the ASCS instances for NW1, NW2 and NW3 are running.
-
- ```
+ ```bash
echo c > /proc/sysrq-trigger
```
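While the crashed node is being fenced and rebooted, you can follow the failover from the surviving node. A simple, optional way to do that (assuming you run it as root, as in the test above) is to refresh the cluster status periodically:

```bash
# Refresh the cluster status every 10 seconds to watch resources fail over
watch -n 10 pcs status
```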
- The status after the test, and after the node, that was crashed has started again, should look like this.
-
- ```
- Full list of resources:
-
- rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl2
- Resource Group: g-NW1_ASCS
- fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl2
- vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
- nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl2
- rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
- Resource Group: g-NW1_AERS
- fs_NW1_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
- vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
- nc_NW1_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
- rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
- Resource Group: g-NW2_ASCS
- fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl2
- vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
- nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl2
- rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
- Resource Group: g-NW2_AERS
- fs_NW2_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
- vip_NW2_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
- nc_NW2_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
- rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
- Resource Group: g-NW3_ASCS
- fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl2
- vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
- nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl2
- rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
- Resource Group: g-NW3_AERS
- fs_NW3_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
- vip_NW3_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
- nc_NW3_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
- rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
+ The status after the test, and after the crashed node has started again, should look like these results:
+
+ ```text
+ Full list of resources:
+
+ rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl2
+ Resource Group: g-NW1_ASCS
+ fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl2
+ vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
+ nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl2
+ rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
+ Resource Group: g-NW1_AERS
+ fs_NW1_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
+ vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
+ nc_NW1_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
+ rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
+ Resource Group: g-NW2_ASCS
+ fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl2
+ vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
+ nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl2
+ rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
+ Resource Group: g-NW2_AERS
+ fs_NW2_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
+ vip_NW2_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
+ nc_NW2_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
+ rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
+ Resource Group: g-NW3_ASCS
+ fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl2
+ vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
+ nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl2
+ rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
+ Resource Group: g-NW3_AERS
+ fs_NW3_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
+ vip_NW3_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
+ nc_NW3_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
+ rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
```

If there are messages for failed resources, clean the status of the failed resources. For example:
- ```
+ ```bash
pcs resource cleanup rsc_sap_NW1_ERS02
```
The tests that are presented are in a two node, multi-SID cluster with three SAP
* [Azure Virtual Machines planning and implementation for SAP][planning-guide]
* [Azure Virtual Machines deployment for SAP][deployment-guide]
* [Azure Virtual Machines DBMS deployment for SAP][dbms-guide]
-* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines (VMs)][sap-hana-ha]
+
+To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines (VMs)][sap-hana-ha].
virtual-machines Sap High Availability Infrastructure Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-high-availability-infrastructure-wsfc-shared-disk.md
Set-ClusterQuorum -CloudWitness -AccountName $AzureStorageAccountName -Acces
After you successfully install the Windows failover cluster, you need to adjust some thresholds, to be suitable for clusters deployed in Azure. The parameters to be changed are documented in [Tuning failover cluster network thresholds](https://techcommunity.microsoft.com/t5/Failover-Clustering/Tuning-Failover-Cluster-Network-Thresholds/ba-p/371834). Assuming that your two VMs that make up the Windows cluster configuration for ASCS/SCS are in the same subnet, change the following parameters to these values:
- SameSubNetDelay = 2000
- SameSubNetThreshold = 15
-- RoutingHistoryLength = 30
+- RouteHistoryLength = 30
These settings were tested with customers and offer a good compromise. They are resilient enough, but they also provide failover that is fast enough for real error conditions in SAP workloads or VM failure.
After you install SIOS DataKeeper on both nodes, start the configuration. The go
## Next steps
-* [Install SAP NetWeaver HA by using a Windows failover cluster and shared disk for an SAP ASCS/SCS instance][sap-high-availability-installation-wsfc-shared-disk]
+* [Install SAP NetWeaver HA by using a Windows failover cluster and shared disk for an SAP ASCS/SCS instance][sap-high-availability-installation-wsfc-shared-disk]
virtual-machines Sap Planning Supported Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-planning-supported-configurations.md
# SAP workload on Azure virtual machine supported scenarios
-Designing SAP NetWeaver, Business one, `Hybris` or S/4HANA systems architecture in Azure opens many different opportunities for various architectures and tools to use to get to a scalable, efficient, and highly available deployment. Though dependent on the operating system or DBMS used, there are restrictions. Also, not all scenarios that are supported on-premises are supported in the same way in Azure. This document will lead through the supported non-high-availability configurations and high-availability configurations and architectures using Azure VMs exclusively. For scenarios supported with [HANA Large Instances](./hana-overview-architecture.md), check the article [Supported scenarios for HANA Large Instances](./hana-supported-scenario.md).
+Designing the architecture of SAP NetWeaver, Business One, `Hybris`, or S/4HANA systems in Azure opens up many opportunities for different architectures and tools that help you get to a scalable, efficient, and highly available deployment. However, there are restrictions, depending on the operating system or DBMS used. Also, not all scenarios that are supported on-premises are supported in the same way in Azure. This document leads you through the supported non-high-availability and high-availability configurations and architectures that use Azure VMs exclusively. For scenarios supported with [HANA Large Instances](./hana-overview-architecture.md), check the article [Supported scenarios for HANA Large Instances](./hana-supported-scenario.md).
## 2-Tier configuration
A graphical representation of such a configuration can look like:
![Simple 2-Tier configuration](./media/sap-planning-supported-configurations/two-tier-simple-configuration.png)

Such configurations are supported with Windows, Red Hat, SUSE, and Oracle Linux for the DBMS systems of SQL Server, Oracle, Db2, maxDB, and SAP ASE for production and non-production cases. For SAP HANA as DBMS, this type of configuration is supported for non-production cases only. This includes the deployment case of [Azure HANA Large Instances](./hana-overview-architecture.md) as well.
-For all OS/DBMS combinations supported on Azure, this type of configuration is supported. However, it is mandatory that you set the configuration of the DBMS and the SAP components in a way that DBMS and SAP components don't compete for memory and CPU resources and thereby exceed the physical available resources. This needs to be done by restricting the memory the DBMS is allowed to allocate. You also need to limit the SAP Extended Memory on application instances. You also need to monitor CPU consumption of the VM overall to make sure that the components are not maximizing the CPU resources.
+For all OS/DBMS combinations supported on Azure, this type of configuration is supported. However, it is mandatory that you set the configuration of the DBMS and the SAP components in a way that DBMS and SAP components don't compete for memory and CPU resources and thereby exceed the physically available resources. This needs to be done by restricting the memory the DBMS is allowed to allocate. You also need to limit the SAP Extended Memory on application instances and to monitor the overall CPU consumption of the VM to make sure that the components don't exhaust the CPU resources.
> [!NOTE]
> For production SAP systems, we recommend additional high availability and eventual disaster recovery configurations as described later in this document.

## 3-Tier configuration
-In such configurations, you separate the SAP application layer and the DBMS layer into different VMs. You usually do that for larger systems and out of reasons of being more flexible on the resources of the SAP application layer. In the most simple setup, there is no high availability beyond the [Azure Service Level agreements](https://azure.microsoft.com/support/legal/sla/) of the different Azure components involved.
+In such configurations, you separate the SAP application layer and the DBMS layer into different VMs. You usually do that for larger systems, and to be more flexible with the resources of the SAP application layer. In the simplest setup, there is no high availability beyond the [Azure Service Level agreements](https://azure.microsoft.com/support/legal/sla/) of the different Azure components involved.
The graphical representation looks like:
This type of DBMS deployment is supported for:
- /b2751fd43bec41a9a14e01913f1edf18.html)

When you run multiple database instances on one host, you need to make sure that the different instances are not competing for resources and thereby exceed the physical resource limits of the VM. This is especially true for memory, where you need to cap the memory any one of the instances sharing the VM can allocate. It also might be true for the CPU resources the different database instances can consume. All the DBMS mentioned have configurations that allow limiting memory allocation and CPU resources on an instance level.
-In order to have support for such a configuration for Azure VMs, it is expected that the disks or volumes that are used for the data and log/redo log files of the databases managed by the different instances are separate. Or in other words data or log/redo log files of databases managed by different DBMS instance are not supposed to share the same disks or volumes.
+In order to have support for such a configuration for Azure VMs, it is expected that the disks or volumes that are used for the data and log/redo log files of the databases managed by the different instances are separate. Or, in other words, the data or log/redo log files of databases managed by different DBMS instances are not supposed to share the same disks or volumes.
-The disk configuration for HANA Large Instances is delivered configured and is detailed in [Supported scenarios for HANA Large Instances](./hana-supported-scenario.md#single-node-mcos).
+The disk configuration for HANA Large Instances is delivered configured and is detailed in [Supported scenarios for HANA Large Instances](./hana-supported-scenario.md#single-node-mcos).
> [!NOTE]
> For production SAP systems, we recommend additional high availability and eventual disaster recovery configurations as described later in this document. VMs with multiple DBMS instances are not supported with the high availability configurations described later in this document.
At 3-Tier configuration where multiple SAP dialog instances are run within Azure
For simplification, we did not distinguish between SAP Central Services and SAP dialog instances in the SAP application layer. In this simple 3-Tier configuration, there would be no high availability protection for SAP Central Services. For production systems, it is not recommended to leave SAP Central Services unprotected. For specifics on so-called multi-SID configurations around SAP Central Instances and high availability of such multi-SID configurations, see later sections of this document.

## High Availability protection for the SAP DBMS layer
-As you look to deploy SAP production systems, you need to consider hot standby type of high availability configurations. Especially with SAP HANA, where data needs to be loaded into memory before being able to get the full performance and scalability back, Azure service healing is not an ideal measure for high availability.
+As you look to deploy SAP production systems, you need to consider hot standby type of high availability configurations. Especially with SAP HANA, where data needs to be loaded into memory before being able to get the full performance and scalability back, Azure service healing is not an ideal measure for high availability.
-In general Microsoft supports only high availability configurations and software packages that are described under the SAP workload section in docs.microsoft.com. You can read the same statement in SAP note [#1928533](https://launchpad.support.sap.com/#/notes/1928533). Microsoft will not provide support for other high availability third-party software frameworks that are not documented by Microsoft with SAP workload. In such cases, the third-party supplier of the high availability framework is the supporting party for the high availability configuration who needs to be engaged by you as a customer into the support process. Exceptions are going to be mentioned in this article.
+In general Microsoft supports only high availability configurations and software packages that are described under the SAP workload section in docs.microsoft.com. You can read the same statement in SAP note [#1928533](https://launchpad.support.sap.com/#/notes/1928533). Microsoft will not provide support for other high availability third-party software frameworks that are not documented by Microsoft with SAP workload. In such cases, the third-party supplier of the high availability framework is the supporting party for the high availability configuration who needs to be engaged by you as a customer into the support process. Exceptions are going to be mentioned in this article.
In general Microsoft supports a limited set of high availability configurations on Azure VMs or HANA Large Instances units. For the supported scenarios of HANA Large Instances, read the document [Supported scenarios for HANA Large Instances](./hana-supported-scenario.md).
For Azure VMs, the following high availability configurations are supported on D
- SAP HANA scale-out n+m configurations using [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) on SUSE and Red Hat. Details are listed in these articles:
  - [Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on SUSE Linux Enterprise Server](./sap-hana-scale-out-standby-netapp-files-suse.md)
  - [Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on Red Hat Enterprise Linux](./sap-hana-scale-out-standby-netapp-files-rhel.md)
-- SQL Server Failover cluster based on Windows Scale-Out File Services. Though recommendation for production systems is to use SQL Server Always On instead of clustering. SQL Server Always On provides better availability using separate storage. Details are described in this article:
+- SQL Server Failover cluster based on Windows Scale-Out File Services. Though recommendation for production systems is to use SQL Server Always On instead of clustering. SQL Server Always On provides better availability using separate storage. Details are described in this article:
  - [Configure a SQL Server failover cluster instance on Azure virtual machines](/azure/azure-sql/virtual-machines/windows/failover-cluster-instance-storage-spaces-direct-manually-configure)
- SQL Server Always On is supported with the Windows operating system for SQL Server on Azure. This is the default recommendation for production SQL Server instances on Azure. Details are described in these articles:
  - [Introducing SQL Server Always On availability groups on Azure virtual machines](/azure/azure-sql/virtual-machines/windows/availability-group-overview).
For Azure VMs, the following high availability configurations are supported on D
- [Supported scenarios for HANA Large Instances - Host auto failover (1+1)](./hana-supported-scenario.md#host-auto-failover-11)

> [!IMPORTANT]
-> For none of the scenarios described above, we support configurations of multiple DBMS instances in one VM. Means in each of the cases, only one database instance can be deployed per VM and protected with the described high availability methods. Protecting multiple DBMS instances under the same Windows or Pacemaker failover cluster is **NOT** supported at this point in time. Also Oracle Data Guard is supported for single instance per VM deployment cases only.
+> For none of the scenarios described above do we support configurations of multiple DBMS instances in one VM. This means that in each case, only one database instance can be deployed per VM and protected with the described high availability methods. Protecting multiple DBMS instances under the same Windows or Pacemaker failover cluster is **NOT** supported at this point in time. Also, Oracle Data Guard is supported for single-instance-per-VM deployment cases only.
Various database systems allow hosting multiple databases under one DBMS instance. As in the case of SAP HANA, multiple databases can be hosted in multiple database containers (MDC). For cases where these multi-database configurations are working within one failover cluster resource, these configurations are supported. Configurations that are not supported are cases where multiple cluster resources would be required, for example configurations where you would define multiple SQL Server Availability Groups under one SQL Server instance.

![DBMS HA configuration](./media/sap-planning-supported-configurations/database-high-availability-configuration.png)
-Dependent on the DBMS an/or operating systems, components like Azure load balancer might or might not be required as part of the solution architecture.
+Dependent on the DBMS and/or operating systems, components like Azure load balancer might or might not be required as part of the solution architecture.
Specifically for maxDB, the storage configuration needs to be different. In maxDB, the data and log files need to be located on shared storage for high availability configurations. Only in the case of maxDB is shared storage supported for high availability. For all other DBMS, separate storage stacks per node are the only supported disk configurations.
Since only a subset of Azure storage types is providing highly available NFS or
- Windows Failover Cluster Server with Windows Scale-out File Server can be deployed on all native Azure storage types, except Azure NetApp Files. However, recommendation is to use Premium Storage due to superior service level agreements in throughput and IOPS.
- Windows Failover Cluster Server with SMB on Azure NetApp Files is supported on Azure NetApp Files. SMB shares on Azure File services are **NOT** supported at this point in time.
- Windows Failover Cluster Server with windows shared disk based on SIOS `Datakeeper` can be deployed on all native Azure storage types, except Azure NetApp Files. However, recommendation is to use Premium Storage due to superior service level agreements in throughput and IOPS.
-- SUSE or Red Hat Pacemaker using NFS shares on Azure NetApp Files is supported on Azure NetApp Files.
+- SUSE or Red Hat Pacemaker using NFS shares on Azure NetApp Files is supported on Azure NetApp Files.
- SUSE Pacemaker using a `drbd` configuration between two VMs is supported using native Azure storage types, except Azure NetApp Files. However, recommendation is to use Premium Storage due to superior service level agreements in throughput and IOPS.
- Red Hat Pacemaker using `glusterfs` for providing NFS share is supported using native Azure storage types, except Azure NetApp Files. However, recommendation is to use Premium Storage due to superior service level agreements in throughput and IOPS.
To reduce the number of VMs that are needed in large SAP landscapes, SAP allows
On Azure, a multi-SID cluster configuration is supported for the Windows operating system with ENSA1 and ENSA2. Recommendation is not to combine the older Enqueue Replication Service architecture (ENSA1) with the new architecture (ENSA2) on one multi-SID cluster. Details about such an architecture are documented in the articles
-- [SAP ASCS/SCS instance multi-SID high availability with Windows Server Failover Clustering and shared disk on Azure](./sap-ascs-ha-multi-sid-wsfc-shared-disk.md)
-- [SAP ASCS/SCS instance multi-SID high availability with Windows Server Failover Clustering and file share on Azure](./sap-ascs-ha-multi-sid-wsfc-file-share.md)
+- [SAP ASCS/SCS instance multi-SID high availability with Windows Server Failover Clustering and shared disk on Azure](./sap-ascs-ha-multi-sid-wsfc-shared-disk.md)
+- [SAP ASCS/SCS instance multi-SID high availability with Windows Server Failover Clustering and file share on Azure](./sap-ascs-ha-multi-sid-wsfc-file-share.md)
For SUSE, a multi-SID cluster based on Pacemaker is supported as well. So far the configuration is supported for:
A multi-SID cluster with Enqueue Replication server schematically looks like
## SAP HANA scale-out scenarios
-SAP HANA scale-out scenarios are supported for a subset of the HANA certified Azure VMs as listed in the [SAP HANA hardware directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120). All the VMs marked with 'Yes' in the column 'Clustering' can be used for either OLAP or S/4HANA scale-out. Configurations without standby are supported with the Azure Storage types of:
+SAP HANA scale-out scenarios are supported for a subset of the HANA certified Azure VMs as listed in the [SAP HANA hardware directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120). All the VMs marked with 'Yes' in the column 'Clustering' can be used for either OLAP or S/4HANA scale-out. Configurations without standby are supported with the Azure Storage types of:
- Azure Premium Storage, including Azure Write accelerator for the /hana/log volume
- [Ultra disk](../../disks-enable-ultra-ssd.md)
-- [Azure NetApp Files](https://azure.microsoft.com/services/netapp/)
+- [Azure NetApp Files](https://azure.microsoft.com/services/netapp/)
SAP HANA scale-out configurations for OLAP or S/4HANA with standby node(s) are exclusively supported with NFS shares hosted on Azure NetApp Files. For further information on exact storage configurations with or without standby node, check the articles:
-- [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)
+- [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)
- [Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on SUSE Linux Enterprise Server](./sap-hana-scale-out-standby-netapp-files-suse.md)
- [Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on Red Hat Enterprise Linux](./sap-hana-scale-out-standby-netapp-files-rhel.md)
- [SAP support note #2080991](https://launchpad.support.sap.com/#/notes/2080991)
For details of HANA Large Instances supported HANA scale-out configurations, the
## Disaster Recovery Scenario
-There is a variety of disaster recovery scenarios that are supported. We define Disaster architectures as architectures, which should compensate for a complete Azure region going off the grid. This means we need the disaster recovery target to be a different Azure region as target to run your SAP landscape. We separate methods and configurations in DBMS layer and non-DBMS layer.
+A variety of disaster recovery scenarios are supported. We define disaster recovery architectures as architectures that compensate for a complete Azure region going offline. This means the disaster recovery target needs to be a different Azure region in which to run your SAP landscape. We separate methods and configurations into the DBMS layer and the non-DBMS layer.
### DBMS layer

For the DBMS layer, configurations using the DBMS native replication mechanisms, like Always On, Oracle Data Guard, Db2 HADR, SAP ASE Always-On, or HANA System Replication are supported. It is mandatory that the replication stream in such cases is asynchronous, instead of synchronous as in typical high availability scenarios that are deployed within a single Azure region. A typical example of such a supported DBMS disaster recovery configuration is described in the article [SAP HANA availability across Azure regions](./sap-hana-availability-across-regions.md#combine-availability-within-one-region-and-across-regions). The second graphic in that section describes a scenario with HANA as an example. The main databases supported for SAP applications are all able to be deployed in such a scenario.
It is supported to use a smaller VM as target instance in the disaster recovery
- Smaller VM types do not allow as many attached disks as larger VMs
- Smaller VMs have less network and storage throughput
- Re-sizing across VM families can be a problem when the different VMs are collected in one Azure Availability Set or when the re-sizing should happen between the M-Series family and Mv2 family of VMs
-- CPU and memory consumption for the database instance being able to receive the stream of changes with minimal delay and enough CPU and memory resources to apply these changes with minimal delay to the data
+- CPU and memory consumption for the database instance being able to receive the stream of changes with minimal delay and enough CPU and memory resources to apply these changes with minimal delay to the data
More details on limitations of different VM sizes can be found on the [VM sizes](../../sizes.md) page
For HANA Large Instance DR scenarios check these documents:
- [Scale-out with DR using HSR](./hana-supported-scenario.md#scale-out-with-dr-using-hsr)

> [!NOTE]
-> Usage of [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) has not been tested for DBMS deployments under SAP workload. As a result it is not supported for the DBMS layer of SAP systems at this point in time. Other methods of replications by Microsoft and SAP that are not listed are not supported. Using third party software for replicating the DBMS layer of SAP systems between different Azure Regions, needs to be supported by the vendor of the software and will not be supported through Microsoft and SAP support channels.
-
+> Usage of [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) has not been tested for DBMS deployments under SAP workload. As a result it is not supported for the DBMS layer of SAP systems at this point in time. Other methods of replications by Microsoft and SAP that are not listed are not supported. Using third party software for replicating the DBMS layer of SAP systems between different Azure Regions, needs to be supported by the vendor of the software and will not be supported through Microsoft and SAP support channels.
+ ## Non-DBMS layer For the SAP application layer and eventual shares or storage locations that are needed, the two major scenarios are leveraged by customers: -- The disaster recovery targets in the second Azure region are not being used for any production or non-production purposes. In this scenario, the VMs that function as disaster recovery target are ideally not deployed and the image and changes to the images of the production SAP application layer is replicated to the disaster recovery region. A functionality that can perform such a task is [Azure Site Recovery](../../../site-recovery/azure-to-azure-move-overview.md). Azure Site Recovery support an Azure-to-Azure replication scenario like this.
+- The disaster recovery targets in the second Azure region are not being used for any production or non-production purposes. In this scenario, the VMs that function as disaster recovery targets are ideally not deployed, and the image and changes to the images of the production SAP application layer are replicated to the disaster recovery region. A functionality that can perform such a task is [Azure Site Recovery](../../../site-recovery/azure-to-azure-move-overview.md). Azure Site Recovery supports an Azure-to-Azure replication scenario like this.
- The disaster recovery targets are VMs that are actually in use by non-production systems. The whole SAP landscape is spread across two different Azure regions with production systems usually in one region and non-production systems in another region. In many customer deployments, the customer has a non-production system that is equivalent to a production system. The customer has production application instances pre-installed on the application layer non-production systems. In case of a failover, the non-production instances would be shut down, the virtual names of the production VMs moved to the non-production VMs (after assigning new IP addresses in DNS), and the pre-installed production instances are started.

### SAP Central Services clusters
There is a list of scenarios, which are not supported for SAP workload on Azure
Other scenarios that are not supported include:
- Deployment scenarios that introduce a larger network latency between the SAP application tier and the SAP DBMS tier in SAP's common architecture as shown in NetWeaver, S/4HANA, and, for example, `Hybris`. This includes:
- - Deploying one of the tiers on-premise whereas the other tier is deployed in Azure
+ - Deploying one of the tiers on-premises whereas the other tier is deployed in Azure
  - Deploying the SAP application tier of a system in a different Azure region than the DBMS tier
  - Deploying one tier in datacenters that are co-located to Azure and the other tier in Azure, except where such an architecture pattern is provided by an Azure native service
  - Deploying network virtual appliances between the SAP application tier and the DBMS layer
Other scenarios that are not supported include:
Scenarios that we did not test and therefore have no experience with include:
- Azure Site Recovery replicating DBMS layer VMs. As a result, we recommend leveraging the database native asynchronous replication functionality for a potential disaster recovery configuration.
-
+ ## Next Steps

Read next steps in the [Azure Virtual Machines planning and implementation for SAP NetWeaver](./planning-guide.md)
Read next steps in the [Azure Virtual Machines planning and implementation for S
-
+
virtual-machines Sap Rise Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-rise-integration.md
Network Security Groups are in effect on both customer and SAP vnet, identically
With an existing customer Azure deployment, the on-premises network is already connected through ExpressRoute (ER) or VPN. The same on-premises network path is typically used for SAP RISE/ECS managed workloads. Preferred architecture is to use existing ER/VPN Gateways in the customer's hub vnet for this purpose, with the connected SAP RISE vnet seen as a spoke network connected to the customer's vnet hub.

:::image type="complex" source="./media/sap-rise-integration/sap-rise-on-premises.png" alt-text="Example diagram of SAP RISE/ECS as spoke network peered to customer's vnet hub and on-premise.":::
- This diagram shows a typical SAP customer's hub and spoke virtual networks. It's connected to on-premise with a connection. Cross tenant virtual network peering connects SAP RISE vnet to customer's hub vnet. The vnet peering has remote gateway transit enabled, enabling SAP RISE vnet to be accessed from on-premise.
+ This diagram shows a typical SAP customer's hub and spoke virtual networks. It's connected to the on-premises network with a connection. Cross tenant virtual network peering connects the SAP RISE vnet to the customer's hub vnet. The vnet peering has remote gateway transit enabled, enabling the SAP RISE vnet to be accessed from on-premises.
:::image-end:::

With this architecture, central policies and security rules governing network connectivity to customer workloads also apply to SAP RISE/ECS managed workloads. The same on-premises network path is used for both the customer's vnets and the SAP RISE/ECS vnet.
Again, contact your SAP representative for details and steps needed to establish
Integration of customer owned networks with Cloud-based infrastructure and providing a seamless name resolution concept is a vital part of a successful project implementation.
-This diagram describes one of the common integration scenarios of SAP owned subscriptions, VNets and DNS infrastructure with customer's local network and DNS services. In this setup on-premise DNS servers are holding all DNS entries. The DNS infrastructure is capable to resolve DNS requests coming from all sources (on-premise clients, customer's Azure services and SAP managed environments).
+This diagram describes one of the common integration scenarios of SAP owned subscriptions, VNets and DNS infrastructure with the customer's local network and DNS services. In this setup, on-premises DNS servers hold all DNS entries. The DNS infrastructure is capable of resolving DNS requests coming from all sources (on-premises clients, the customer's Azure services, and SAP managed environments).
[![Diagram shows customer DNS servers are located both within customer's hub vnet as well as SAP RISE vnet, with DNS zone transfer between them.](./media/sap-rise-integration/sap-rise-dns.png)](./media/sap-rise-integration/sap-rise-dns.png#lightbox)
With the information about available interfaces to the SAP RISE/ECS landscape, s
## Integration with self-hosted integration runtime
-Integrating your SAP system with Azure cloud native services such as Azure Data Factory or Azure Synapse would use these communication channels to the SAP RISE/ECS managed environment.
+Integrating your SAP system with Azure cloud native services such as Azure Data Factory or Azure Synapse would use these communication channels to the SAP RISE/ECS managed environment.
The following high-level architecture shows a possible integration scenario with Azure data services such as [Data Factory](../../../data-factory/index.yml) or [Synapse Analytics](../../../synapse-analytics/index.yml). For these Azure services, either a self-hosted integration runtime (self-hosted IR or IR) or Azure integration runtime (Azure IR) can be used. The use of either integration runtime depends on the [chosen data connector](../../../data-factory/copy-activity-overview.md#supported-data-stores-and-formats); most SAP connectors are only available for the self-hosted IR. The [SAP ECC connector](../../../data-factory/connector-sap-ecc.md?tabs=data-factory) can be used through both the Azure IR and the self-hosted IR. The choice of IR governs the network path taken. The SAP .NET connector is used for the [SAP table connector](../../../data-factory/connector-sap-ecc.md?tabs=data-factory), [SAP BW](../../../data-factory/connector-sap-business-warehouse.md?tabs=data-factory) and [SAP OpenHub](../../../data-factory/connector-sap-business-warehouse-open-hub.md) connectors alike. All these connectors use SAP function modules (FM) on the SAP system, executed through RFC connections. Lastly, if direct database access has been agreed with SAP, along with users and connection path opened, the ODBC/JDBC connector for [SAP HANA](../../../data-factory/connector-sap-hana.md?tabs=data-factory) can be used from the self-hosted IR as well.

[![SAP RISE/ECS accessed by Azure ADF or Synapse.](./media/sap-rise-integration/sap-rise-adf-synapse.png)](./media/sap-rise-integration/sap-rise-adf-synapse.png#lightbox)
-For data connectors using the Azure IR, this IR accesses your SAP environment through a public IP address. SAP RISE/ECS provides this endpoint through an application gateway for use and the communication and data movement is through https.
+For data connectors using the Azure IR, this IR accesses your SAP environment through a public IP address. SAP RISE/ECS provides this endpoint through an application gateway for use and the communication and data movement is through https.
-Data connectors within the self-hosted integration runtime communicate with the SAP system within SAP RISE/ECS subscription and vnet through the established vnet peering and private network address only. The established network security group rules limit which application can communicate with the SAP system.
+Data connectors within the self-hosted integration runtime communicate with the SAP system within SAP RISE/ECS subscription and vnet through the established vnet peering and private network address only. The established network security group rules limit which application can communicate with the SAP system.
-The customer is responsible for deployment and operation of the self-hosted integration runtime within their subscription and vnet. The communication between Azure PaaS services such as Data Factory or Synapse Analytics and self-hosted integration runtime is within the customer's subscription. SAP RISE/ECS exposes the communication ports for these applications to use but has no knowledge or support about any details of the connected application or service.
+The customer is responsible for deployment and operation of the self-hosted integration runtime within their subscription and vnet. The communication between Azure PaaS services such as Data Factory or Synapse Analytics and the self-hosted integration runtime is within the customer's subscription. SAP RISE/ECS exposes the communication ports for these applications to use but has no knowledge or support about any details of the connected application or service.
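As a reference for the self-hosted integration runtime setup described above, the following is a minimal Azure PowerShell sketch of registering a self-hosted IR in an existing data factory and retrieving its node authentication key. The resource group, data factory, and IR names are placeholder assumptions.

```azurepowershell-interactive
# Sketch: create a self-hosted integration runtime definition in an existing data factory
# (names are placeholders; the data factory runs in the customer's subscription)
Set-AzDataFactoryV2IntegrationRuntime -ResourceGroupName "customer-rg" `
    -DataFactoryName "customer-adf" `
    -Name "SapSelfHostedIR" `
    -Type SelfHosted `
    -Description "Self-hosted IR for SAP RISE/ECS connectors"

# Retrieve the key used to register the IR node software installed on a VM
# that has network connectivity to the SAP RISE/ECS vnet
Get-AzDataFactoryV2IntegrationRuntimeKey -ResourceGroupName "customer-rg" `
    -DataFactoryName "customer-adf" `
    -Name "SapSelfHostedIR"
```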
> [!Note]
> Contact SAP for details on the communication paths available to you with SAP RISE and the necessary steps to open them. SAP must also be contacted for any SAP license details and any licensing implications of accessing SAP data through Azure Data Factory or Synapse connectors.
The customer is responsible for deployment and operation of the self-hosted inte
To learn about the overall support for the SAP data integration scenario, see the [SAP data integration using Azure Data Factory whitepaper](https://github.com/Azure/Azure-DataFactory/blob/master/whitepaper/SAP%20Data%20Integration%20using%20Azure%20Data%20Factory.pdf), with a detailed introduction to each SAP connector, comparison and guidance.

## On-premises data gateway
-Further Azure Services such as [Logic Apps](../../../logic-apps/logic-apps-using-sap-connector.md), [Power Apps](/connectors/saperp/) or [Power BI](/power-bi/connect-data/desktop-sap-bw-connector) communicate and exchange data with SAP systems through an on-premise data gateway. The on-premise data gateway is a virtual machine, running in Azure or on-premise. It provides secure data transfer between these Azure Services and your SAP systems.
+Further Azure services such as [Logic Apps](../../../logic-apps/logic-apps-using-sap-connector.md), [Power Apps](/connectors/saperp/) or [Power BI](/power-bi/connect-data/desktop-sap-bw-connector) communicate and exchange data with SAP systems through an on-premises data gateway. The on-premises data gateway is a virtual machine, running in Azure or on-premises. It provides secure data transfer between these Azure services and your SAP systems.
-With SAP RISE, the on-premise data gateway can connect to Azure Services running in customer's Azure subscription. This VM running the data gateway is deployed and operated by the customer. With below high-level architecture as overview, similar method can be used for either service.
+With SAP RISE, the on-premises data gateway can connect to Azure services running in the customer's Azure subscription. This VM running the data gateway is deployed and operated by the customer. With the below high-level architecture as an overview, a similar method can be used for either service.
[![SAP RISE/ECS accessed from Azure on-premise data gateway and connected Azure services.](./media/sap-rise-integration/sap-rise-on-premises-data-gateway.png)](./media/sap-rise-integration/sap-rise-on-premises-data-gateway.png#lightbox)
-The SAP RISE environment here provides access to the SAP ports for RFC and https described earlier. The communication ports are accessed by the private network address through the vnet peering or VPN site-to-site connection. The on-premise data gateway VM running in customer's Azure subscription uses the [SAP .NET connector](https://support.sap.com/en/product/connectors/msnet.html) to run RFC, BAPI or IDoc calls through the RFC connection. Additionally, depending on service and way the communication is setup, a way to connect to public IP of the SAP systems REST API through https might be required. The https connection to a public IP can be exposed through SAP RISE/ECS managed application gateway. This high level architecture shows the possible integration scenario. Alternatives to it such as using Logic Apps single tenant and [private endpoints](../../../logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md) to secure the communication and other can be seen as extension and are not described here in.
+The SAP RISE environment here provides access to the SAP ports for RFC and https described earlier. The communication ports are accessed by the private network address through the vnet peering or VPN site-to-site connection. The on-premises data gateway VM running in the customer's Azure subscription uses the [SAP .NET connector](https://support.sap.com/en/product/connectors/msnet.html) to run RFC, BAPI or IDoc calls through the RFC connection. Additionally, depending on the service and the way the communication is set up, a way to connect to the public IP of the SAP system's REST API through https might be required. The https connection to a public IP can be exposed through the SAP RISE/ECS managed application gateway. This high-level architecture shows a possible integration scenario. Alternatives, such as using single-tenant Logic Apps and [private endpoints](../../../logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md) to secure the communication, can be seen as extensions and are not described here.
-SAP RISE/ECS exposes the communication ports for these applications to use but has no knowledge about any details of the connected application or service running in a customer's subscription.
+SAP RISE/ECS exposes the communication ports for these applications to use but has no knowledge about any details of the connected application or service running in a customer's subscription.
> [!Note]
> SAP must be contacted for any SAP license details and any licensing implications of accessing SAP data through an Azure service connecting to the SAP system or database.

## Azure Monitoring for SAP with SAP RISE
-[Azure Monitoring for SAP](./monitor-sap-on-azure.md) is an Azure-native solution for monitoring your SAP system. It extends the Azure monitor platform monitoring capability with support to gather data about SAP NetWeaver, database, and operating system details.
+[Azure Monitoring for SAP](./monitor-sap-on-azure.md) is an Azure-native solution for monitoring your SAP system. It extends the Azure monitor platform monitoring capability with support to gather data about SAP NetWeaver, database, and operating system details.
> [!Note]
> SAP RISE/ECS is a fully managed service for your SAP landscape and thus Azure Monitoring for SAP is not intended to be utilized for such a managed environment.
-SAP RISE/ECS doesn't support any integration with Azure Monitoring for SAP. SAP RISE/ECS's own monitoring and reporting is provided to the customer as defined by your service description with SAP.
+SAP RISE/ECS doesn't support any integration with Azure Monitoring for SAP. SAP RISE/ECS's own monitoring and reporting is provided to the customer as defined by your service description with SAP.
## Next steps

Check out the documentation:
virtual-network Custom Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/custom-ip-address-prefix.md
When ready, you can issue the command to have your range advertised from Azure a
* IPv6 is currently not supported for custom IP prefixes.
+* Custom IP prefixes do not currently support derivation of IPs with Internet Routing Preference or that use Global Tier (for cross-region load-balancing).
+ * In regions with [availability zones](../../availability-zones/az-overview.md), a custom IP prefix must be specified as either zone-redundant or assigned to a specific zone. It can't be created with no zone specified in these regions. All IPs from the prefix must have the same zonal properties.
* The advertisements of IPs from a custom IP prefix over Azure ExpressRoute aren't currently supported.
-* Once provisioned, custom IP prefix ranges can't be moved to another subscription. Custom IP address prefix ranges can't be moved within resource groups in a single subscription. It's possible to derive a public IP prefix from a custom IP prefix in another subscription with the proper permissions.
-
-* Any IP addresses utilized from a custom IP prefix currently count against the standard public IP quota for a subscription and region. Contact Azure support to have quotas increased when required.
+* Once provisioned, custom IP prefix ranges can't be moved to another subscription. Custom IP address prefix ranges can't be moved within resource groups in a single subscription. It is possible to derive a public IP prefix from a custom IP prefix in another subscription with the proper permissions as described [here](create-custom-ip-address-prefix-powershell.md). A PowerShell sketch of this derivation follows this list.
-* IPs brought to Azure cannot currently be used for Windows Server Activation.
+* IPs brought to Azure may have a delay of up to two weeks before they can be used for Windows Server Activation.
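As a minimal PowerShell sketch of the cross-subscription derivation referenced in the list above (the prefix names, resource groups, and prefix length are placeholder assumptions; see the linked article for the authoritative steps):

```azurepowershell-interactive
# Sketch: derive a public IP prefix from a provisioned custom IP prefix.
# Names, resource groups, and the prefix length are placeholders. If the custom
# IP prefix lives in a different subscription, switch context (Set-AzContext)
# and ensure you have the proper permissions on that prefix first.
$customPrefix = Get-AzCustomIpPrefix -Name "myCustomIpPrefix" -ResourceGroupName "byoip-rg"

New-AzPublicIpPrefix -Name "myDerivedPublicIpPrefix" `
    -ResourceGroupName "workload-rg" `
    -Location $customPrefix.Location `
    -Sku Standard `
    -PrefixLength 30 `
    -CustomIpPrefix $customPrefix
```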
## Pricing
virtual-network Nat Gateway Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-gateway-resource.md
The following illustrates this concept as an additional flow to the preceding se
|::|::|::|
| 4 | 192.168.0.16:4285 | 65.52.0.2:80 |
-A NAT gateway will translate flow 4 to a source port that may already be in use for other destinations as well. See [Scale NAT gateway](#scale-nat-gateway) for more discussion on correctly sizing your IP address provisioning.
+A NAT gateway will translate flow 4 to a source port that may already be in use for other destinations as well (see flow 1 from table above). See [Scale NAT gateway](#scale-nat-gateway) for more discussion on correctly sizing your IP address provisioning.
| Flow | Source tuple | Source tuple after SNAT | Destination tuple |
|::|::|::|::|
virtual-wan Nat Rules Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/nat-rules-vpn-gateway.md
The following diagram shows the projected end result:
1. Ensure the site-to-site VPN gateway is able to peer with the on-premises BGP peer.
- In this example, the **Ingress NAT Rule** will need to translate 10.30.0.132 to 127.30.0.132. In order to do that, click 'Edit VPN site' to configure VPN site Link A BGP address to reflect this translated BGP peer address (127.30.0.132).
+ In this example, the **Ingress NAT Rule** will need to translate 10.30.0.132 to 127.30.0.132. In order to do that, click 'Edit VPN site' to configure VPN site Link A BGP address to reflect this translated BGP peer address (127.30.0.132).
:::image type="content" source="./media/nat-rules-vpn-gateway/edit-site-bgp.png" alt-text="Screenshot showing how to change the BGP peering IP." lightbox="./media/nat-rules-vpn-gateway/edit-site-bgp.png":::
The following diagram shows the projected end result:
For instance, if the on-premises BGP IP address is 10.30.0.133 and there is an **Ingress NAT Rule** that translates 10.30.0.0/24 to 127.30.0.0/24, the VPN site's **Link Connection BGP Address** must be configured to be the translated address (127.30.0.133).

* In Dynamic NAT, the on-premises BGP peer IP can't be part of the pre-NAT address range (**Internal Mapping**), as IP and port translations aren't fixed. If there is a need to translate the on-premises BGP peering IP, please create a separate **Static NAT Rule** that translates the BGP peering IP address only.
- For instance, if the on-premises network has an address space of 10.0.0.0/24 with an on-premise BGP peer IP of 10.0.0.1 and there is an **Ingress Dynamic NAT Rule** to translate 10.0.0.0/24 to 192.198.0.0/32, a separate **Ingress Static NAT Rule** translating 10.0.0.1/32 to 192.168.0.02/32 is required and the corresponding VPN site's **Link Connection BGP address** must be updated to the NAT-translated address (part of the External Mapping).
+ For instance, if the on-premises network has an address space of 10.0.0.0/24 with an on-premises BGP peer IP of 10.0.0.1 and there is an **Ingress Dynamic NAT Rule** to translate 10.0.0.0/24 to 192.198.0.0/32, a separate **Ingress Static NAT Rule** translating 10.0.0.1/32 to 192.168.0.2/32 is required, and the corresponding VPN site's **Link Connection BGP address** must be updated to the NAT-translated address (part of the External Mapping).
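To illustrate the separate static rule for the BGP peering IP, here's a hedged Azure PowerShell sketch. The resource group, gateway, and rule names are placeholder assumptions; the mappings mirror the example above.

```azurepowershell-interactive
# Sketch: ingress static NAT rule that translates only the on-premises BGP peer IP,
# created alongside the dynamic rule covering the rest of the address space.
# Resource group, gateway, and rule names are placeholders.
New-AzVpnGatewayNatRule -ResourceGroupName "vwan-rg" `
    -ParentResourceName "hub1-vpngateway" `
    -Name "BgpPeerStaticNat" `
    -Type Static `
    -Mode IngressSnat `
    -InternalMapping "10.0.0.1/32" `
    -ExternalMapping "192.168.0.2/32"
```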
### Ingress SNAT (VPN site with statically configured routes)
virtual-wan Work Remotely Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/work-remotely-support.md
You can connect to your resources in Azure over an IPsec/IKE (IKEv2) or OpenVPN
You have two options here:
-* Set up Site-to-site connectivity with any existing VPN device. When you connect the IPsec VPN device to Azure Virtual WAN hub, interconnectivity between the Point-to-site User VPN (Remote user) and Site-to-site VPN is automatic. For more information on how to set up Site-to-site VPN from your on-premise VPN device to Azure Virtual WAN, see [Create a site-to-site connection using Virtual WAN](virtual-wan-site-to-site-portal.md).
+* Set up Site-to-site connectivity with any existing VPN device. When you connect the IPsec VPN device to Azure Virtual WAN hub, interconnectivity between the Point-to-site User VPN (Remote user) and Site-to-site VPN is automatic. For more information on how to set up Site-to-site VPN from your on-premises VPN device to Azure Virtual WAN, see [Create a site-to-site connection using Virtual WAN](virtual-wan-site-to-site-portal.md).
* Connect your ExpressRoute circuit to the Virtual WAN hub. Connecting an ExpressRoute circuit requires deploying an ExpressRoute gateway in Virtual WAN. As soon as you have deployed one, interconnectivity between the Point-to-site User VPN and ExpressRoute user is automatic. To create the ExpressRoute connection, see [Create an ExpressRoute connection using Virtual WAN](virtual-wan-expressroute-portal.md). You can use an existing ExpressRoute circuit to connect to Azure Virtual WAN.
vpn-gateway Point To Site How To Radius Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-how-to-radius-ps.md
Title: 'Connect a computer to a virtual network using Point-to-Site and RADIUS authentication: PowerShell'
+ Title: 'Connect to a virtual network using P2S and RADIUS authentication: PowerShell'
-description: Learn how to connect Windows and OS X clients securely to a virtual network using P2S and RADIUS authentication.
-
+description: Learn how to connect VPN clients securely to a virtual network using P2S and RADIUS authentication.
- Previously updated : 07/27/2021 Last updated : 06/10/2022
-# Configure a Point-to-Site connection to a VNet using RADIUS authentication: PowerShell
-
-This article shows you how to create a VNet with a Point-to-Site connection that uses RADIUS authentication. This configuration is only available for the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md).
-
-A Point-to-Site (P2S) VPN gateway lets you create a secure connection to your virtual network from an individual client computer. Point-to-Site VPN connections are useful when you want to connect to your VNet from a remote location, such as when you are telecommuting from home or a conference. A P2S VPN is also a useful solution to use instead of a Site-to-Site VPN when you have only a few clients that need to connect to a VNet.
+# Configure a point-to-site connection to a VNet using RADIUS authentication: PowerShell
-A P2S VPN connection is started from Windows and Mac devices. Connecting clients can use the following authentication methods:
+This article shows you how to create a VNet with a point-to-site (P2S) connection that uses RADIUS authentication. This configuration is only available for the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md). You can create this configuration using PowerShell or the Azure portal.
-* RADIUS server
-* VPN Gateway native certificate authentication
-* Native Azure Active Directory authentication (Windows 10 and later only)
+A point-to-site VPN gateway lets you create a secure connection to your virtual network from an individual client computer. P2S VPN connections are useful when you want to connect to your VNet from a remote location, such as when you're telecommuting from home or a conference. A P2S VPN is also a useful solution to use instead of a site-to-site VPN when you have only a few clients that need to connect to a VNet.
-This article helps you configure a P2S configuration with authentication using RADIUS server. If you want to authenticate using generated certificates and VPN gateway native certificate authentication instead, see [Configure a Point-to-Site connection to a VNet using VPN gateway native certificate authentication](vpn-gateway-howto-point-to-site-rm-ps.md) or [Create an Azure Active Directory tenant for P2S OpenVPN protocol connections](openvpn-azure-ad-tenant.md) for Azure Active Directory authentication.
+A P2S VPN connection is started from Windows and Mac devices. This article helps you configure a P2S configuration that uses a RADIUS server for authentication. If you want to authenticate using a different method, see the following articles:
-![Diagram that shows the P2S configuration with authentication using a RADIUS server.](./media/point-to-site-how-to-radius-ps/p2sradius.png)
+* [Certificate authentication](vpn-gateway-howto-point-to-site-resource-manager-portal.md)
+* [Azure AD authentication](openvpn-azure-ad-tenant.md)
-Point-to-Site connections do not require a VPN device or a public-facing IP address. P2S creates the VPN connection over either SSTP (Secure Socket Tunneling Protocol), OpenVPN or IKEv2.
+P2S connections don't require a VPN device or a public-facing IP address. P2S creates the VPN connection over either SSTP (Secure Socket Tunneling Protocol), OpenVPN or IKEv2.
* SSTP is a TLS-based VPN tunnel that is supported only on Windows client platforms. It can penetrate firewalls, which makes it a good option to connect Windows devices to Azure from anywhere. On the server side, we support TLS version 1.2 only. For improved performance, scalability and security, consider using OpenVPN protocol instead.
Point-to-Site connections do not require a VPN device or a public-facing IP addr
* IKEv2 VPN, a standards-based IPsec VPN solution. IKEv2 VPN can be used to connect from Mac devices (macOS versions 10.11 and above).
-P2S connections require the following:
+For this configuration, connections require the following:
-* A RouteBased VPN gateway. 
+* A RouteBased VPN gateway.
* A RADIUS server to handle user authentication. The RADIUS server can be deployed on-premises, or in the Azure VNet. You can also configure two RADIUS servers for high availability.
-* A VPN client configuration package for the Windows devices that will connect to the VNet. A VPN client configuration package provides the settings required for a VPN client to connect over P2S.
+* The VPN client profile configuration package. The VPN client profile configuration package is a package that you generate. It provides the settings required for a VPN client to connect over P2S.
## <a name="aboutad"></a>About Active Directory (AD) Domain Authentication for P2S VPNs

AD Domain authentication allows users to sign in to Azure using their organization domain credentials. It requires a RADIUS server that integrates with the AD server. Organizations can also leverage their existing RADIUS deployment.
-
-The RADIUS server can reside on-premises, or in your Azure VNet. During authentication, the VPN gateway acts as a pass-through and forwards authentication messages back and forth between the RADIUS server and the connecting device. It's important for the VPN gateway to be able to reach the RADIUS server. If the RADIUS server is located on-premises, then a VPN Site-to-Site connection from Azure to the on-premises site is required.
-Apart from Active Directory, a RADIUS server can also integrate with other external identity systems. This opens up plenty of authentication options for Point-to-Site VPNs, including MFA options. Check your RADIUS server vendor documentation to get the list of identity systems it integrates with.
+The RADIUS server can reside on-premises, or in your Azure VNet. During authentication, the VPN gateway acts as a pass-through and forwards authentication messages back and forth between the RADIUS server and the connecting device. It's important for the VPN gateway to be able to reach the RADIUS server. If the RADIUS server is located on-premises, then a VPN site-to-site connection from Azure to the on-premises site is required.
-![Connection diagram - RADIUS](./media/point-to-site-how-to-radius-ps/radiusimage.png)
+Apart from Active Directory, a RADIUS server can also integrate with other external identity systems. This opens up plenty of authentication options for P2S VPNs, including MFA options. Check your RADIUS server vendor documentation to get the list of identity systems it integrates with.
+ > [!IMPORTANT]
->Only a VPN Site-to-Site connection can be used for connecting to a RADIUS server on-premises. An ExpressRoute connection cannot be used.
->
+> Only a site-to-site VPN connection can be used for connecting to a RADIUS server on-premises. An ExpressRoute connection can't be used.
>

## <a name="before"></a>Before beginning
Verify that you have an Azure subscription. If you don't already have an Azure s
You can use the example values to create a test environment, or refer to these values to better understand the examples in this article. You can either use the steps as a walk-through and use the values without changing them, or change them to reflect your environment.

* **Name: VNet1**
-* **Address space: 192.168.0.0/16** and **10.254.0.0/16**<br>For this example, we use more than one address space to illustrate that this configuration works with multiple address spaces. However, multiple address spaces are not required for this configuration.
+* **Address space: 10.1.0.0/16** and **10.254.0.0/16**<br>For this example, we use more than one address space to illustrate that this configuration works with multiple address spaces. However, multiple address spaces aren't required for this configuration.
* **Subnet name: FrontEnd**
- * **Subnet address range: 192.168.1.0/24**
+ * **Subnet address range: 10.1.0.0/24**
* **Subnet name: BackEnd**
  * **Subnet address range: 10.254.1.0/24**
* **Subnet name: GatewaySubnet**<br>The Subnet name *GatewaySubnet* is mandatory for the VPN gateway to work.
- * **GatewaySubnet address range: 192.168.200.0/24**
-* **VPN client address pool: 172.16.201.0/24**<br>VPN clients that connect to the VNet using this Point-to-Site connection receive an IP address from the VPN client address pool.
-* **Subscription:** If you have more than one subscription, verify that you are using the correct one.
-* **Resource Group: TestRG**
+ * **GatewaySubnet address range: 10.1.255.0/27**
+* **VPN client address pool: 172.16.201.0/24**<br>VPN clients that connect to the VNet using this P2S connection receive an IP address from the VPN client address pool.
+* **Subscription:** If you have more than one subscription, verify that you're using the correct one.
+* **Resource Group: TestRG1**
* **Location: East US**
* **DNS Server: IP address** of the DNS server that you want to use for name resolution for your VNet. (optional)
* **GW Name: Vnet1GW**
You can use the example values to create a test environment, or refer to these v
## <a name="signin"></a>1. Set the variables
-Declare the variables that you want to use. Use the following sample, substituting the values for your own when necessary. If you close your PowerShell/Cloud Shell session at any point during the exercise, just copy and paste the values again to re-declare the variables.
+Declare the variables that you want to use. Use the following sample, substituting the values for your own when necessary. If you close your PowerShell/Cloud Shell session at any point during the exercise, just copy and paste the values again to redeclare the variables.
```azurepowershell-interactive
$VNetName = "VNet1"
$FESubName = "FrontEnd"
$BESubName = "Backend"
$GWSubName = "GatewaySubnet"
- $VNetPrefix1 = "192.168.0.0/16"
+ $VNetPrefix1 = "10.1.0.0/16"
$VNetPrefix2 = "10.254.0.0/16"
- $FESubPrefix = "192.168.1.0/24"
+ $FESubPrefix = "10.1.0.0/24"
$BESubPrefix = "10.254.1.0/24"
- $GWSubPrefix = "192.168.200.0/26"
+ $GWSubPrefix = "10.1.255.0/27"
$VPNClientAddressPool = "172.16.201.0/24"
- $RG = "TestRG"
+ $RG = "TestRG1"
$Location = "East US"
$GWName = "VNet1GW"
$GWIPName = "VNet1GWPIP"
- $GWIPconfName = "gwipconf"
+ $GWIPconfName = "gwipconf1"
```

## 2. <a name="vnet"></a>Create the resource group, VNet, and Public IP address
The following steps create a resource group and a virtual network in the resourc
1. Create a resource group.

```azurepowershell-interactive
- New-AzResourceGroup -Name "TestRG" -Location "East US"
+ New-AzResourceGroup -Name "TestRG1" -Location "East US"
```
-2. Create the subnet configurations for the virtual network, naming them *FrontEnd*, *BackEnd*, and *GatewaySubnet*. These prefixes must be part of the VNet address space that you declared.
+
+1. Create the subnet configurations for the virtual network, naming them *FrontEnd*, *BackEnd*, and *GatewaySubnet*. These prefixes must be part of the VNet address space that you declared.
```azurepowershell-interactive
- $fesub = New-AzVirtualNetworkSubnetConfig -Name "FrontEnd" -AddressPrefix "192.168.1.0/24"
+ $fesub = New-AzVirtualNetworkSubnetConfig -Name "FrontEnd" -AddressPrefix "10.1.0.0/24"
$besub = New-AzVirtualNetworkSubnetConfig -Name "Backend" -AddressPrefix "10.254.1.0/24"
- $gwsub = New-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "192.168.200.0/24"
+ $gwsub = New-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.1.255.0/27"
```
-3. Create the virtual network.
- In this example, the -DnsServer server parameter is optional. Specifying a value does not create a new DNS server. The DNS server IP address that you specify should be a DNS server that can resolve the names for the resources you are connecting to from your VNet. For this example, we used a private IP address, but it is likely that this is not the IP address of your DNS server. Be sure to use your own values. The value you specify is used by the resources that you deploy to the VNet, not by the P2S connection.
+1. Create the virtual network.
+
+ In this example, the -DnsServer parameter is optional. Specifying a value doesn't create a new DNS server. The DNS server IP address that you specify should be a DNS server that can resolve the names for the resources you're connecting to from your VNet. For this example, we used a private IP address, but it's likely that this isn't the IP address of your DNS server. Be sure to use your own values. The value you specify is used by the resources that you deploy to the VNet, not by the P2S connection.
```azurepowershell-interactive
- New-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "TestRG" -Location "East US" -AddressPrefix "192.168.0.0/16","10.254.0.0/16" -Subnet $fesub, $besub, $gwsub -DnsServer 10.2.1.3
+ New-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "TestRG1" -Location "East US" -AddressPrefix "10.1.0.0/16","10.254.0.0/16" -Subnet $fesub, $besub, $gwsub -DnsServer 10.2.1.3
```
-4. A VPN gateway must have a Public IP address. You first request the IP address resource, and then refer to it when creating your virtual network gateway. The IP address is dynamically assigned to the resource when the VPN gateway is created. VPN Gateway currently only supports *Dynamic* Public IP address allocation. You cannot request a Static Public IP address assignment. However, this does not mean that the IP address changes after it has been assigned to your VPN gateway. The only time the Public IP address changes is when the gateway is deleted and re-created. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of your VPN gateway.
+
+1. A VPN gateway must have a Public IP address. You first request the IP address resource, and then refer to it when creating your virtual network gateway. The IP address is dynamically assigned to the resource when the VPN gateway is created. VPN Gateway currently only supports *Dynamic* Public IP address allocation. You can't request a Static Public IP address assignment. However, this doesn't mean that the IP address changes after it has been assigned to your VPN gateway. The only time the Public IP address changes is when the gateway is deleted and re-created. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of your VPN gateway.
Specify the variables to request a dynamically assigned Public IP address.

```azurepowershell-interactive
- $vnet = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "TestRG"
+ $vnet = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "TestRG1"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
- $pip = New-AzPublicIpAddress -Name "VNet1GWPIP" -ResourceGroupName "TestRG" -Location "East US" -AllocationMethod Dynamic
- $ipconf = New-AzVirtualNetworkGatewayIpConfig -Name "gwipconf" -Subnet $subnet -PublicIpAddress $pip
+ $pip = New-AzPublicIpAddress -Name "VNet1GWPIP" -ResourceGroupName "TestRG1" -Location "East US" -AllocationMethod Dynamic
+ $ipconf = New-AzVirtualNetworkGatewayIpConfig -Name "gwipconf1" -Subnet $subnet -PublicIpAddress $pip
```

## 3. <a name="radius"></a>Set up your RADIUS server
-Before creating and configuring the virtual network gateway, your RADIUS server should be configured correctly for authentication.
+Before you create and configure the virtual network gateway, your RADIUS server should be configured correctly for authentication.
1. If you don't have a RADIUS server deployed, deploy one. For deployment steps, refer to the setup guide provided by your RADIUS vendor.
-2. Configure the VPN gateway as a RADIUS client on the RADIUS. When adding this RADIUS client, specify the virtual network GatewaySubnet that you created. 
-3. Once the RADIUS server is set up, get the RADIUS server's IP address and the shared secret that RADIUS clients should use to talk to the RADIUS server. If the RADIUS server is in the Azure VNet, use the CA IP of the RADIUS server VM.
+1. Configure the VPN gateway as a RADIUS client on the RADIUS server. When adding this RADIUS client, specify the virtual network GatewaySubnet that you created.
+1. Once the RADIUS server is set up, get the RADIUS server's IP address and the shared secret that RADIUS clients should use to talk to the RADIUS server. If the RADIUS server is in the Azure VNet, use the CA IP of the RADIUS server VM.
The [Network Policy Server (NPS)](/windows-server/networking/technologies/nps/nps-top) article provides guidance about configuring a Windows RADIUS server (NPS) for AD domain authentication.
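If you use Windows Server NPS as the RADIUS server, registering the gateway as a RADIUS client can also be scripted. The following is a minimal sketch run on the NPS server, assuming the example GatewaySubnet range and a placeholder client name and shared secret (NPS accepts a single IP or, on supported editions, an address range).

```powershell
# Sketch: register the VPN gateway's GatewaySubnet range as a RADIUS client on NPS.
# Run on the NPS server; the client name, address range, and shared secret are placeholders.
New-NpsRadiusClient -Name "AzureVpnGateway" `
    -Address "10.1.255.0/27" `
    -SharedSecret "YourRadiusSharedSecret"
```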
New-AzVirtualNetworkGateway -Name $GWName -ResourceGroupName $RG `
```

## 5. <a name="addradius"></a>Add the RADIUS server and client address pool
-
-* The -RadiusServer can be specified by name or by IP address. If you specify the name and the server resides on-premises, then the VPN gateway may not be able to resolve the name. If that’s the case, then it's better to specify the IP address of the server. 
+
+* The -RadiusServer can be specified by name or by IP address. If you specify the name and the server resides on-premises, then the VPN gateway may not be able to resolve the name. If that's the case, then it's better to specify the IP address of the server.
* The -RadiusSecret should match what is configured on your RADIUS server.
-* The -VpnClientAddressPool is the range from which the connecting VPN clients receive an IP address. Use a private IP address range that does not overlap with the on-premises location that you will connect from, or with the VNet that you want to connect to. Ensure that you have a large enough address pool configured.  
+* The -VpnClientAddressPool is the range from which the connecting VPN clients receive an IP address. Use a private IP address range that doesn't overlap with the on-premises location that you'll connect from, or with the VNet that you want to connect to. Ensure that you have a large enough address pool configured.
1. Create a secure string for the RADIUS secret.
New-AzVirtualNetworkGateway -Name $GWName -ResourceGroupName $RG `
$Secure_Secret=Read-Host -AsSecureString -Prompt "RadiusSecret"
```
-2. You are prompted to enter the RADIUS secret. The characters that you enter will not be displayed and instead will be replaced by the "*" character.
+1. You're prompted to enter the RADIUS secret. The characters that you enter won't be displayed and instead will be replaced by the "*" character.
```azurepowershell-interactive
RadiusSecret:***
```
-3. Add the VPN client address pool and the RADIUS server information.
+
+1. Add the VPN client address pool and the RADIUS server information.
For SSTP configurations:
New-AzVirtualNetworkGateway -Name $GWName -ResourceGroupName $RG `
```azurepowershell-interactive
$Gateway = Get-AzVirtualNetworkGateway -ResourceGroupName $RG -Name $GWName
- Set-AzVirtualNetworkGateway -VirtualNetworkGateway $Gateway -VpnClientRootCertificates @()
+ Set-AzVirtualNetworkGateway -VirtualNetworkGateway $Gateway -VpnClientRootCertificates @()
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $Gateway `
-VpnClientAddressPool "172.16.201.0/24" -VpnClientProtocol "OpenVPN" `
-RadiusServerAddress "10.51.0.15" -RadiusServerSecret $Secure_Secret
```

For IKEv2 configurations:

```azurepowershell-interactive
New-AzVirtualNetworkGateway -Name $GWName -ResourceGroupName $RG `
-RadiusServerAddress "10.51.0.15" -RadiusServerSecret $Secure_Secret
```
- For SSTP + IKEv2
+ For SSTP + IKEv2:
```azurepowershell-interactive
$Gateway = Get-AzVirtualNetworkGateway -ResourceGroupName $RG -Name $GWName
New-AzVirtualNetworkGateway -Name $GWName -ResourceGroupName $RG `
-RadiusServerAddress "10.51.0.15" -RadiusServerSecret $Secure_Secret
```
- To specify **two** RADIUS servers use the following syntax. Modify the **-VpnClientProtocol** value as needed
+ To specify **two** RADIUS servers, use the following syntax. Modify the **-VpnClientProtocol** value as needed.
```azurepowershell-interactive
$radiusServer1 = New-AzRadiusServer -RadiusServerAddress 10.1.0.15 -RadiusServerSecret $radiuspd -RadiusServerScore 30
New-AzVirtualNetworkGateway -Name $GWName -ResourceGroupName $RG `
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $actual -VpnClientAddressPool 201.169.0.0/16 -VpnClientProtocol "IkeV2" -RadiusServerList $radiusServers
```
-## 6. <a name="vpnclient"></a>Download the VPN client configuration package and set up the VPN client
-
-The VPN client configuration lets devices connect to a VNet over a P2S connection. To generate a VPN client configuration package and set up the VPN client, see [Create a VPN Client Configuration for RADIUS authentication](point-to-site-vpn-client-configuration-radius.md).
-
-## <a name="connect"></a>7. Connect to Azure
-
-### To connect from a Windows VPN client
+## 6. <a name="vpnclient"></a>Configure the VPN client
-1. To connect to your VNet, on the client computer, navigate to VPN connections and locate the VPN connection that you created. It is named the same name as your virtual network. Enter your domain credentials and click 'Connect'. A pop-up message requesting elevated rights appears. Accept it and enter the credentials.
+The VPN client profile configuration packages contain the settings that help you configure VPN client profiles for a connection to the Azure VNet.
- ![VPN client connects to Azure](./media/point-to-site-how-to-radius-ps/client.png)
-2. Your connection is established.
+To generate a VPN client configuration package and configure a VPN client, see one of the following articles:
- ![Connection established](./media/point-to-site-how-to-radius-ps/connected.png)
+* [RADIUS - certificate authentication for VPN clients](point-to-site-vpn-client-configuration-radius-certificate.md)
+* [RADIUS - password authentication for VPN clients](point-to-site-vpn-client-configuration-radius-password.md)
+* [RADIUS - other authentication methods for VPN clients](point-to-site-vpn-client-configuration-radius-other.md)
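If you prefer to generate the profile configuration package with PowerShell instead of the portal, the following is a minimal sketch using the example gateway and resource group values from this article. The -AuthenticationMethod value shown is an assumption for password-based RADIUS authentication (EAP-MSCHAPv2); use "EapTls" for certificate-based RADIUS authentication.

```azurepowershell-interactive
# Sketch: generate the VPN client profile configuration package for the example gateway.
# "EapMSCHAPv2" assumes password authentication against the RADIUS server.
$profile = New-AzVpnClientConfiguration -ResourceGroupName "TestRG1" `
    -Name "VNet1GW" `
    -AuthenticationMethod "EapMSCHAPv2"

# The returned object exposes a URL from which the package can be downloaded
$profile.VpnProfileSASUrl
```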
-### Connect from a Mac VPN client
+## <a name="connect"></a>7. Connect to Azure
-From the Network dialog box, locate the client profile that you want to use, then click **Connect**.
+Use the steps in one of the following articles to connect to Azure.
- ![Mac connection](./media/vpn-gateway-howto-point-to-site-rm-ps/applyconnect.png)
+* [Windows native VPN client](point-to-site-vpn-client-configuration-radius-certificate.md#windows-vpn-client)
+* [macOS VPN client](point-to-site-vpn-client-configuration-radius-certificate.md#mac-macos-vpn-client)
## <a name="verify"></a>To verify your connection

1. To verify that your VPN connection is active, open an elevated command prompt, and run *ipconfig /all*.
-2. View the results. Notice that the IP address you received is one of the addresses within the Point-to-Site VPN Client Address Pool that you specified in your configuration. The results are similar to this example:
+1. View the results. Notice that the IP address you received is one of the addresses within the P2S VPN Client Address Pool that you specified in your configuration. The results are similar to this example:
```
PPP adapter VNet1:
To troubleshoot a P2S connection, see [Troubleshooting Azure point-to-site conne
* Verify that the VPN client configuration package was generated after the DNS server IP addresses were specified for the VNet. If you updated the DNS server IP addresses, generate and install a new VPN client configuration package.
-* Use 'ipconfig' to check the IPv4 address assigned to the Ethernet adapter on the computer from which you are connecting. If the IP address is within the address range of the VNet that you are connecting to, or within the address range of your VPNClientAddressPool, this is referred to as an overlapping address space. When your address space overlaps in this way, the network traffic doesn't reach Azure, it stays on the local network.
+* Use 'ipconfig' to check the IPv4 address assigned to the Ethernet adapter on the computer from which you're connecting. If the IP address is within the address range of the VNet that you're connecting to, or within the address range of your VPNClientAddressPool, this is referred to as an overlapping address space. When your address space overlaps in this way, the network traffic doesn't reach Azure, it stays on the local network.
## <a name="faq"></a>FAQ
-This FAQ applies to P2S using RADIUS authentication
-
+For FAQ information, see the [Point-to-site - RADIUS authentication](vpn-gateway-vpn-faq.md#P2SRADIUS) section of the FAQ.
## Next steps
vpn-gateway Vpn Gateway Modify Local Network Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-modify-local-network-gateway-portal.md
Title: 'Modify gateway IP address settings: Azure portal'
-description: Learn how to change IP address prefixes for your local network gateway using the Azure portal.
+description: Learn how to change IP address prefixes and configure BGP Settings for your local network gateway using the Azure portal.
Previously updated : 10/28/2021 Last updated : 06/13/2022
# Modify local network gateway settings using the Azure portal
-Sometimes the settings for your local network gateway AddressPrefix or GatewayIPAddress change. This article shows you how to modify your local network gateway settings. You can also modify these settings using a different method by selecting a different option from the following list:
+Sometimes the settings for your local network gateway AddressPrefix or GatewayIPAddress change, or you need to configure BGP settings. This article shows you how to modify your local network gateway settings. You can also modify these settings using a different method by selecting a different option from the following list:
> [!div class="op_single_selector"]
> * [Azure portal](vpn-gateway-modify-local-network-gateway-portal.md)
Sometimes the settings for your local network gateway AddressPrefix or GatewayIP
## <a name="configure-lng"></a>Local network gateway configuration
-The screenshot below shows the **Configuration** page of a local network gateway resource using public IP address endpoint:
+The screenshot below shows the **Configuration** page of a local network gateway resource using a public IP address endpoint. **BGP Settings** is selected to reveal the available settings.
This is the configuration page with an FQDN endpoint:

## <a name="ip"></a>To modify the gateway IP address or FQDN

> [!NOTE]
-> You cannot change a local network gateway between FQDN endpoint and IP address endpoint. You must delete all connections associated with this local network gateway, create a new one with the new endpoint (IP address or FQDN), then recreate the connections.
+> You can't change a local network gateway between FQDN endpoint and IP address endpoint. You must delete all connections associated with this local network gateway, create a new one with the new endpoint (IP address or FQDN), then recreate the connections.
>

If the VPN device to which you want to connect has changed its public IP address, modify the local network gateway using the following steps:

1. On the Local Network Gateway resource, in the **Settings** section, select **Configuration**.
-2. In the **IP address** box, modify the IP address.
-3. Select **Save** to save the settings.
+1. In the **IP address** box, modify the IP address.
+1. Select **Save** to save the settings.
If the VPN device to which you want to connect has changed its FQDN (Fully Qualified Domain Name), modify the local network gateway using the following steps:

1. On the Local Network Gateway resource, in the **Settings** section, select **Configuration**.
-2. In the **FQDN** box, modify the domain name.
-3. Select **Save** to save the settings.
+1. In the **FQDN** box, modify the domain name.
+1. Select **Save** to save the settings.
## <a name="ipaddprefix"></a>To modify IP address prefixes

To add additional address prefixes:

1. On the Local Network Gateway resource, in the **Settings** section, select **Configuration**.
-2. Add the IP address space in the *Add additional address range* box.
-3. Select **Save** to save your settings.
+1. Add the IP address space in the *Add additional address range* box.
+1. Select **Save** to save your settings.
To remove address prefixes:

1. On the Local Network Gateway resource, in the **Settings** section, select **Configuration**.
-2. Select the **'...'** on the line containing the prefix you want to remove.
-3. Select **Remove**.
-4. Select **Save** to save your settings.
+1. Select the **'...'** on the line containing the prefix you want to remove.
+1. Select **Remove**.
+1. Select **Save** to save your settings.
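If you manage the local network gateway with PowerShell rather than the portal, the same prefix change can be scripted. A minimal sketch, assuming placeholder resource names; note that -AddressPrefix replaces the full list, so supply every prefix the gateway should keep.

```azurepowershell-interactive
# Sketch: replace the address prefixes on an existing local network gateway.
# Names and prefixes are placeholders; the full, final list of prefixes must be supplied.
$lng = Get-AzLocalNetworkGateway -Name "Site1" -ResourceGroupName "TestRG1"

Set-AzLocalNetworkGateway -LocalNetworkGateway $lng `
    -AddressPrefix @("10.101.0.0/24", "10.101.1.0/24", "10.101.2.0/24")
```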
## <a name="bgp"></a>To modify BGP settings

To add or update BGP settings:

1. On the Local Network Gateway resource, in the **Settings** section, select **Configuration**.
-2. Select **"Configure BGP settings"** to display or update the BGP configurations for this local network gateway
-3. Add or update the Autonomous system number or BGP peer IP address in the corresponding fields
-4. Select **Save** to save your settings.
+1. For **Configure BGP settings**, select **Yes** to display or update the BGP configurations for this local network gateway.
+1. Add or update the Autonomous system number or BGP peer IP address in the corresponding fields.
+1. Select **Save** to save your settings.
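The equivalent BGP change can also be scripted. A minimal PowerShell sketch, assuming placeholder resource names, ASN, and peering address; the existing address prefixes are passed through unchanged.

```azurepowershell-interactive
# Sketch: add or update BGP settings on an existing local network gateway.
# Resource names, ASN, and peering address are placeholders.
$lng = Get-AzLocalNetworkGateway -Name "Site1" -ResourceGroupName "TestRG1"

Set-AzLocalNetworkGateway -LocalNetworkGateway $lng `
    -AddressPrefix $lng.LocalNetworkAddressSpace.AddressPrefixes `
    -Asn 65050 `
    -BgpPeeringAddress "10.10.1.1"
```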
To remove BGP settings:

1. On the Local Network Gateway resource, in the **Settings** section, select **Configuration**.
-2. Unselect the **"Configure BGP settings"** to remove the existing BGP ASN and BGP peer IP address
-3. Select **Save** to save your settings.
+1. For **Configure BGP settings**, select **No** to remove the existing BGP ASN and BGP peer IP address.
+1. Select **Save** to save your settings.
## Next steps
vpn-gateway Vpn Gateway Vpn Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-vpn-faq.md
Previously updated : 05/25/2022 Last updated : 06/10/2022 # VPN Gateway FAQ