Updates from: 06/14/2022 01:11:26
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Hr Attribute Retrieval Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-attribute-retrieval-issues.md
# Troubleshoot HR attribute retrieval issues
-## Provisioning app is not fetching all Workday attributes
-**Applies to:**
-* Workday to on-premises Active Directory user provisioning
-* Workday to Azure Active Directory user provisioning
-
-| Troubleshooting | Details |
-|-- | -- |
-| **Issue** | You have just setup the Workday inbound provisioning app and successfully connected to the Workday tenant URL. You ran a test sync and you observed that the provisioning app is not retrieving all attributes from Workday. Only some attributes are read and provisioned to the target. |
-| **Cause** | By default, the Workday provisioning app ships with attribute mapping and XPATH definitions that work with Workday Web Services (WWS) v21.1. When configuring connectivity to Workday in the provisioning app, if you explicitly specified the WWS API version (example: `https://wd3-impl-services1.workday.com/ccx/service/contoso4/Human_Resources/v34.0`), then you may run into this issue, because of the mismatch between WWS API version and the XPATH definitions. |
-| **Resolution** | * *Option 1*: Remove the WWS API version information from the URL and use the default WWS API version v21.1 <br> * *Option 2*: Manually update the XPATH API expressions so it is compatible with your preferred WWS API version. Update the **XPATH API expressions** under **Attribute Mapping -> Advanced Options -> Edit attribute list for Workday** referring to the section [Workday attribute reference](../app-provisioning/workday-attribute-reference.md#xpath-values-for-workday-web-services-wws-api-v30) |
-
-## Provisioning app is not fetching Workday integration system attributes / calculated fields
-**Applies to:**
-* Workday to on-premises Active Directory user provisioning
-* Workday to Azure Active Directory user provisioning
-
-| Troubleshooting | Details |
-|-- | -- |
-| **Issue** | You have just setup the Workday inbound provisioning app and successfully connected to the Workday tenant URL. You have an integration system configured in Workday and you have configured XPATHs that point to attributes in the Workday Integration System. However, the Azure AD provisioning app is not fetching values associated with these integration system attributes or calculated fields. |
-| **Cause** | This is a known limitation. The Workday provisioning app currently does not support fetching calculated fields/integration system attributes. |
-| **Resolution** | There is no workaround for this limitation. |
+## Issue fetching Workday attributes
++
+| **Applies to** |
+|--|
+| * Workday to on-premises Active Directory user provisioning <br> * Workday to Azure Active Directory user provisioning |
+| **Issue Description** |
+| You have just configured the Workday inbound provisioning app and successfully connected to the Workday tenant URL. You ran a test sync and you observed that the provisioning app is not retrieving certain attributes from Workday. Only some attributes are read and provisioned to the target. |
+| **Probable Cause** |
+| By default, the Workday provisioning app ships with attribute mapping and XPATH definitions that work with Workday Web Services (WWS) v21.1. When configuring connectivity to Workday in the provisioning app, if you explicitly specified the WWS API version (example: `https://wd3-impl-services1.workday.com/ccx/service/contoso4/Human_Resources/v34.0`), then you may run into this issue, because of the mismatch between WWS API version and the XPATH definitions. |
+| **Resolution Options** |
+| * *Option 1*: Remove the WWS API version information from the URL and use the default WWS API version v21.1 <br> * *Option 2*: Manually update the XPATH API expressions so they're compatible with your preferred WWS API version. Update the **XPATH API expressions** under **Attribute Mapping -> Advanced Options -> Edit attribute list for Workday**, referring to the section [Workday attribute reference](../app-provisioning/workday-attribute-reference.md#xpath-values-for-workday-web-services-wws-api-v30) |
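*Option 1* above amounts to dropping the trailing version segment from the tenant URL. A minimal sketch of that transformation (the helper name is hypothetical; the URL is the example from the cause description):

```python
import re

def strip_wws_version(tenant_url: str) -> str:
    """Drop a trailing WWS API version segment (for example, /v34.0) so the
    provisioning app falls back to the default v21.1 XPATH definitions."""
    return re.sub(r"/v\d+(?:\.\d+)?/?$", "", tenant_url)

url = "https://wd3-impl-services1.workday.com/ccx/service/contoso4/Human_Resources/v34.0"
print(strip_wws_version(url))
# https://wd3-impl-services1.workday.com/ccx/service/contoso4/Human_Resources
```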
+
+## Issue fetching Workday calculated fields
+
+| **Applies to** |
+|--|
+| * Workday to on-premises Active Directory user provisioning <br> * Workday to Azure Active Directory user provisioning |
+| **Issue Description** |
+| You have just configured the Workday inbound provisioning app and successfully connected to the Workday tenant URL. You have an integration system configured in Workday and you have configured XPATHs that point to attributes in the Workday Integration System. However, the Azure AD provisioning app isn't fetching values associated with these integration system attributes or calculated fields. |
+| **Cause** |
+| This is a known limitation. The Workday provisioning app currently doesn't support fetching calculated fields/integration system attributes using the *Field_And_Parameter_Criteria_Data* Get_Workers request filter. |
+| **Resolution Options** |
+| You could consider a workaround of using either Workday Provisioning Groups or the Workday Custom ID field. See details below. |
+
+**Suggested workarounds**
+* **Option 1: Using Workday Provisioning Groups**: Check if the calculated field value can be represented as a provisioning group in Workday. Using the same logic that is used for the calculated field, your Workday Admin may be able to assign a Provisioning Group to the user. Reference Workday doc that requires Workday login: [Set Up Account Provisioning Groups](https://doc.workday.com/reader/3DMnG~27o049IYFWETFtTQ/keT9jI30zCzj4Nu9pJfGeQ). Once configured, this Provisioning Group assignment can be [retrieved in the provisioning job](../app-provisioning/workday-integration-reference.md#example-3-retrieving-provisioning-group-assignments) and used in attribute mappings and scoping filters.
+* **Option 2: Using Workday Custom IDs**: Check if the calculated field value can be represented as a Custom ID on the Worker Profile. Use `Maintain Custom ID Type` task in Workday to define a new type and populate values in this custom ID. Make sure the [Workday ISU account used for the integration](../saas-apps/workday-inbound-tutorial.md#configuring-domain-security-policy-permissions) has domain security permission for `Person Data: ID Information`. For example, you can define "External_Payroll_ID" as a custom ID in Workday and retrieve it using the XPATH: `wd:Worker/wd:Worker_Data/wd:Personal_Data/wd:Identification_Data/wd:Custom_ID/wd:Custom_ID_Data[wd:ID_Type_Reference/wd:ID[@wd:type=\"Custom_ID_Type_ID\"]=\"External_Payroll_ID\"]/wd:ID/text()`
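The matching logic behind that XPATH can be sketched outside Workday as well. The fragment below is a made-up miniature of a Get_Workers response (element contents are illustrative, not real Workday data); the helper walks the `Custom_ID_Data` entries and returns the ID whose type reference matches the requested custom ID type:

```python
import xml.etree.ElementTree as ET

# Made-up miniature of a Get_Workers worker element; the structure mirrors
# the XPATH in Option 2, but the values are illustrative.
SAMPLE = """
<wd:Worker xmlns:wd="urn:com.workday/bsvc">
  <wd:Worker_Data>
    <wd:Personal_Data>
      <wd:Identification_Data>
        <wd:Custom_ID>
          <wd:Custom_ID_Data>
            <wd:ID>PAY-0042</wd:ID>
            <wd:ID_Type_Reference>
              <wd:ID wd:type="Custom_ID_Type_ID">External_Payroll_ID</wd:ID>
            </wd:ID_Type_Reference>
          </wd:Custom_ID_Data>
        </wd:Custom_ID>
      </wd:Identification_Data>
    </wd:Personal_Data>
  </wd:Worker_Data>
</wd:Worker>
"""

NS = {"wd": "urn:com.workday/bsvc"}

def custom_id(worker_xml: str, id_type: str):
    """Return the Custom_ID value whose ID_Type_Reference matches id_type."""
    root = ET.fromstring(worker_xml)
    for data in root.iterfind(".//wd:Custom_ID/wd:Custom_ID_Data", NS):
        ref = data.find("wd:ID_Type_Reference/wd:ID", NS)
        if (ref is not None
                and ref.get("{urn:com.workday/bsvc}type") == "Custom_ID_Type_ID"
                and ref.text == id_type):
            value = data.find("wd:ID", NS)  # direct child, not the nested reference ID
            return value.text if value is not None else None
    return None

print(custom_id(SAMPLE, "External_Payroll_ID"))  # PAY-0042
```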
+## Next steps
active-directory Application Proxy Add On Premises Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-add-on-premises-application.md
# Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory
-Azure Active Directory (Azure AD) has an Application Proxy service that enables users to access on-premises applications by signing in with their Azure AD account. To learn more about Application Proxy, see [What is App Proxy?](what-is-application-proxy.md). This tutorial prepares your environment for use with Application Proxy. Once your environment is ready, you'll use the Azure portal to add an on-premises application to your Azure AD tenant.
+Azure Active Directory (Azure AD) has an Application Proxy service that enables users to access on-premises applications by signing in with their Azure AD account. To learn more about Application Proxy, see [What is App Proxy?](what-is-application-proxy.md). This tutorial prepares your environment for use with Application Proxy. Once your environment is ready, you'll use the Azure portal to add an on-premises application to your Azure AD tenant.
:::image type="content" source="./media/application-proxy-add-on-premises-application/app-proxy-diagram.png" alt-text="Application Proxy Overview Diagram" lightbox="./media/application-proxy-add-on-premises-application/app-proxy-diagram.png":::
For high availability in your production environment, we recommend having more than one connector.
> ```
> Windows Registry Editor Version 5.00
->
+>
> [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp]
> "EnableDefaultHTTP2"=dword:00000000
> ```
To install the connector:
1. Read the Terms of Service. When you're ready, select **Accept terms & Download**.
1. At the bottom of the window, select **Run** to install the connector. An install wizard opens.
1. Follow the instructions in the wizard to install the service. When you're prompted to register the connector with the Application Proxy for your Azure AD tenant, provide your application administrator credentials.
-
+ - For Internet Explorer (IE), if **IE Enhanced Security Configuration** is set to **On**, you may not see the registration screen. To get access, follow the instructions in the error message. Make sure that **Internet Explorer Enhanced Security Configuration** is set to **Off**.

### General remarks
If you choose to have more than one Windows server for your on-premises applicat
If you have installed connectors in different regions, you can optimize traffic by selecting the closest Application Proxy cloud service region to use with each connector group. For more information, see [Optimize traffic flow with Azure Active Directory Application Proxy](application-proxy-network-topology.md).
-If your organization uses proxy servers to connect to the internet, you need to configure them for Application Proxy. For more information, see [Work with existing on-premises proxy servers](./application-proxy-configure-connectors-with-proxy-servers.md).
+If your organization uses proxy servers to connect to the internet, you need to configure them for Application Proxy. For more information, see [Work with existing on-premises proxy servers](./application-proxy-configure-connectors-with-proxy-servers.md).
For information about connectors, capacity planning, and how they stay up-to-date, see [Understand Azure AD Application Proxy connectors](application-proxy-connectors.md).
To confirm the connector installed and registered correctly:
## Add an on-premises app to Azure AD
-Now that you've prepared your environment and installed a connector, you're ready to add on-premises applications to Azure AD.
+Now that you've prepared your environment and installed a connector, you're ready to add on-premises applications to Azure AD.
1. Sign in as an administrator in the [Azure portal](https://portal.azure.com/).
2. In the left navigation panel, select **Azure Active Directory**.
3. Select **Enterprise applications**, and then select **New application**.
-4. Select **Add an on-premises application** button which appears about halfway down the page in the **On-premises applications** section. Alternatively, you can select **Create your own application** at the top of the page and then select **Configure Application Proxy for secure remote access to an on-premise application**.
+4. Select the **Add an on-premises application** button, which appears about halfway down the page in the **On-premises applications** section. Alternatively, you can select **Create your own application** at the top of the page and then select **Configure Application Proxy for secure remote access to an on-premises application**.
5. In the **Add your own on-premises application** section, provide the following information about your application:

| Field | Description |
Now that you've prepared your environment and installed a connector, you're ready to add on-premises applications to Azure AD.
| **Pre Authentication** | How Application Proxy verifies users before giving them access to your application.<br><br>**Azure Active Directory** - Application Proxy redirects users to sign in with Azure AD, which authenticates their permissions for the directory and application. We recommend keeping this option as the default so that you can take advantage of Azure AD security features like Conditional Access and Multi-Factor Authentication. **Azure Active Directory** is required for monitoring the application with Microsoft Cloud Application Security.<br><br>**Passthrough** - Users don't have to authenticate against Azure AD to access the application. You can still set up authentication requirements on the backend. |
| **Connector Group** | Connectors process the remote access to your application, and connector groups help you organize connectors and apps by region, network, or purpose. If you don't have any connector groups created yet, your app is assigned to **Default**.<br><br>If your application uses WebSockets to connect, all connectors in the group must be version 1.5.612.0 or later. |
-6. If necessary, configure **Additional settings**. For most applications, you should keep these settings in their default states.
+6. If necessary, configure **Additional settings**. For most applications, you should keep these settings in their default states.
| Field | Description |
| :-- | :-- |
active-directory Application Proxy Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-custom-domain.md
To publish your app through Application Proxy with a custom domain:
![Add CNAME DNS entry](./media/application-proxy-configure-custom-domain/dns-info.png)
-10. Follow the instructions at [Manage DNS records and record sets by using the Azure portal](../../dns/dns-operations-recordsets-portal.md) to add a DNS record that redirects the new external URL to the *msappproxy.net* domain.
+10. Follow the instructions at [Manage DNS records and record sets by using the Azure portal](../../dns/dns-operations-recordsets-portal.md) to add a DNS record that redirects the new external URL to the *msappproxy.net* domain in Azure DNS. If you use a different DNS provider, contact the vendor for instructions.
> [!IMPORTANT]
> Ensure that you are properly using a CNAME record that points to the *msappproxy.net* domain. Do not point records to IP addresses or server DNS names since these are not static and may impact the resiliency of the service.
active-directory Howto Authentication Passwordless Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-faqs.md
On a Windows Server 2016 or 2019 domain controller, check that the following pat
### Can I deploy the FIDO2 credential provider on an on-premises only device?
-No, this feature isn't supported for on-premise only device. The FIDO2 credential provider wouldn't show up.
+No, this feature isn't supported for an on-premises-only device. The FIDO2 credential provider wouldn't show up.
### FIDO2 security key sign-in isn't working for my Domain Admin or other high privilege accounts. Why?
active-directory Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/faqs.md
Yes, Permissions Management has various types of system report available that ca
For information about permissions usage reports, see [Generate and download the Permissions analytics report](product-permissions-analytics-reports.md).
-## Does Permissions Management integrate with third-party ITSM (Information Technology Security Management) tools?
+
+## Does Permissions Management integrate with third-party ITSM (Information Technology Service Management) tools?
Permissions Management integrates with ServiceNow.
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-aadsts-error-codes.md
Previously updated : 10/11/2021 Last updated : 06/13/2022
Here is a sample error response:
```json
{
  "error": "invalid_scope",
- "error_description": "AADSTS70011: The provided value for the input parameter 'scope' is not valid. The scope https://example.contoso.com/activity.read is not valid.\r\nTrace ID: 255d1aef-8c98-452f-ac51-23d051240864\r\nCorrelation ID: fb3d2015-bc17-4bb9-bb85-30c5cf1aaaa7\r\nTimestamp: 2016-01-09 02:02:12Z",
+ "error_description": "AADSTS70011: The provided value for the input parameter 'scope' isn't valid. The scope https://example.contoso.com/activity.read isn't valid.\r\nTrace ID: 255d1aef-8c98-452f-ac51-23d051240864\r\nCorrelation ID: fb3d2015-bc17-4bb9-bb85-30c5cf1aaaa7\r\nTimestamp: 2016-01-09 02:02:12Z",
"error_codes": [ 70011 ],
The `error` field has several possible values - review the protocol documentation.
| `invalid_grant` | Some of the authentication material (auth code, refresh token, access token, PKCE challenge) was invalid, unparseable, missing, or otherwise unusable | Try a new request to the `/authorize` endpoint to get a new authorization code. Consider reviewing and validating that app's use of the protocols. |
| `unauthorized_client` | The authenticated client isn't authorized to use this authorization grant type. | This usually occurs when the client application isn't registered in Azure AD or isn't added to the user's Azure AD tenant. The application can prompt the user with instruction for installing the application and adding it to Azure AD. |
| `invalid_client` | Client authentication failed. | The client credentials aren't valid. To fix, the application administrator updates the credentials. |
-| `unsupported_grant_type` | The authorization server does not support the authorization grant type. | Change the grant type in the request. This type of error should occur only during development and be detected during initial testing. |
-| `invalid_resource` | The target resource is invalid because it does not exist, Azure AD can't find it, or it's not correctly configured. | This indicates the resource, if it exists, has not been configured in the tenant. The application can prompt the user with instruction for installing the application and adding it to Azure AD. During development, this usually indicates an incorrectly setup test tenant or a typo in the name of the scope being requested. |
+| `unsupported_grant_type` | The authorization server doesn't support the authorization grant type. | Change the grant type in the request. This type of error should occur only during development and be detected during initial testing. |
+| `invalid_resource` | The target resource is invalid because it doesn't exist, Azure AD can't find it, or it's not correctly configured. | This indicates the resource, if it exists, has not been configured in the tenant. The application can prompt the user with instruction for installing the application and adding it to Azure AD. During development, this usually indicates an incorrectly setup test tenant or a typo in the name of the scope being requested. |
| `interaction_required` | The request requires user interaction. For example, an additional authentication step is required. | Retry the request with the same resource, interactively, so that the user can complete any challenges required. |
| `temporarily_unavailable` | The server is temporarily too busy to handle the request. | Retry the request. The client application might explain to the user that its response is delayed because of a temporary condition. |
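A client can branch on the `error` field of a response like the sample shown earlier. The sketch below is illustrative only: the retry sets are a simplified subset of the guidance in the table, and `classify` is a hypothetical helper, not part of any Microsoft library:

```python
import json

# Simplified subset of the handling guidance in the table above (illustrative).
RETRYABLE = {"temporarily_unavailable"}
NEEDS_INTERACTION = {"interaction_required", "invalid_grant"}

def classify(response_body: str) -> str:
    """Map a token-endpoint error response to a coarse client action."""
    code = json.loads(response_body).get("error", "")
    if code in RETRYABLE:
        return "retry"
    if code in NEEDS_INTERACTION:
        return "reauthenticate"
    # invalid_scope, invalid_client, unsupported_grant_type, ...:
    # fix the app registration or the request instead of retrying.
    return "fail"

sample = '{"error": "invalid_scope", "error_codes": [70011]}'
print(classify(sample))  # fail
```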
The `error` field has several possible values - review the protocol documentation.
| AADSTS20001 | WsFedSignInResponseError - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
| AADSTS20012 | WsFedMessageInvalid - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
| AADSTS20033 | FedMetadataInvalidTenantName - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
-| AADSTS28002 | Provided value for the input parameter scope '{scope}' is not valid when requesting an access token. Please specify a valid scope. |
-| AADSTS28003 | Provided value for the input parameter scope cannot be empty when requesting an access token using the provided authorization code. Please specify a valid scope.|
+| AADSTS28002 | Provided value for the input parameter scope '{scope}' isn't valid when requesting an access token. Please specify a valid scope. |
+| AADSTS28003 | Provided value for the input parameter scope can't be empty when requesting an access token using the provided authorization code. Please specify a valid scope.|
| AADSTS40008 | OAuth2IdPUnretryableServerError - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
| AADSTS40009 | OAuth2IdPRefreshTokenRedemptionUserError - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
| AADSTS40010 | OAuth2IdPRetryableServerError - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
| AADSTS40015 | OAuth2IdPAuthCodeRedemptionUserError - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
| AADSTS50000 | TokenIssuanceError - There's an issue with the sign-in service. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) to resolve this issue. |
-| AADSTS50001 | InvalidResource - The resource is disabled or does not exist. Check your app's code to ensure that you have specified the exact resource URL for the resource you are trying to access. |
+| AADSTS50001 | InvalidResource - The resource is disabled or doesn't exist. Check your app's code to ensure that you have specified the exact resource URL for the resource you're trying to access. |
| AADSTS50002 | NotAllowedTenant - Sign-in failed because of a restricted proxy access on the tenant. If it's your own tenant policy, you can change your restricted tenant settings to fix this issue. |
-| AADSTS500021 | Access to '{tenant}' tenant is denied. AADSTS500021 indicates that the tenant restriction feature is configured and that the user is trying to access a tenant that is not in the list of allowed tenants specified in the header `Restrict-Access-To-Tenant`. For more information, see [Use tenant restrictions to manage access to SaaS cloud applications](../manage-apps/tenant-restrictions.md).|
+| AADSTS500021 | Access to '{tenant}' tenant is denied. AADSTS500021 indicates that the tenant restriction feature is configured and that the user is trying to access a tenant that isn't in the list of allowed tenants specified in the header `Restrict-Access-To-Tenant`. For more information, see [Use tenant restrictions to manage access to SaaS cloud applications](../manage-apps/tenant-restrictions.md).|
| AADSTS50003 | MissingSigningKey - Sign-in failed because of a missing signing key or certificate. This might be because there was no signing key configured in the app. To learn more, see the troubleshooting article for error [AADSTS50003](/troubleshoot/azure/active-directory/error-code-aadsts50003-cert-or-key-not-configured). If you still see issues, contact the app owner or an app admin. |
| AADSTS50005 | DevicePolicyError - User tried to log in to a device from a platform that's currently not supported through Conditional Access policy. |
| AADSTS50006 | InvalidSignature - Signature verification failed because of an invalid signature. |
| AADSTS50007 | PartnerEncryptionCertificateMissing - The partner encryption certificate was not found for this app. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) with Microsoft to get this fixed. |
| AADSTS50008 | InvalidSamlToken - SAML assertion is missing or misconfigured in the token. Contact your federation provider. |
| AADSTS50010 | AudienceUriValidationFailed - Audience URI validation for the app failed since no token audiences were configured. |
-| AADSTS50011 | InvalidReplyTo - The reply address is missing, misconfigured, or does not match reply addresses configured for the app. As a resolution ensure to add this missing reply address to the Azure Active Directory application or have someone with the permissions to manage your application in Active Directory do this for you. To learn more, see the troubleshooting article for error [AADSTS50011](/troubleshoot/azure/active-directory/error-code-aadsts50011-reply-url-mismatch).|
-| AADSTS50012 | AuthenticationFailed - Authentication failed for one of the following reasons:<ul><li>The subject name of the signing certificate is not authorized</li><li>A matching trusted authority policy was not found for the authorized subject name</li><li>The certificate chain is not valid</li><li>The signing certificate is not valid</li><li>Policy is not configured on the tenant</li><li>Thumbprint of the signing certificate is not authorized</li><li>Client assertion contains an invalid signature</li></ul> |
-| AADSTS50013 | InvalidAssertion - Assertion is invalid because of various reasons - The token issuer doesn't match the api version within its valid time range -expired -malformed - Refresh token in the assertion is not a primary refresh token. |
-| AADSTS50014 | GuestUserInPendingState - The user's redemption is in a pending state. The guest user account is not fully created yet. |
+| AADSTS50011 | InvalidReplyTo - The reply address is missing, misconfigured, or doesn't match reply addresses configured for the app. As a resolution ensure to add this missing reply address to the Azure Active Directory application or have someone with the permissions to manage your application in Active Directory do this for you. To learn more, see the troubleshooting article for error [AADSTS50011](/troubleshoot/azure/active-directory/error-code-aadsts50011-reply-url-mismatch).|
+| AADSTS50012 | AuthenticationFailed - Authentication failed for one of the following reasons:<ul><li>The subject name of the signing certificate isn't authorized</li><li>A matching trusted authority policy was not found for the authorized subject name</li><li>The certificate chain isn't valid</li><li>The signing certificate isn't valid</li><li>Policy isn't configured on the tenant</li><li>Thumbprint of the signing certificate isn't authorized</li><li>Client assertion contains an invalid signature</li></ul> |
+| AADSTS50013 | InvalidAssertion - Assertion is invalid because of various reasons - The token issuer doesn't match the api version within its valid time range -expired -malformed - Refresh token in the assertion isn't a primary refresh token. |
+| AADSTS50014 | GuestUserInPendingState - The user's redemption is in a pending state. The guest user account isn't fully created yet. |
| AADSTS50015 | ViralUserLegalAgeConsentRequiredState - The user requires legal age group consent. |
| AADSTS50017 | CertificateValidationFailed - Certificate validation failed for one of the following reasons:<ul><li>Cannot find issuing certificate in trusted certificates list</li><li>Unable to find expected CrlSegment</li><li>Cannot find issuing certificate in trusted certificates list</li><li>Delta CRL distribution point is configured without a corresponding CRL distribution point</li><li>Unable to retrieve valid CRL segments because of a timeout issue</li><li>Unable to download CRL</li></ul>Contact the tenant admin. |
| AADSTS50020 | UserUnauthorized - Users are unauthorized to call this endpoint. |
-| AADSTS500212 | NotAllowedByOutboundPolicyTenant - The user's administrator has set an outbound access policy that does not allow access to the resource tenant. |
-| AADSTS500213 | NotAllowedByInboundPolicyTenant - The resource tenant's cross-tenant access policy does not allow this user to access this tenant. |
-| AADSTS50027 | InvalidJwtToken - Invalid JWT token because of the following reasons:<ul><li>doesn't contain nonce claim, sub claim</li><li>subject identifier mismatch</li><li>duplicate claim in idToken claims</li><li>unexpected issuer</li><li>unexpected audience</li><li>not within its valid time range </li><li>token format is not proper</li><li>External ID token from issuer failed signature verification.</li></ul> |
+| AADSTS500212 | NotAllowedByOutboundPolicyTenant - The user's administrator has set an outbound access policy that doesn't allow access to the resource tenant. |
+| AADSTS500213 | NotAllowedByInboundPolicyTenant - The resource tenant's cross-tenant access policy doesn't allow this user to access this tenant. |
+| AADSTS50027 | InvalidJwtToken - Invalid JWT token because of the following reasons:<ul><li>doesn't contain nonce claim, sub claim</li><li>subject identifier mismatch</li><li>duplicate claim in idToken claims</li><li>unexpected issuer</li><li>unexpected audience</li><li>not within its valid time range </li><li>token format isn't proper</li><li>External ID token from issuer failed signature verification.</li></ul> |
| AADSTS50029 | Invalid URI - domain name contains invalid characters. Contact the tenant admin. |
| AADSTS50032 | WeakRsaKey - Indicates the erroneous user attempt to use a weak RSA key. |
| AADSTS50033 | RetryableError - Indicates a transient error not related to the database operations. |
The `error` field has several possible values - review the protocol documentation.
| AADSTS50050 | MalformedDiscoveryRequest - The request is malformed. |
| AADSTS50053 | This error can result from two different reasons: <br><ul><li>IdsLocked - The account is locked because the user tried to sign in too many times with an incorrect user ID or password. The user is blocked due to repeated sign-in attempts. See [Remediate risks and unblock users](../identity-protection/howto-identity-protection-remediate-unblock.md).</li><li>Or, sign-in was blocked because it came from an IP address with malicious activity.</li></ul> <br>To determine which failure reason caused this error, sign in to the [Azure portal](https://portal.azure.com). Navigate to your Azure AD tenant and then **Monitoring** -> **Sign-ins**. Find the failed user sign-in with **Sign-in error code** 50053 and check the **Failure reason**.|
| AADSTS50055 | InvalidPasswordExpiredPassword - The password is expired. The user's password is expired, and therefore their login or session was ended. They will be offered the opportunity to reset it, or may ask an admin to reset it via [Reset a user's password using Azure Active Directory](../fundamentals/active-directory-users-reset-password-azure-portal.md). |
-| AADSTS50056 | Invalid or null password: password does not exist in the directory for this user. The user should be asked to enter their password again. |
+| AADSTS50056 | Invalid or null password: password doesn't exist in the directory for this user. The user should be asked to enter their password again. |
| AADSTS50057 | UserDisabled - The user account is disabled. The user object in Active Directory backing this account has been disabled. An admin can re-enable this account [through PowerShell](/powershell/module/activedirectory/enable-adaccount) |
-| AADSTS50058 | UserInformationNotProvided - Session information is not sufficient for single-sign-on. This means that a user is not signed in. This is a common error that's expected when a user is unauthenticated and has not yet signed in.</br>If this error is encountered in an SSO context where the user has previously signed in, this means that the SSO session was either not found or invalid.</br>This error may be returned to the application if prompt=none is specified. |
+| AADSTS50058 | UserInformationNotProvided - Session information isn't sufficient for single-sign-on. This means that a user isn't signed in. This is a common error that's expected when a user is unauthenticated and has not yet signed in.</br>If this error is encountered in an SSO context where the user has previously signed in, this means that the SSO session was either not found or invalid.</br>This error may be returned to the application if prompt=none is specified. |
| AADSTS50059 | MissingTenantRealmAndNoUserInformationProvided - Tenant-identifying information was not found in either the request or implied by any provided credentials. The user can contact the tenant admin to help resolve the issue. |
| AADSTS50061 | SignoutInvalidRequest - Unable to complete signout. The request was invalid. |
| AADSTS50064 | CredentialAuthenticationError - Credential validation on username or password has failed. |
| AADSTS50068 | SignoutInitiatorNotParticipant - Sign out has failed. The app that initiated sign out isn't a participant in the current session. |
| AADSTS50070 | SignoutUnknownSessionIdentifier - Sign out has failed. The sign out request specified a name identifier that didn't match the existing session(s). |
| AADSTS50071 | SignoutMessageExpired - The logout request has expired. |
| AADSTS50072 | UserStrongAuthEnrollmentRequiredInterrupt - User needs to enroll for second factor authentication (interactive). |
The `error` field has several possible values - review the protocol documentation.
| AADSTS50089 | Authentication failed due to flow token expired. Expected - auth codes, refresh tokens, and sessions expire over time or are revoked by the user or an admin. The app will request a new login from the user. |
| AADSTS50097 | DeviceAuthenticationRequired - Device authentication is required. |
| AADSTS50099 | PKeyAuthInvalidJwtUnauthorized - The JWT signature is invalid. |
| AADSTS50105 | EntitlementGrantsNotFound - The signed in user isn't assigned to a role for the signed in app. Assign the user to the app. To learn more, see the troubleshooting article for error [AADSTS50105](/troubleshoot/azure/active-directory/error-code-aadsts50105-user-not-assigned-role). |
| AADSTS50107 | InvalidRealmUri - The requested federation realm object doesn't exist. Contact the tenant admin. |
| AADSTS50120 | ThresholdJwtInvalidJwtFormat - Issue with JWT header. Contact the tenant admin. |
| AADSTS50124 | ClaimsTransformationInvalidInputParameter - Claims Transformation contains invalid input parameter. Contact the tenant admin to update the policy. |
| AADSTS501241 | Mandatory Input '{paramName}' missing from transformation id '{transformId}'. This error is returned while Azure AD is trying to build a SAML response to the application. NameID claim or NameIdentifier is mandatory in SAML response and if Azure AD failed to get source attribute for NameID claim, it will return this error. As a resolution, ensure you add claim rules in Azure Portal > Azure Active Directory > Enterprise Applications > Select your application > Single Sign-On > User Attributes & Claims > Unique User Identifier (Name ID). |
| AADSTS50128 | Invalid domain name - No tenant-identifying information found in either the request or implied by any provided credentials. |
| AADSTS50129 | DeviceIsNotWorkplaceJoined - Workplace join is required to register the device. |
| AADSTS50131 | ConditionalAccessFailed - Indicates various Conditional Access errors such as bad Windows device state, request blocked due to suspicious activity, access policy, or security policy decisions. |
| AADSTS50132 | SsoArtifactInvalidOrExpired - The session isn't valid due to password expiration or recent password change. |
| AADSTS50133 | SsoArtifactRevoked - The session isn't valid due to password expiration or recent password change. |
| AADSTS50134 | DeviceFlowAuthorizeWrongDatacenter - Wrong data center. To authorize a request that was initiated by an app in the OAuth 2.0 device flow, the authorizing party must be in the same data center where the original request resides. |
| AADSTS50135 | PasswordChangeCompromisedPassword - Password change is required due to account risk. |
| AADSTS50136 | RedirectMsaSessionToApp - Single MSA session detected. |
| AADSTS50139 | SessionMissingMsaOAuth2RefreshToken - The session is invalid due to a missing external refresh token. |
| AADSTS50140 | KmsiInterrupt - This error occurred due to "Keep me signed in" interrupt when the user was signing-in. This is an expected part of the login flow, where a user is asked if they want to remain signed into their current browser to make further logins easier. For more information, see [The new Azure AD sign-in and "Keep me signed in" experiences rolling out now!](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/the-new-azure-ad-sign-in-and-keep-me-signed-in-experiences/m-p/128267). You can [open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) with Correlation ID, Request ID, and Error code to get more details. |
| AADSTS50143 | Session mismatch - Session is invalid because user tenant doesn't match the domain hint due to different resource. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) with Correlation ID, Request ID, and Error code to get more details. |
| AADSTS50144 | InvalidPasswordExpiredOnPremPassword - User's Active Directory password has expired. Generate a new password for the user or have the user use the self-service reset tool to reset their password. |
| AADSTS50146 | MissingCustomSigningKey - This app is required to be configured with an app-specific signing key. It is either not configured with one, or the key has expired or isn't yet valid. |
| AADSTS50147 | MissingCodeChallenge - The size of the code challenge parameter isn't valid. |
| AADSTS501481 | The Code_Verifier doesn't match the code_challenge supplied in the authorization request. |
| AADSTS50155 | DeviceAuthenticationFailed - Device authentication failed for this user. |
| AADSTS50158 | ExternalSecurityChallenge - External security challenge was not satisfied. |
| AADSTS50161 | InvalidExternalSecurityChallengeConfiguration - The claims sent by the external provider aren't sufficient, or a claim requested from the external provider is missing. |
| AADSTS50166 | ExternalClaimsProviderThrottled - Failed to send the request to the claims provider. |
| AADSTS50168 | ChromeBrowserSsoInterruptRequired - The client is capable of obtaining an SSO token through the Windows 10 Accounts extension, but the token was not found in the request or the supplied token was expired. |
| AADSTS50169 | InvalidRequestBadRealm - The realm isn't a configured realm of the current service namespace. |
| AADSTS50170 | MissingExternalClaimsProviderMapping - The external controls mapping is missing. |
| AADSTS50173 | FreshTokenNeeded - The provided grant has expired due to it being revoked, and a fresh auth token is needed. Either an admin or a user revoked the tokens for this user, causing subsequent token refreshes to fail and require reauthentication. Have the user sign in again. |
| AADSTS50177 | ExternalChallengeNotSupportedForPassthroughUsers - External challenge isn't supported for passthrough users. |
| AADSTS50178 | SessionControlNotSupportedForPassthroughUsers - Session control isn't supported for passthrough users. |
| AADSTS50180 | WindowsIntegratedAuthMissing - Integrated Windows authentication is needed. Enable the tenant for Seamless SSO. |
| AADSTS50187 | DeviceInformationNotProvided - The service failed to perform device authentication. |
| AADSTS50194 | Application '{appId}'({appName}) isn't configured as a multi-tenant application. Usage of the /common endpoint isn't supported for such applications created after '{time}'. Use a tenant-specific endpoint or configure the application to be multi-tenant. |
| AADSTS50196 | LoopDetected - A client loop has been detected. Check the app's logic to ensure that token caching is implemented, and that error conditions are handled correctly. The app has made too many of the same request in too short a period, indicating that it is in a faulty state or is abusively requesting tokens. |
| AADSTS50197 | ConflictingIdentities - The user could not be found. Try signing in again. |
| AADSTS50199 | CmsiInterrupt - For security reasons, user confirmation is required for this request. Because this is an "interaction_required" error, the client should do interactive auth. This occurs because a system webview has been used to request a token for a native application - the user must be prompted to ask if this was actually the app they meant to sign into. To avoid this prompt, the redirect URI should be part of the following safe list: <br />http://<br />https://<br />msauth://(iOS only)<br />msauthv2://(iOS only)<br />chrome-extension:// (desktop Chrome browser only) |
| AADSTS51000 | RequiredFeatureNotEnabled - The feature is disabled. |
| AADSTS51001 | DomainHintMustbePresent - Domain hint must be present with on-premises security identifier or on-premises UPN. |
| AADSTS1000104 | XCB2BResourceCloudNotAllowedOnIdentityTenant - Resource cloud {resourceCloud} isn't allowed on identity tenant {identityTenant}. {resourceCloud} - cloud instance which owns the resource. {identityTenant} - the tenant from which the signing-in identity originates. |
| AADSTS51004 | UserAccountNotInDirectory - The user account doesn't exist in the directory. |
| AADSTS51005 | TemporaryRedirect - Equivalent to HTTP status 307, which indicates that the requested information is located at the URI specified in the location header. When you receive this status, follow the location header associated with the response. When the original request method was POST, the redirected request will also use the POST method. |
| AADSTS51006 | ForceReauthDueToInsufficientAuth - Integrated Windows authentication is needed. User logged in using a session token that is missing the integrated Windows authentication claim. Request the user to log in again. |
| AADSTS52004 | DelegationDoesNotExistForLinkedIn - The user has not provided consent for access to LinkedIn resources. |
| AADSTS53000 | DeviceNotCompliant - Conditional Access policy requires a compliant device, and the device isn't compliant. The user must enroll their device with an approved MDM provider like Intune. |
| AADSTS53001 | DeviceNotDomainJoined - Conditional Access policy requires a domain joined device, and the device isn't domain joined. Have the user use a domain joined device. |
| AADSTS53002 | ApplicationUsedIsNotAnApprovedApp - The app used isn't an approved app for Conditional Access. User needs to use one of the apps from the list of approved apps to use in order to get access. |
| AADSTS53003 | BlockedByConditionalAccess - Access has been blocked by Conditional Access policies. The access policy does not allow token issuance. |
| AADSTS53004 | ProofUpBlockedDueToRisk - User needs to complete the multi-factor authentication registration process before accessing this content. User should register for multi-factor authentication. |
| AADSTS53010 | ProofUpBlockedDueToSecurityInfoAcr - Cannot configure multi-factor authentication methods because the organization requires this information to be set from specific locations or devices. |
| AADSTS53011 | User blocked due to risk on home tenant. |
| AADSTS54000 | MinorUserBlockedLegalAgeGroupRule |
| AADSTS54005 | OAuth2 Authorization code was already redeemed, please retry with a new valid code or use an existing refresh token. |
| AADSTS65004 | UserDeclinedConsent - User declined to consent to access the app. Have the user retry the sign-in and consent to the app. |
| AADSTS65005 | MisconfiguredApplication - The app's required resource access list does not contain apps discoverable by the resource; or the client app has requested access to a resource that wasn't specified in its required resource access list; or the Graph service returned a bad request or resource not found. If the app supports SAML, you may have configured the app with the wrong Identifier (Entity). To learn more, see the troubleshooting article for error [AADSTS650056](/troubleshoot/azure/active-directory/error-code-aadsts650056-misconfigured-app). |
| AADSTS650052 | The app needs access to a service `(\"{name}\")` that your organization `\"{organization}\"` has not subscribed to or enabled. Contact your IT Admin to review the configuration of your service subscriptions. |
| AADSTS650054 | The application asked for permissions to access a resource that has been removed or is no longer available. Make sure that all resources the app is calling are present in the tenant you're operating in. |
| AADSTS650056 | Misconfigured application. This could be due to one of the following: the client has not listed any permissions for '{name}' in the requested permissions in the client's application registration. Or, the admin has not consented in the tenant. Or, check the application identifier in the request to ensure it matches the configured client application identifier. Or, check the certificate in the request to ensure it's valid. Please contact your admin to fix the configuration or consent on behalf of the tenant. Client app ID: {id}. Please contact your admin to fix the configuration or consent on behalf of the tenant.|
| AADSTS650057 | Invalid resource. The client has requested access to a resource which isn't listed in the requested permissions in the client's application registration. Client app ID: {appId}({appName}). Resource value from request: {resource}. Resource app ID: {resourceAppId}. List of valid resources from app registration: {regList}. |
| AADSTS67003 | ActorNotValidServiceIdentity |
| AADSTS70000 | InvalidGrant - Authentication failed. The refresh token isn't valid. Error may be due to the following reasons:<ul><li>Token binding header is empty</li><li>Token binding hash does not match</li></ul> |
| AADSTS70001 | UnauthorizedClient - The application is disabled. To learn more, see the troubleshooting article for error [AADSTS70001](/troubleshoot/azure/active-directory/error-code-aadsts70001-app-not-found-in-directory). |
| AADSTS70002 | InvalidClient - Error validating the credentials. The specified client_secret does not match the expected value for this client. Correct the client_secret and try again. For more info, see [Use the authorization code to request an access token](v2-oauth2-auth-code-flow.md#redeem-a-code-for-an-access-token). |
| AADSTS70003 | UnsupportedGrantType - The app returned an unsupported grant type. |
| AADSTS700030 | Invalid certificate - subject name in certificate isn't authorized. SubjectNames/SubjectAlternativeNames (up to 10) in token certificate are: {certificateSubjects}. |
| AADSTS70004 | InvalidRedirectUri - The app returned an invalid redirect URI. The redirect address specified by the client does not match any configured addresses or any addresses on the OIDC approve list. |
| AADSTS70005 | UnsupportedResponseType - The app returned an unsupported response type due to the following reasons:<ul><li>response type 'token' isn't enabled for the app</li><li>response type 'id_token' requires the 'OpenID' scope -contains an unsupported OAuth parameter value in the encoded wctx</li></ul> |
| AADSTS700054 | Response_type 'id_token' isn't enabled for the application. The application requested an ID token from the authorization endpoint, but did not have ID token implicit grant enabled. Go to Azure Portal > Azure Active Directory > App registrations > Select your application > Authentication > Under 'Implicit grant and hybrid flows', make sure 'ID tokens' is selected. |
| AADSTS70007 | UnsupportedResponseMode - The app returned an unsupported value of `response_mode` when requesting a token. |
| AADSTS70008 | ExpiredOrRevokedGrant - The refresh token has expired due to inactivity. The token was issued on XXX and was inactive for a certain amount of time. |
| AADSTS700084 | The refresh token was issued to a single page app (SPA), and therefore has a fixed, limited lifetime of {time}, which can't be extended. It is now expired and a new sign in request must be sent by the SPA to the sign in page. The token was issued on {issueDate}. |
| AADSTS70011 | InvalidScope - The scope requested by the app is invalid. |
| AADSTS70012 | MsaServerError - A server error occurred while authenticating an MSA (consumer) user. Try again. If it continues to fail, [open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) |
| AADSTS70016 | AuthorizationPending - OAuth 2.0 device flow error. Authorization is pending. The device will retry polling the request. |
| AADSTS70018 | BadVerificationCode - Invalid verification code: the user typed in the wrong user code for the device code flow. Authorization isn't approved. |
| AADSTS70019 | CodeExpired - Verification code expired. Have the user retry the sign-in. |
| AADSTS70043 | The refresh token has expired or is invalid due to sign-in frequency checks by conditional access. The token was issued on {issueDate} and the maximum allowed lifetime for this request is {time}. |
| AADSTS75001 | BindingSerializationError - An error occurred during SAML message binding. |
| AADSTS75003 | UnsupportedBindingError - The app returned an error related to unsupported binding (SAML protocol response can't be sent via bindings other than HTTP POST). |
| AADSTS75005 | Saml2MessageInvalid - Azure AD doesn't support the SAML request sent by the app for SSO. To learn more, see the troubleshooting article for error [AADSTS75005](/troubleshoot/azure/active-directory/error-code-aadsts75005-not-a-valid-saml-request). |
| AADSTS7500514 | A supported type of SAML response was not found. The supported response types are 'Response' (in XML namespace 'urn:oasis:names:tc:SAML:2.0:protocol') or 'Assertion' (in XML namespace 'urn:oasis:names:tc:SAML:2.0:assertion'). Application error - the developer will handle this error. |
| AADSTS750054 | SAMLRequest or SAMLResponse must be present as query string parameters in HTTP request for SAML Redirect binding. To learn more, see the troubleshooting article for error [AADSTS750054](/troubleshoot/azure/active-directory/error-code-aadsts750054-saml-request-not-present). |
| AADSTS80012 | OnPremisePasswordValidationAccountLogonInvalidHours - The users attempted to log on outside of the allowed hours (this is specified in AD). |
| AADSTS80013 | OnPremisePasswordValidationTimeSkew - The authentication attempt could not be completed due to time skew between the machine running the authentication agent and AD. Fix time sync issues. |
| AADSTS81004 | DesktopSsoIdentityInTicketIsNotAuthenticated - Kerberos authentication attempt failed. |
| AADSTS81005 | DesktopSsoAuthenticationPackageNotSupported - The authentication package isn't supported. |
| AADSTS81006 | DesktopSsoNoAuthorizationHeader - No authorization header was found. |
| AADSTS81007 | DesktopSsoTenantIsNotOptIn - The tenant isn't enabled for Seamless SSO. |
| AADSTS81009 | DesktopSsoAuthorizationHeaderValueWithBadFormat - Unable to validate user's Kerberos ticket. |
| AADSTS81010 | DesktopSsoAuthTokenInvalid - Seamless SSO failed because the user's Kerberos ticket has expired or is invalid. |
| AADSTS81011 | DesktopSsoLookupUserBySidFailed - Unable to find user object based on information in the user's Kerberos ticket. |
| AADSTS81012 | DesktopSsoMismatchBetweenTokenUpnAndChosenUpn - The user trying to sign in to Azure AD is different from the user signed into the device. |
| AADSTS90002 | InvalidTenantName - The tenant name wasn't found in the data store. Check to make sure you have the correct tenant ID. |
| AADSTS90004 | InvalidRequestFormat - The request isn't properly formatted. |
| AADSTS90005 | InvalidRequestWithMultipleRequirements - Unable to complete the request. The request isn't valid because the identifier and login hint can't be used together. |
| AADSTS90006 | ExternalServerRetryableError - The service is temporarily unavailable. |
| AADSTS90007 | InvalidSessionId - Bad request. The passed session ID can't be parsed. |
| AADSTS90008 | TokenForItselfRequiresGraphPermission - The user or administrator hasn't consented to use the application. At the minimum, the application requires access to Azure AD by specifying the sign-in and read user profile permission. |
| AADSTS90009 | TokenForItselfMissingIdenticalAppIdentifier - The application is requesting a token for itself. This scenario is supported only if the resource that's specified is using the GUID-based application ID. |
| AADSTS90010 | NotSupported - Unable to create the algorithm. |
| AADSTS9001023 | The grant type isn't supported over the /common or /consumers endpoints. Please use the /organizations or tenant-specific endpoint. |
| AADSTS90012 | RequestTimeout - The request has timed out. |
| AADSTS90013 | InvalidUserInput - The input from the user isn't valid. |
| AADSTS90014 | MissingRequiredField - This error code may appear in various cases when an expected field isn't present in the credential. |
| AADSTS900144 | The request body must contain the following parameter: '{name}'. Developer error - the app is attempting to sign in without the necessary or correct authentication parameters. |
| AADSTS90015 | QueryStringTooLong - The query string is too long. |
| AADSTS90016 | MissingRequiredClaim - The access token isn't valid. The required claim is missing. |
| AADSTS90019 | MissingTenantRealm - Azure AD was unable to determine the tenant identifier from the request. |
| AADSTS90020 | The SAML 1.1 Assertion is missing ImmutableID of the user. Developer error - the app is attempting to sign in without the necessary or correct authentication parameters. |
| AADSTS90022 | AuthenticatedInvalidPrincipalNameFormat - The principal name format isn't valid, or doesn't meet the expected `name[/host][@realm]` format. The principal name is required, host and realm are optional and may be set to null. |
| AADSTS90023 | InvalidRequest - The authentication service request isn't valid. |
| AADSTS9002313 | InvalidRequest - Request is malformed or invalid. Something was wrong with the request to a certain endpoint. To diagnose, capture a Fiddler trace of the failing request and check whether it is properly formatted. |
| AADSTS9002332 | Application '{principalId}'({principalName}) is configured for use by Azure Active Directory users only. Please do not use the /consumers endpoint to serve this request. |
| AADSTS90024 | RequestBudgetExceededError - A transient error has occurred. Try again. |
| AADSTS90027 | We are unable to issue tokens from this API version on the MSA tenant. Please contact the application vendor as they need to use version 2.0 of the protocol to support this. |
| AADSTS90033 | MsodsServiceUnavailable - The Microsoft Online Directory Service (MSODS) isn't available. |
| AADSTS90036 | MsodsServiceUnretryableFailure - An unexpected, non-retryable error from the WCF service hosted by MSODS has occurred. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) to get more details on the error. |
| AADSTS90038 | NationalCloudTenantRedirection - The specified tenant 'Y' belongs to the National Cloud 'X'. Current cloud instance 'Z' does not federate with X. A cloud redirect error is returned. |
| AADSTS90043 | NationalCloudAuthCodeRedirection - The feature is disabled. |
| AADSTS900432 | Confidential Client isn't supported in Cross Cloud request. |
| AADSTS90051 | InvalidNationalCloudId - The national cloud identifier contains an invalid cloud identifier. |
| AADSTS90055 | TenantThrottlingError - There are too many incoming requests. This exception is thrown for blocked tenants. |
| AADSTS90056 | BadResourceRequest - To redeem the code for an access token, the app should send a POST request to the `/token` endpoint. Also, prior to this, you should provide an authorization code and send it in the POST request to the `/token` endpoint. Refer to this article for an overview of OAuth 2.0 authorization code flow: [../azuread-dev/v1-protocols-oauth-code.md](../azuread-dev/v1-protocols-oauth-code.md). Direct the user to the `/authorize` endpoint, which will return an authorization_code. By posting a request to the `/token` endpoint, the user gets the access token. Log in the Azure portal, and check **App registrations > Endpoints** to confirm that the two endpoints were configured correctly. |
| AADSTS90072 | PassThroughUserMfaError - The external account that the user signs in with doesn't exist on the tenant that they signed into; so the user can't satisfy the MFA requirements for the tenant. This error also might occur if the users are synced, but there is a mismatch in the ImmutableID (sourceAnchor) attribute between Active Directory and Azure AD. The account must be added as an external user in the tenant first. Sign out and sign in with a different Azure AD user account. |
| AADSTS90081 | OrgIdWsFederationMessageInvalid - An error occurred when the service tried to process a WS-Federation message. The message isn't valid. |
| AADSTS90082 | OrgIdWsFederationNotSupported - The selected authentication policy for the request isn't currently supported. |
| AADSTS90084 | OrgIdWsFederationGuestNotAllowed - Guest accounts aren't allowed for this site. |
| AADSTS90085 | OrgIdWsFederationSltRedemptionFailed - The service is unable to issue a token because the company object hasn't been provisioned yet. |
The `error` field has several possible values - review the protocol documentation.
| AADSTS90092 | GraphNonRetryableError |
| AADSTS90093 | GraphUserUnauthorized - Graph returned with a forbidden error code for the request. |
| AADSTS90094 | AdminConsentRequired - Administrator consent is required. |
-| AADSTS900382 | Confidential Client is not supported in Cross Cloud request. |
+| AADSTS900382 | Confidential Client isn't supported in Cross Cloud request. |
| AADSTS90095 | AdminConsentRequiredRequestAccess - In the Admin Consent Workflow experience, an interrupt that appears when the user is told they need to ask the admin for consent. |
| AADSTS90099 | The application '{appId}' ({appName}) has not been authorized in the tenant '{tenant}'. Applications must be authorized to access the customer tenant before partner delegated administrators can use them. Provide pre-consent or execute the appropriate Partner Center API to authorize the application. |
| AADSTS900971| No reply address provided.|
| AADSTS90100 | InvalidRequestParameter - The parameter is empty or not valid. |
-| AADSTS901002 | AADSTS901002: The 'resource' request parameter is not supported. |
+| AADSTS901002 | AADSTS901002: The 'resource' request parameter isn't supported. |
| AADSTS90101 | InvalidEmailAddress - The supplied data isn't a valid email address. The email address must be in the format `someone@example.com`. |
| AADSTS90102 | InvalidUriParameter - The value must be a valid absolute URI. |
-| AADSTS90107 | InvalidXml - The request is not valid. Make sure your data doesn't have invalid characters.|
+| AADSTS90107 | InvalidXml - The request isn't valid. Make sure your data doesn't have invalid characters.|
| AADSTS90114 | InvalidExpiryDate - The bulk token expiration timestamp will cause an expired token to be issued. |
| AADSTS90117 | InvalidRequestInput |
| AADSTS90119 | InvalidUserCode - The user code is null or empty.|
| AADSTS90120 | InvalidDeviceFlowRequest - The request was already authorized or declined. |
| AADSTS90121 | InvalidEmptyRequest - Invalid empty request.|
| AADSTS90123 | IdentityProviderAccessDenied - The token can't be issued because the identity or claim issuance provider denied the request. |
-| AADSTS90124 | V1ResourceV2GlobalEndpointNotSupported - The resource is not supported over the `/common` or `/consumers` endpoints. Use the `/organizations` or tenant-specific endpoint instead. |
+| AADSTS90124 | V1ResourceV2GlobalEndpointNotSupported - The resource isn't supported over the `/common` or `/consumers` endpoints. Use the `/organizations` or tenant-specific endpoint instead. |
| AADSTS90125 | DebugModeEnrollTenantNotFound - The user isn't in the system. Make sure you entered the user name correctly. |
-| AADSTS90126 | DebugModeEnrollTenantNotInferred - The user type is not supported on this endpoint. The system can't infer the user's tenant from the user name. |
-| AADSTS90130 | NonConvergedAppV2GlobalEndpointNotSupported - The application is not supported over the `/common` or `/consumers` endpoints. Use the `/organizations` or tenant-specific endpoint instead. |
+| AADSTS90126 | DebugModeEnrollTenantNotInferred - The user type isn't supported on this endpoint. The system can't infer the user's tenant from the user name. |
+| AADSTS90130 | NonConvergedAppV2GlobalEndpointNotSupported - The application isn't supported over the `/common` or `/consumers` endpoints. Use the `/organizations` or tenant-specific endpoint instead. |
| AADSTS120000 | PasswordChangeIncorrectCurrentPassword |
| AADSTS120002 | PasswordChangeInvalidNewPasswordWeak |
| AADSTS120003 | PasswordChangeInvalidNewPasswordContainsMemberName |
The `error` field has several possible values - review the protocol documentation.
| AADSTS130008 | NgcDeviceIsNotFound - The device referenced by the NGC key wasn't found. |
| AADSTS135010 | KeyNotFound |
| AADSTS135011 | Device used during the authentication is disabled.|
-| AADSTS140000 | InvalidRequestNonce - Request nonce is not provided. |
-| AADSTS140001 | InvalidSessionKey - The session key is not valid.|
+| AADSTS140000 | InvalidRequestNonce - Request nonce isn't provided. |
+| AADSTS140001 | InvalidSessionKey - The session key isn't valid.|
| AADSTS165004 | Actual message content is runtime specific. Please see returned exception message for details. |
| AADSTS165900 | InvalidApiRequest - Invalid request. |
-| AADSTS220450 | UnsupportedAndroidWebViewVersion - The Chrome WebView version is not supported. |
+| AADSTS220450 | UnsupportedAndroidWebViewVersion - The Chrome WebView version isn't supported. |
| AADSTS220501 | InvalidCrlDownload |
-| AADSTS221000 | DeviceOnlyTokensNotSupportedByResource - The resource is not configured to accept device-only tokens. |
+| AADSTS221000 | DeviceOnlyTokensNotSupportedByResource - The resource isn't configured to accept device-only tokens. |
| AADSTS240001 | BulkAADJTokenUnauthorized - The user isn't authorized to register devices in Azure AD. |
| AADSTS240002 | RequiredClaimIsMissing - The id_token can't be used as `urn:ietf:params:oauth:grant-type:jwt-bearer` grant.|
| AADSTS530032 | BlockedByConditionalAccessOnSecurityPolicy - The tenant admin has configured a security policy that blocks this request. Check the security policies that are defined on the tenant level to determine if your request meets the policy requirements. |
The `error` field has several possible values - review the protocol documentation.
| AADSTS1000000 | UserNotBoundError - The Bind API requires the Azure AD user to also authenticate with an external IDP, which hasn't happened yet. |
| AADSTS1000002 | BindCompleteInterruptError - The bind completed successfully, but the user must be informed. |
| AADSTS100007 | AAD Regional ONLY supports auth either for MSIs OR for requests from MSAL using SN+I for 1P apps or 3P apps in Microsoft infrastructure tenants.|
-| AADSTS1000031 | Application {appDisplayName} cannot be accessed at this time. Contact your administrator. |
+| AADSTS1000031 | Application {appDisplayName} can't be accessed at this time. Contact your administrator. |
| AADSTS7000112 | UnauthorizedClientApplicationDisabled - The application is disabled. |
-| AADSTS7000114| Application 'appIdentifier' is not allowed to make application on-behalf-of calls.|
-| AADSTS7500529 | The value 'SAMLId-Guid' is not a valid SAML ID - Azure AD uses this attribute to populate the InResponseTo attribute of the returned response. ID must not begin with a number, so a common strategy is to prepend a string like "id" to the string representation of a GUID. For example, id6c1c178c166d486687be4aaf5e482730 is a valid ID. |
+| AADSTS7000114| Application 'appIdentifier' isn't allowed to make application on-behalf-of calls.|
+| AADSTS7500529 | The value 'SAMLId-Guid' isn't a valid SAML ID - Azure AD uses this attribute to populate the InResponseTo attribute of the returned response. ID must not begin with a number, so a common strategy is to prepend a string like "id" to the string representation of a GUID. For example, id6c1c178c166d486687be4aaf5e482730 is a valid ID. |
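The SAML ID rule described for AADSTS7500529 (the ID must not begin with a number) can be illustrated with a small generator; this is a sketch, not part of any Azure SDK.

```python
import uuid

def make_saml_id():
    """XML IDs must not start with a digit, so prefix the GUID's
    32-character hex form with "id", matching the shape of the valid
    example id6c1c178c166d486687be4aaf5e482730 shown above."""
    return "id" + uuid.uuid4().hex

saml_id = make_saml_id()
```

The resulting value can be used as the SAML request ID that Azure AD echoes back in the `InResponseTo` attribute of the response.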
## Next steps
active-directory Refresh Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/refresh-tokens.md
Previously updated : 05/25/2021 Last updated : 06/10/2022
When a client acquires an access token to access a protected resource, the clien
Before reading through this article, it's recommended that you go through the following articles:
-* [ID tokens](id-tokens.md) in the Microsoft identity platform.
-* [Access tokens](access-tokens.md) in the Microsoft identity platform.
+- [ID tokens](id-tokens.md) in the Microsoft identity platform.
+- [Access tokens](access-tokens.md) in the Microsoft identity platform.
## Refresh token lifetime
-Refresh tokens have a longer lifetime than access tokens. The default lifetime for the refresh tokens is 24 hours for [single page apps](reference-third-party-cookies-spas.md) and 90 days for all other scenarios. Refresh tokens replace themselves with a fresh token upon every use. The Microsoft identity platform doesn't revoke old refresh tokens when used to fetch new access tokens. Securely delete the old refresh token after acquiring a new one. Refresh tokens need to be stored safely like access tokens or application credentials.
+Refresh tokens have a longer lifetime than access tokens. The default lifetime for the refresh tokens is 24 hours for [single page apps](reference-third-party-cookies-spas.md) and 90 days for all other scenarios. Refresh tokens replace themselves with a fresh token upon every use. The Microsoft identity platform doesn't revoke old refresh tokens when used to fetch new access tokens. Securely delete the old refresh token after acquiring a new one. Refresh tokens need to be stored safely like access tokens or application credentials.
->[!IMPORTANT]
-> Refresh tokens sent to a redirect URI registered as `spa` expire after 24 hours. Additional refresh tokens acquired using the initial refresh token carry over that expiration time, so apps must be prepared to rerun the authorization code flow using an interactive authentication to get a new refresh token every 24 hours. Users do not have to enter their credentials and usually don't even see any related user experience, just a reload of your application. The browser must visit the log-in page in a top-level frame to show the login session. This is due to [privacy features in browsers that block third party cookies](reference-third-party-cookies-spas.md).
+> [!IMPORTANT]
+> Refresh tokens sent to a redirect URI registered as `spa` expire after 24 hours. Additional refresh tokens acquired using the initial refresh token carry over that expiration time, so apps must be prepared to rerun the authorization code flow using an interactive authentication to get a new refresh token every 24 hours. Users don't have to enter their credentials and usually don't even see any related user experience, just a reload of your application. The browser must visit the log-in page in a top-level frame to show the login session. This is due to [privacy features in browsers that block third party cookies](reference-third-party-cookies-spas.md).
## Refresh token expiration
-Refresh tokens can be revoked at any time, because of timeouts and revocations. Your app must handle rejections by the sign-in service gracefully when this occurs. This is done by sending the user to an interactive sign-in prompt to sign in again.
+Refresh tokens can be revoked at any time, because of timeouts and revocations. Your app must handle rejections by the sign-in service gracefully when this occurs. This is done by sending the user to an interactive sign-in prompt to sign in again.
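Handling a rejected refresh token gracefully, as described above, amounts to catching the token error and falling back to an interactive prompt. `TokenRevokedError`, `silent_refresh`, and `interactive_signin` are illustrative names for this sketch, not a real MSAL API.

```python
class TokenRevokedError(Exception):
    """Raised when the sign-in service rejects a refresh token,
    e.g. an `invalid_grant` response after revocation or timeout."""

def get_access_token(silent_refresh, interactive_signin):
    """Try the silent refresh first; on rejection, fall back to an
    interactive sign-in prompt instead of failing the app."""
    try:
        return silent_refresh()
    except TokenRevokedError:
        return interactive_signin()

def revoked_refresh():
    # Simulates the service rejecting a revoked/expired refresh token.
    raise TokenRevokedError("invalid_grant")

token = get_access_token(revoked_refresh, lambda: "token-from-interactive-prompt")
```

When the silent path succeeds, the interactive prompt is never shown; only a rejection routes the user back to sign-in.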
### Token timeouts
You can't configure the lifetime of a refresh token. You can't reduce or lengthen their lifetime. Configure sign-in frequency in Conditional Access to define the time periods before a user is required to sign in again. Learn more about [Configuring authentication session management with Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
-Not all refresh tokens follow the rules set in the token lifetime policy. Specifically, refresh tokens used in [single page apps](reference-third-party-cookies-spas.md) are always fixed to 24 hours of activity, as if they have a `MaxAgeSessionSingleFactor` policy of 24 hours applied to them.
+Not all refresh tokens follow the rules set in the token lifetime policy. Specifically, refresh tokens used in [single page apps](reference-third-party-cookies-spas.md) are always fixed to 24 hours of activity, as if they have a `MaxAgeSessionSingleFactor` policy of 24 hours applied to them.
### Revocation
-Refresh tokens can be revoked by the server because of a change in credentials, user action, or admin action. Refresh tokens fall into two classes: tokens issued to confidential clients (the rightmost column) and tokens issued to public clients (all other columns).
+Refresh tokens can be revoked by the server because of a change in credentials, user action, or admin action. Refresh tokens fall into two classes: tokens issued to confidential clients (the rightmost column) and tokens issued to public clients (all other columns).
-| Change | Password-based cookie | Password-based token | Non-password-based cookie | Non-password-based token | Confidential client token |
-||--|-||--||
-| Password expires | Stays alive | Stays alive | Stays alive | Stays alive | Stays alive |
-| Password changed by user | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
-| User does SSPR | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
-| Admin resets password | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
-| User revokes their refresh tokens [via PowerShell](/powershell/module/azuread/revoke-azureadsignedinuserallrefreshtoken) | Revoked | Revoked | Revoked | Revoked | Revoked |
-| Admin revokes all refresh tokens for a user [via PowerShell](/powershell/module/azuread/revoke-azureaduserallrefreshtoken) | Revoked | Revoked |Revoked | Revoked | Revoked |
-| Single sign-out [on web](v2-protocols-oidc.md#single-sign-out) | Revoked | Stays alive | Revoked | Stays alive | Stays alive |
+| Change | Password-based cookie | Password-based token | Non-password-based cookie | Non-password-based token | Confidential client token |
+| -- | -- | -- | -- | -- | -- |
+| Password expires | Stays alive | Stays alive | Stays alive | Stays alive | Stays alive |
+| Password changed by user | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
+| User does SSPR | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
+| Admin resets password | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
+| User revokes their refresh tokens [via PowerShell](/powershell/module/azuread/revoke-azureadsignedinuserallrefreshtoken) | Revoked | Revoked | Revoked | Revoked | Revoked |
+| Admin revokes all refresh tokens for a user [via PowerShell](/powershell/module/azuread/revoke-azureaduserallrefreshtoken) | Revoked | Revoked | Revoked | Revoked | Revoked |
+| Single sign-out [on web](v2-protocols-oidc.md#single-sign-out) | Revoked | Stays alive | Revoked | Stays alive | Stays alive |
## Next steps
-* Learn about [configurable token lifetimes](active-directory-configurable-token-lifetimes.md)
-* Check out [Primary Refresh Tokens](../devices/concept-primary-refresh-token.md) for more details on primary refresh tokens.
+- Learn about [configurable token lifetimes](active-directory-configurable-token-lifetimes.md)
+- Check out [Primary Refresh Tokens](../devices/concept-primary-refresh-token.md) for more details on primary refresh tokens.
active-directory Hybrid Azuread Join Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-azuread-join-plan.md
As a first planning step, you should review your environment and determine wheth
- Hybrid Azure AD join isn't supported for Windows Server running the Domain Controller (DC) role.
- Hybrid Azure AD join isn't supported on Windows down-level devices when using credential roaming or user profile roaming or mandatory profile.
- Server Core OS doesn't support any type of device registration.
-- User State Migration Tool (USMT) doesn't work with device registration.
+- User State Migration Tool (USMT) doesn't work with device registration.
### OS imaging considerations
As a first planning step, you should review your environment and determine wheth
If your Windows 10 or newer domain joined devices are [Azure AD registered](concept-azure-ad-register.md) to your tenant, it could lead to a dual state of hybrid Azure AD joined and Azure AD registered device. We recommend upgrading to Windows 10 1803 (with KB4489894 applied) or newer to automatically address this scenario. In pre-1803 releases, you'll need to remove the Azure AD registered state manually before enabling hybrid Azure AD join. In 1803 and above releases, the following changes have been made to avoid this dual state:
- Any existing Azure AD registered state for a user would be automatically removed <i>after the device is hybrid Azure AD joined and the same user logs in</i>. For example, if User A had an Azure AD registered state on the device, the dual state for User A is cleaned up only when User A logs in to the device. If there are multiple users on the same device, the dual state is cleaned up individually when those users log in. After removing the Azure AD registered state, Windows 10 will unenroll the device from Intune or other MDM, if the enrollment happened as part of the Azure AD registration via auto-enrollment.
-- Azure AD registered state on any local accounts on the device isn't impacted by this change. Only applicable to domain accounts. Azure AD registered state on local accounts isn't removed automatically even after user logon, since the user isn't a domain user.
+- Azure AD registered state on any local accounts on the device isn't impacted by this change. Only applicable to domain accounts. Azure AD registered state on local accounts isn't removed automatically even after user logon, since the user isn't a domain user.
- You can prevent your domain joined device from being Azure AD registered by adding the following registry value to HKLM\SOFTWARE\Policies\Microsoft\Windows\WorkplaceJoin: "BlockAADWorkplaceJoin"=dword:00000001.
- In Windows 10 1803, if you have Windows Hello for Business configured, the user needs to reconfigure Windows Hello for Business after the dual state cleanup. This issue has been addressed with KB4512509.
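The registry value described above (`BlockAADWorkplaceJoin`) can be captured as a `.reg` file; this is just the key and value from the text, in registry-export form:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WorkplaceJoin]
"BlockAADWorkplaceJoin"=dword:00000001
```

Importing this file (or setting the value via Group Policy Preferences) prevents the domain-joined device from becoming Azure AD registered.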
If your Windows 10 or newer domain joined devices are [Azure AD registered](conc
To register devices as hybrid Azure AD join to respective tenants, organizations need to ensure that the SCP configuration is done on the devices and not in AD. More details on how to accomplish this task can be found in the article [Hybrid Azure AD join targeted deployment](hybrid-azuread-join-control.md). It's important for organizations to understand that certain Azure AD capabilities won't work in a single forest, multiple Azure AD tenants configurations.
-- [Device writeback](../hybrid/how-to-connect-device-writeback.md) won't work. This configuration affects [Device based Conditional Access for on-premise apps that are federated using ADFS](/windows-server/identity/ad-fs/operations/configure-device-based-conditional-access-on-premises). This configuration also affects [Windows Hello for Business deployment when using the Hybrid Cert Trust model](/windows/security/identity-protection/hello-for-business/hello-hybrid-cert-trust).
+- [Device writeback](../hybrid/how-to-connect-device-writeback.md) won't work. This configuration affects [Device based Conditional Access for on-premises apps that are federated using ADFS](/windows-server/identity/ad-fs/operations/configure-device-based-conditional-access-on-premises). This configuration also affects [Windows Hello for Business deployment when using the Hybrid Cert Trust model](/windows/security/identity-protection/hello-for-business/hello-hybrid-cert-trust).
- [Groups writeback](../hybrid/how-to-connect-group-writeback.md) won't work. This configuration affects writeback of Office 365 Groups to a forest with Exchange installed.
- [Seamless SSO](../hybrid/how-to-connect-sso.md) won't work. This configuration affects SSO scenarios that organizations may be using on cross OS or browser platforms, for example iOS or Linux with Firefox, Safari, or Chrome without the Windows 10 extension.
- [Hybrid Azure AD join for Windows down-level devices in managed environment](./hybrid-azuread-join-managed-domains.md#enable-windows-down-level-devices) won't work. For example, hybrid Azure AD join on Windows Server 2012 R2 in a managed environment requires Seamless SSO and since Seamless SSO won't work, hybrid Azure AD join for such a setup won't work.
To register devices as hybrid Azure AD join to respective tenants, organizations
- If your environment uses virtual desktop infrastructure (VDI), see [Device identity and desktop virtualization](./howto-device-identity-virtual-desktop-infrastructure.md).
-- Hybrid Azure AD join is supported for FIPS-compliant TPM 2.0 and not supported for TPM 1.2. If your devices have FIPS-compliant TPM 1.2, you must disable them before proceeding with hybrid Azure AD join. Microsoft doesn't provide any tools for disabling FIPS mode for TPMs as it is dependent on the TPM manufacturer. Contact your hardware OEM for support.
+- Hybrid Azure AD join is supported for FIPS-compliant TPM 2.0 and not supported for TPM 1.2. If your devices have FIPS-compliant TPM 1.2, you must disable them before proceeding with hybrid Azure AD join. Microsoft doesn't provide any tools for disabling FIPS mode for TPMs as it is dependent on the TPM manufacturer. Contact your hardware OEM for support.
- Starting from Windows 10 1903 release, TPMs 1.2 aren't used with hybrid Azure AD join and devices with those TPMs will be considered as if they don't have a TPM.
Organizations may want to do a targeted rollout of hybrid Azure AD join before e
## Select your scenario based on your identity infrastructure
-Hybrid Azure AD join works with both, managed and federated environments depending on whether the UPN is routable or non-routable. See bottom of the page for table on supported scenarios.
+Hybrid Azure AD join works with both managed and federated environments, depending on whether the UPN is routable or non-routable. See the table of supported scenarios at the bottom of the page.
### Managed environment
These scenarios don't require you to configure a federation server for authentic
A federated environment should have an identity provider that supports the following requirements. If you have a federated environment using Active Directory Federation Services (AD FS), then the below requirements are already supported.
- **WIAORMULTIAUTHN claim:** This claim is required to do hybrid Azure AD join for Windows down-level devices.
-- **WS-Trust protocol:** This protocol is required to authenticate Windows current hybrid Azure AD joined devices with Azure AD.
-When you're using AD FS, you need to enable the following WS-Trust endpoints:
- `/adfs/services/trust/2005/windowstransport`
- `/adfs/services/trust/13/windowstransport`
- `/adfs/services/trust/2005/usernamemixed`
+- **WS-Trust protocol:** This protocol is required to authenticate Windows current hybrid Azure AD joined devices with Azure AD.
+When you're using AD FS, you need to enable the following WS-Trust endpoints:
+ `/adfs/services/trust/2005/windowstransport`
+ `/adfs/services/trust/13/windowstransport`
+ `/adfs/services/trust/2005/usernamemixed`
`/adfs/services/trust/13/usernamemixed`
- `/adfs/services/trust/2005/certificatemixed`
- `/adfs/services/trust/13/certificatemixed`
+ `/adfs/services/trust/2005/certificatemixed`
+ `/adfs/services/trust/13/certificatemixed`
-> [!WARNING]
+> [!WARNING]
> Both **adfs/services/trust/2005/windowstransport** and **adfs/services/trust/13/windowstransport** should be enabled as intranet facing endpoints only and must NOT be exposed as extranet facing endpoints through the Web Application Proxy. To learn more on how to disable WS-Trust Windows endpoints, see [Disable WS-Trust Windows endpoints on the proxy](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#disable-ws-trust-windows-endpoints-on-the-proxy-ie-from-extranet). You can see what endpoints are enabled through the AD FS management console under **Service** > **Endpoints**.
-Beginning with version 1.1.819.0, Azure AD Connect provides you with a wizard to configure hybrid Azure AD join. The wizard enables you to significantly simplify the configuration process. If installing the required version of Azure AD Connect isn't an option for you, see [how to manually configure device registration](hybrid-azuread-join-manual.md).
+Beginning with version 1.1.819.0, Azure AD Connect provides you with a wizard to configure hybrid Azure AD join. The wizard enables you to significantly simplify the configuration process. If installing the required version of Azure AD Connect isn't an option for you, see [how to manually configure device registration](hybrid-azuread-join-manual.md).
## Review on-premises AD users UPN support for hybrid Azure AD join
active-directory Access Reviews External Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/access-reviews-external-users.md
In addition to the option of removing unwanted external identities from resource
![upon completion settings](media/access-reviews-external-users/upon-completion-settings.png)
-When creating a new Access Review, in the "Upon completion settings" section, for **Action to apply on denied users** you can define **Block users from signing-in for 30 days, then remove user from the tenant**.
+When creating a new Access Review, choose the **Select Teams + groups** option and limit the scope to **Guest users only**. In the "Upon completion settings" section, for **Action to apply on denied users** you can define **Block users from signing-in for 30 days, then remove user from the tenant**.
This setting allows you to identify, block, and delete external identities from your Azure AD tenant. External identities who are reviewed and denied continued access by the reviewer will be blocked and deleted, irrespective of the resource access or group membership they have. This setting is best used as a last step after you have validated that the external users in review no longer carry resource access and can safely be removed from your tenant, or if you want to make sure they are removed irrespective of their standing access. The "Disable and delete" feature blocks the external user first, taking away their ability to sign in to your tenant and access resources. Resource access is not revoked in this stage, and if you want to reinstate the external user, their ability to log on can be reconfigured. If no further action is taken, a blocked external identity will be deleted from the directory after 30 days, removing the account as well as their access.
active-directory Entitlement Management Logic Apps Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logic-apps-integration.md
These triggers to Logic Apps are controlled in a new tab within access package p
1. In the left menu, select **Catalogs**.
-1. In the left menu, select **Custom Extensions (Preview)**.
+1. Select the catalog for which you want to add a custom extension and then in the left menu, select **Custom Extensions (Preview)**.
1. In the header navigation bar, select **Add a Custom Extension**.
active-directory How To Connect Emergency Ad Fs Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-emergency-ad-fs-certificate-rotation.md
In order to revoke the old Token Signing Certificate which AD FS is currently us
1. Connect to the Microsoft Online Service `PS C:\>Connect-MsolService`
- 2. Document both your on-premise and cloud Token Signing Certificate thumbprint and expiration dates.
-`PS C:\>Get-MsolFederationProperty -DomainName <domain>`
+ 2. Document both your on-premises and cloud Token Signing Certificate thumbprint and expiration dates.
+`PS C:\>Get-MsolFederationProperty -DomainName <domain>`
3. Copy down the thumbprint. It will be used later to remove the existing certificates.
-You can also get the thumbprint by using AD FS Management, navigating to Service/Certificates, right-clicking on the certificate, select View certificate and then selecting Details.
+You can also get the thumbprint by using AD FS Management, navigating to Service/Certificates, right-clicking on the certificate, select View certificate and then selecting Details.
## Determine whether AD FS renews the certificates automatically
By default, AD FS is configured to generate token signing and token decryption certificates automatically, both at the initial configuration time and when the certificates are approaching their expiration date.
The AutoCertificateRollover property describes whether AD FS is configured to re
## Generating new self-signed certificate if AutoCertificateRollover is set to TRUE
-In this section, you will be creating **two** token-signing certificates. The first will use the **-urgent** flag, which will replace the current primary certificate immediately. The second will be used for the secondary certificate.
+In this section, you will be creating **two** token-signing certificates. The first will use the **-urgent** flag, which will replace the current primary certificate immediately. The second will be used for the secondary certificate.
>[!IMPORTANT]
>The reason we are creating two certificates is because Azure holds on to information regarding the previous certificate. By creating a second one, we are forcing Azure to release information about the old certificate and replace it with information about the second certificate.
In this section, you will be creating **two** token-signing certificates. The f
You can use the following steps to generate the new token-signing certificates.
 1. Ensure that you are logged on to the primary AD FS server.
- 2. Open Windows PowerShell as an administrator.
+ 2. Open Windows PowerShell as an administrator.
 3. Check to make sure that your AutoCertificateRollover is set to True. `PS C:\>Get-AdfsProperties | FL AutoCert*, Certificate*`
 4. To generate a new token signing certificate: `Update-ADFSCertificate -CertificateType token-signing -Urgent`.
Now that the new certificate has been imported and configured in AD FS, you need
2. Expand **Service** and then select **Certificates**.
3. Click the secondary token signing certificate.
4. In the **Actions** pane, click **Set As Primary**. Click Yes at the confirmation prompt.
-5. Once you promoted the new certificate as the primary certificate, you should remove the old certificate because it can still be used. See the [Remove your old certificates](#remove-your-old-certificates) section below.
+5. Once you promoted the new certificate as the primary certificate, you should remove the old certificate because it can still be used. See the [Remove your old certificates](#remove-your-old-certificates) section below.
### To configure the second certificate as a secondary certificate
Now that you have added the first certificate and made it primary and removed the old one, import the second certificate. Then you must configure the certificate as the secondary AD FS token signing certificate.
To update the certificate information in Azure AD, run the following command: `U
> If you see an error when running this command, run the following command: `Update-MsolFederatedDomain -SupportMultipleDomain`, and then enter the domain name when prompted.
## Replace SSL certificates
-In the event that you need to replace your token-signing certificate because of a compromise, you should also revoke and replace the SSL certificates for AD FS and your WAP servers.
+In the event that you need to replace your token-signing certificate because of a compromise, you should also revoke and replace the SSL certificates for AD FS and your WAP servers.
Revoking your SSL certificates must be done at the certificate authority (CA) that issued the certificate. These certificates are often issued by 3rd party providers such as GoDaddy. For an example, see "Revoke a certificate" in the GoDaddy help (SSL Certificates - GoDaddy Help US). For more information, see [How Certificate Revocation Works](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ee619754(v=ws.10)).
Once the old SSL certificate has been revoked and a new one issued, you can repl
Once you have replaced your old certificates, you should remove the old certificate because it can still be used. To do this, follow the steps below:
1. Ensure that you are logged on to the primary AD FS server.
+2. Open Windows PowerShell as an administrator.
4. To remove the old token signing certificate: `Remove-ADFSCertificate -CertificateType token-signing -thumbprint <thumbprint>`.

## Updating federation partners who can consume Federation Metadata
active-directory Secure Hybrid Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/secure-hybrid-access.md
Title: Secure hybrid access
+description: This article describes partner solutions for integrating your legacy on-premises, public cloud, or private cloud applications with Azure AD.
You can now protect your on-premises and cloud legacy authentication application
- [Azure AD Application Proxy](#secure-hybrid-access-through-azure-ad-application-proxy) -- [Secure hybrid access partners](#secure-hybrid-access-through-azure-ad-partner-integrations)
+- [Secure hybrid access: Secure legacy apps with Azure Active Directory](#secure-hybrid-access-secure-legacy-apps-with-azure-active-directory)
+ - [Secure hybrid access through Azure AD Application Proxy](#secure-hybrid-access-through-azure-ad-application-proxy)
+ - [Secure hybrid access through Azure AD partner integrations](#secure-hybrid-access-through-azure-ad-partner-integrations)
You can bridge the gap and strengthen your security posture across all applications with Azure AD capabilities like [Azure AD Conditional Access](../conditional-access/overview.md) and [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md). By having Azure AD as an Identity provider (IDP), you can use modern authentication and authorization methods like [single sign-on (SSO)](what-is-single-sign-on.md) and [multifactor authentication (MFA)](../authentication/concept-mfa-howitworks.md) to secure your on-premises legacy applications.

## Secure hybrid access through Azure AD Application Proxy
-
-Using [Application Proxy](../app-proxy/what-is-application-proxy.md) you can provide [secure remote access](../app-proxy/application-proxy-add-on-premises-application.md) to your on-premises web applications. Your users don't need to use a VPN. Users benefit by easily connecting to their applications from any device after a [SSO](../app-proxy/application-proxy-config-sso-how-to.md#how-to-configure-single-sign-on). Application Proxy provides remote access as a service and allows you to [easily publish your on-premise applications](../app-proxy/application-proxy-add-on-premises-application.md) to users outside the corporate network. It helps you scale your cloud access management without requiring you to modify your on-premises applications. [Plan an Azure AD Application Proxy](../app-proxy/application-proxy-deployment-plan.md) deployment as a next step.
-## Secure hybrid access through Azure AD partner integrations
+Using [Application Proxy](../app-proxy/what-is-application-proxy.md) you can provide [secure remote access](../app-proxy/application-proxy-add-on-premises-application.md) to your on-premises web applications. Your users don't need to use a VPN. Users benefit by easily connecting to their applications from any device after a [SSO](../app-proxy/application-proxy-config-sso-how-to.md#how-to-configure-single-sign-on). Application Proxy provides remote access as a service and allows you to [easily publish your applications](../app-proxy/application-proxy-add-on-premises-application.md) to users outside the corporate network. It helps you scale your cloud access management without requiring you to modify your on-premises applications. [Plan an Azure AD Application Proxy](../app-proxy/application-proxy-deployment-plan.md) deployment as a next step.
+
+## Secure hybrid access through Azure AD partner integrations
In addition to [Azure AD Application Proxy](../app-proxy/what-is-application-proxy.md), Microsoft partners with third-party providers to enable secure access to your on-premises applications and applications that use legacy authentication.

![Illustration of Secure Hybrid Access partner integrations and Application Proxy providing access to legacy and on-premises applications after authentication with Azure AD.](./media/secure-hybrid-access/secure-hybrid-access.png)
+The following partners offer pre-built solutions to support **conditional access policies per application** and provide detailed guidance for integrating with Azure AD.
- [Akamai Enterprise Application Access](../saas-apps/akamai-tutorial.md)
+- [Citrix Application Delivery Controller (ADC)](../saas-apps/citrix-netscaler-tutorial.md)
- [Datawiza Access Broker](../manage-apps/datawiza-with-azure-ad.md)
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview.md
A common challenge for developers is the management of secrets, credentials, certificates, and keys used to secure communication between services. Managed identities eliminate the need for developers to manage these credentials.
-While developers can securely store the secrets in [Azure Key Vault](../../key-vault/general/overview.md), services need a way to access Azure Key Vault. Managed identities provide an automatically managed identity in Azure Active Directory for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. Applications can use managed identities to obtain Azure AD tokens without having manage any credentials.
+While developers can securely store the secrets in [Azure Key Vault](../../key-vault/general/overview.md), services need a way to access Azure Key Vault. Managed identities provide an automatically managed identity in Azure Active Directory for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. Applications can use managed identities to obtain Azure AD tokens without having to manage any credentials.
The following video shows how you can use managed identities:</br>
To use managed identities, you should do the following:
1. Create a managed identity in Azure. You can choose between a system-assigned managed identity or a user-assigned managed identity.
2. For a user-assigned managed identity, assign the managed identity to the "source" Azure resource, such as an Azure Logic App or an Azure Web App.
3. Authorize the managed identity to have access to the "target" service.
-4. Use the managed identity to perform access. For this, you can use the Azure SDK with the Azure.Identity library. Some "source" resources offer connectors that know how to use Managed identities for the connections. In that case you simply use the ideantity as a feature of that "source" resource.
+4. Use the managed identity to perform access. For this, you can use the Azure SDK with the Azure.Identity library. Some "source" resources offer connectors that know how to use Managed identities for the connections. In that case you simply use the identity as a feature of that "source" resource.
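Step 4 above points to the Azure SDK with the Azure.Identity library. Under the hood, on an Azure VM the token comes from the Instance Metadata Service (IMDS). A minimal sketch, assuming only the documented IMDS endpoint, of building that request (the helper name is illustrative; in practice, prefer the Azure.Identity library):

```python
from urllib.parse import urlencode

# Documented, non-routable IMDS endpoint available inside Azure VMs.
IMDS_ENDPOINT = "http://169.254.169.254/metadata/identity/oauth2/token"

def imds_token_request(resource, client_id=None, api_version="2018-02-01"):
    """Build the URL and headers for a managed identity token request.

    `resource` is the audience of the token (e.g. https://vault.azure.net).
    Pass `client_id` to select a user-assigned identity; omit it to use
    the system-assigned identity.
    """
    params = {"api-version": api_version, "resource": resource}
    if client_id:
        params["client_id"] = client_id
    # IMDS rejects requests that lack the Metadata: true header.
    return f"{IMDS_ENDPOINT}?{urlencode(params)}", {"Metadata": "true"}
```

An app would GET this URL from inside the VM and read `access_token` from the JSON response; no credential is stored anywhere in the code.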
## What Azure services support the feature?<a name="which-azure-services-support-managed-identity"></a>
active-directory Cisco Umbrella User Management Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-umbrella-user-management-provisioning-tutorial.md
# Tutorial: Configure Cisco Umbrella User Management for automatic user provisioning
+This tutorial describes the steps you need to perform in both Cisco Umbrella User Management and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Cisco Umbrella User Management](https://umbrella.cisco.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
This tutorial describes the steps you need to perform in both Cisco Umbrella Use
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (e.g. Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
* A [Cisco Umbrella subscription](https://signup.umbrella.com).
* A user account in Cisco Umbrella with full admin permissions.

## Step 1. Plan your provisioning deployment
1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Cisco Umbrella User Management](../app-provisioning/customize-application-attributes.md).
## Step 2. Import ObjectGUID attribute via Azure AD Connect (Optional)
If you have previously provisioned user identities from on-premises AD to Cisco Umbrella and would now like to provision the same users from Azure AD, you will need to synchronize the ObjectGUID attribute so that previously provisioned identities persist in the Umbrella reporting. You will need to reconfigure any Umbrella policy on groups after importing groups from Azure AD.
> [!NOTE]
> The on-premises Umbrella AD Connector should be turned off before importing the ObjectGUID attribute.
-When using Microsoft Azure AD Connect, the ObjectGUID attribute of users is not synchronized from on-premise AD to Azure AD by default. To synchronize this attribute, enable the optional **Directory Extension attribute sync** and select the objectGUID attributes for users.
+When using Microsoft Azure AD Connect, the ObjectGUID attribute of users is not synchronized from on-premises AD to Azure AD by default. To synchronize this attribute, enable the optional **Directory Extension attribute sync** and select the objectGUID attributes for users.
![Azure Active Directory Connect wizard Optional features page](./media/cisco-umbrella-user-management-provisioning-tutorial/active-directory-connect-directory-extension-attribute-sync.png)
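objectGUID is a binary attribute, so directory-extension consumers typically see it base64-encoded. A hedged helper, assuming a base64-encoded little-endian GUID (the function name is illustrative, not part of any Umbrella or Azure AD API), to recover the canonical GUID string:

```python
import base64
import uuid

def objectguid_to_string(b64_guid):
    """Decode a base64-encoded AD objectGUID into its canonical string form.

    Active Directory stores the GUID in little-endian ("mixed") byte order,
    which Python's uuid module handles via the bytes_le constructor.
    """
    return str(uuid.UUID(bytes_le=base64.b64decode(b64_guid)))
```

This round-trips cleanly: encoding a GUID's little-endian bytes and decoding them again yields the same hyphenated string.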
When using Microsoft Azure AD Connect, the ObjectGUID attribute of users is not
1. Log in to the [Cisco Umbrella dashboard](https://login.umbrella.com). Navigate to **Deployments** > **Core Identities** > **Users and Groups**.
-
+ 1. Expand the Azure Active Directory card and click on the **API Keys page**.
+
+ ![Api](./media/cisco-umbrella-user-management-provisioning-tutorial/keys.png)
When using Microsoft Azure AD Connect, the ObjectGUID attribute of users is not
![Generate](./media/cisco-umbrella-user-management-provisioning-tutorial/token.png)
+1. The generated token will be displayed only once. Copy and save the URL and the token. These values will be entered in the **Tenant URL** and **Secret Token** fields respectively in the Provisioning tab of your Cisco Umbrella User Management application in the Azure portal.
## Step 4. Add Cisco Umbrella User Management from the Azure AD application gallery
+Add Cisco Umbrella User Management from the Azure AD application gallery to start managing provisioning to Cisco Umbrella User Management. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+## Step 5. Define who will be in scope for provisioning
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+## Step 6. Configure automatic user provisioning to Cisco Umbrella User Management
This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Cisco Umbrella User Management based on user and/or group assignments in Azure AD.
This section guides you through the steps to configure the Azure AD provisioning
![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
## Step 7. Monitor your deployment
Once you've configured provisioning, use the following resources to monitor your deployment:
* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Connector Limitations
+* Cisco Umbrella User Management supports provisioning a maximum of 200 groups. Any groups beyond this number that are in scope may not be provisioned to Cisco Umbrella.
## Additional resources
active-directory Fortigate Ssl Vpn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortigate-ssl-vpn-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal:
`https://<FortiGate IP or FQDN address>:<Custom SSL VPN port>/remote/saml/login`.

   c. In the **Sign on URL** box, enter a URL in the pattern
- `https://<FortiGate IP or FQDN address>:<Custom SSL VPN port>/remote/login`.
+ `https://<FortiGate IP or FQDN address>:<Custom SSL VPN port>/remote/saml/login`.
d. In the **Logout URL** box, enter a URL in the pattern `https://<FortiGate IP or FQDN address>:<Custom SSL VPN port>/remote/saml/logout`.
To complete these steps, you'll need the values you recorded earlier:
set single-sign-on-url <Reply URL>
set single-logout-url <Logout URL>
set idp-entity-id <Azure AD Identifier>
+ set idp-single-sign-on-url <Azure AD Identifier>
set idp-single-logout-url <Azure Logout URL>
set idp-cert <Base64 SAML Certificate Name>
set user-name username
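These `set` commands are usually entered by hand in the FortiOS CLI. If you script the step, a hypothetical Python helper (the `config user saml` / `edit` / `next` / `end` framing follows the FortiOS CLI convention; the function name and values are illustrative) can render the stanza from the values you recorded:

```python
def fortigate_saml_user(name, values):
    """Render a FortiOS 'config user saml' stanza from a dict of settings.

    `values` maps FortiOS setting names (e.g. 'idp-entity-id') to the
    Azure AD values recorded earlier in the tutorial.
    """
    body = "\n".join(f"        set {key} {val}" for key, val in values.items())
    return (
        "config user saml\n"
        f'    edit "{name}"\n'
        f"{body}\n"
        "    next\n"
        "end"
    )
```

For example, `fortigate_saml_user("azure", {"idp-entity-id": "https://sts.windows.net/abc/"})` produces the stanza with one `set` line, ready to paste into the CLI.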
active-directory Qliksense Enterprise Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/qliksense-enterprise-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Qlik Sense Enterprise | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Qlik Sense Enterprise.
+ Title: 'Tutorial: Azure AD SSO integration with Qlik Sense Enterprise Client-Managed'
+description: Learn how to configure single sign-on between Azure Active Directory and Qlik Sense Enterprise Client-Managed.
Previously updated : 12/28/2020 Last updated : 06/13/2022
-# Tutorial: Integrate Qlik Sense Enterprise with Azure Active Directory
+# Tutorial: Azure AD SSO integration with Qlik Sense Enterprise Client-Managed
-In this tutorial, you'll learn how to integrate Qlik Sense Enterprise with Azure Active Directory (Azure AD). When you integrate Qlik Sense Enterprise with Azure AD, you can:
+In this tutorial, you'll learn how to integrate Qlik Sense Enterprise Client-Managed with Azure Active Directory (Azure AD). When you integrate Qlik Sense Enterprise Client-Managed with Azure AD, you can:
* Control in Azure AD who has access to Qlik Sense Enterprise.
* Enable your users to be automatically signed-in to Qlik Sense Enterprise with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
+Note that there are two versions of Qlik Sense Enterprise. While this tutorial covers integration with the client-managed releases, a different process is required for Qlik Sense Enterprise SaaS (Qlik Cloud version).
## Prerequisites
To get started, you need the following items:
-* An Azure AD subscription. If you don't have a subscription, you can get one-month free trial [here](https://azure.microsoft.com/pricing/free-trial/).
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* Qlik Sense Enterprise single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
* Qlik Sense Enterprise supports **SP** initiated SSO.
* Qlik Sense Enterprise supports **just-in-time provisioning**.
-## Adding Qlik Sense Enterprise from the gallery
+## Add Qlik Sense Enterprise from the gallery
To configure the integration of Qlik Sense Enterprise into Azure AD, you need to add Qlik Sense Enterprise from the gallery to your list of managed SaaS apps.
To configure and test Azure AD SSO with Qlik Sense Enterprise, perform the follo
1. **[Create Qlik Sense Enterprise test user](#create-qlik-sense-enterprise-test-user)** - to have a counterpart of Britta Simon in Qlik Sense Enterprise that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-### Configure Azure AD SSO
+## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a Single sign-on method** page, select **SAML**.
1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** page, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Sign-on URL** textbox, type a URL using the following pattern: `https://<Fully Qualified Domain Name>:443{/virtualproxyprefix}/hub`
-
- b. In the **Identifier** textbox, type a URL using one of the following pattern:
+ a. In the **Identifier** textbox, type a URL using one of the following patterns:
| Identifier |
|-|
| `https://<Fully Qualified Domain Name>.qlikpoc.com` |
| `https://<Fully Qualified Domain Name>.qliksense.com` |
- |
-
- c. In the **Reply URL** textbox, type a URL using the following pattern:
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
`https://<Fully Qualified Domain Name>:443{/virtualproxyprefix}/samlauthn/`
+ c. In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://<Fully Qualified Domain Name>:443{/virtualproxyprefix}/hub`
+ > [!NOTE]
- > These values are not real. Update these values with the actual Sign-On URL, Identifier, and Reply URL, Which are explained later in this tutorial or contact [Qlik Sense Enterprise Client support team](https://www.qlik.com/us/services/support) to get these values. The default port for the URLs is 443 but you can customize it per your Organization need.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL which are explained later in this tutorial or contact [Qlik Sense Enterprise Client support team](https://www.qlik.com/us/services/support) to get these values. The default port for the URLs is 443 but you can customize it per your Organization need.
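The URL patterns above can be assembled mechanically from your deployment's FQDN and optional virtual proxy prefix. A small illustrative helper (the function name and the `contoso`/`example` hosts are hypothetical):

```python
def qlik_saml_urls(fqdn, virtual_proxy_prefix="", port=443):
    """Assemble the Reply URL and Sign on URL patterns from the tutorial.

    `fqdn` and `virtual_proxy_prefix` are deployment-specific; the prefix
    is optional and the default port of 443 can be customized.
    """
    prefix = f"/{virtual_proxy_prefix}" if virtual_proxy_prefix else ""
    base = f"https://{fqdn}:{port}{prefix}"
    return {
        "reply_url": f"{base}/samlauthn/",   # SAML assertion consumer
        "sign_on_url": f"{base}/hub",        # Qlik Sense hub
    }
```

For instance, `qlik_saml_urls("qlik.contoso.com", "azure")` yields a Reply URL of `https://qlik.contoso.com:443/azure/samlauthn/`.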
1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
### Create an Azure AD test user
In this section, you'll enable Britta Simon to use Azure single sign-on by grant
Qlik Sense Enterprise supports **just-in-time provisioning**. Users are automatically added to the 'USERS' repository of Qlik Sense Enterprise as they use the SSO feature. In addition, clients can use the QMC and create a UDC (User Directory Connector) to pre-populate users in Qlik Sense Enterprise from their LDAP of choice, such as Active Directory, and others.
-### Test SSO
+## Test SSO
In this section, you test your Azure AD single sign-on configuration with the following options.
active-directory Slack Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/slack-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Slack | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Slack'
description: Learn how to configure single sign-on between Azure Active Directory and Slack.
Previously updated : 12/28/2020 Last updated : 06/06/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Slack
+# Tutorial: Azure AD SSO integration with Slack
In this tutorial, you'll learn how to integrate Slack with Azure Active Directory (Azure AD). When you integrate Slack with Azure AD, you can:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Slack supports **SP** initiated SSO
-* Slack supports **Just In Time** user provisioning
-* Slack supports [**Automated** user provisioning](./slack-provisioning-tutorial.md)
+* Slack supports **SP** initiated SSO.
+* Slack supports **Just In Time** user provisioning.
+* Slack supports [**Automated** user provisioning](./slack-provisioning-tutorial.md).
> [!NOTE]
> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
3. If you want to set up Slack manually, in a different web browser window, sign in to your Slack company site as an administrator.
-2. Navigate to **Microsoft Azure AD** then go to **Team Settings**.
+2. Click on your workspace name in the top left, then go to **Settings & administration** -> **Workspace settings**.
- ![Configure single sign-on On Microsoft Azure AD](./media/slack-tutorial/tutorial-slack-team-settings.png)
+ ![Screenshot of Configure single sign-on On Microsoft Azure AD.](./media/slack-tutorial/tutorial-slack-team-settings.png)
-3. In the **Team Settings** section, click the **Authentication** tab, and then click **Change Settings**.
+3. In the **Settings & permissions** section, click the **Authentication** tab, and then click **Configure** button at SAML authentication method.
- ![Configure single sign-on On Team Settings](./media/slack-tutorial/tutorial-slack-authentication.png)
+ ![Screenshot of Configure single sign-on On Team Settings.](./media/slack-tutorial/tutorial-slack-authentication.png)
-4. On the **SAML Authentication Settings** dialog, perform the following steps:
+4. On the **Configure SAML authentication for Azure** dialog, perform the below steps:
- ![Configure single sign-on On SAML Authentication Settings](./media/slack-tutorial/tutorial-slack-save-authentication.png)
+ ![Screenshot of Configure single sign-on On SAML Authentication Settings.](./media/slack-tutorial/tutorial-slack-save-authentication.png)
- a. In the **SAML 2.0 Endpoint (HTTP)** textbox, paste the value of **Login URL**, which you have copied from Azure portal.
+ a. In the top right, toggle **Test** mode on.
+
+ b. In the **SAML SSO URL** textbox, paste the value of **Login URL**, which you have copied from Azure portal.
+
+ c. In the **Identity provider issuer** textbox, paste the value of **Azure Ad Identifier**, which you have copied from Azure portal.
+
+ d. Open your downloaded certificate file in Notepad, copy the content of it into your clipboard, and then paste it to the **Public Certificate** textbox.
+
+1. Expand the **Advanced options** and perform the below steps:
+
+ ![Screenshot of Configure Advanced options single sign-on On App Side.](./media/slack-tutorial/advanced-settings.png)
- b. In the **Identity Provider Issuer** textbox, paste the value of **Azure Ad Identifier**, which you have copied from Azure portal.
+ a. If you need an end-to-end encryption key, tick the box **Sign AuthnRequest** to show the certificate.
- c. Open your downloaded certificate file in Notepad, copy the content of it into your clipboard, and then paste it to the **Public Certificate** textbox.
+ b. Enter `https://slack.com` in the **Service provider issuer** textbox.
- d. Configure the above three settings as appropriate for your Slack team. For more information about the settings, please find the **Slack's SSO configuration guide** here. `https://get.slack.help/hc/articles/220403548-Guide-to-single-sign-on-with-Slack%60`
+ c. Choose how the SAML response from your IDP is signed from the two options.
- ![Configure single sign-on On App Side](./media/slack-tutorial/tutorial-slack-expand.png)
+1. Under **Settings**, decide if members can edit their profile information (like their email or display name) after SSO is enabled. You can also choose whether SSO is required, partially required or optional.
- e. Click on **expand** and enter `https://slack.com` in the **Service provider issuer** textbox.
+ ![Screenshot of Configure Save configuration single sign-on On App Side.](./media/slack-tutorial/save-configuration-button.png)
- f. Click **Save Configuration**.
+1. Click **Save Configuration**.
> [!NOTE]
> If you have more than one Slack instance that you need to integrate with Azure AD, set `https://<DOMAIN NAME>.slack.com` to **Service provider issuer** so that it can pair with the Azure application **Identifier** setting.
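Step d above copies the downloaded certificate's content out of Notepad. If your paste should contain only the base64 body without the `-----BEGIN/END-----` lines, a hedged helper to strip them (the function name is illustrative; whether Slack accepts the full PEM or only the body may depend on the form, so treat this as a convenience sketch):

```python
def pem_body(pem_text):
    """Return the base64 body of a PEM certificate.

    Drops the BEGIN/END delimiter lines and all whitespace, leaving a
    single base64 string suitable for pasting into a plain text box.
    """
    lines = (line.strip() for line in pem_text.strip().splitlines())
    return "".join(line for line in lines if line and not line.startswith("-----"))
```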
advisor Advisor Reference Reliability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md
Learn more about [Application gateway - AppGwLog4JCVEGenericNotification (Additi
### Enable Active-Active gateways for redundancy
-In active-active configuration, both instances of the VPN gateway will establish S2S VPN tunnels to your on-premise VPN device. When a planned maintenance or unplanned event happens to one gateway instance, traffic will be switched over to the other active IPsec tunnel automatically.
+In active-active configuration, both instances of the VPN gateway will establish S2S VPN tunnels to your on-premises VPN device. When a planned maintenance or unplanned event happens to one gateway instance, traffic will be switched over to the other active IPsec tunnel automatically.
Learn more about [Virtual network gateway - VNetGatewayActiveActive (Enable Active-Active gateways for redundancy)](../vpn-gateway/vpn-gateway-highlyavailable.md).
aks Internal Lb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md
For more information on configuring your load balancer in a different subnet, se
You must have the following resources installed:
* The Azure CLI
-* The `aks-preview` extension version 0.5.50 or later
* Kubernetes version 1.22.x or above
-#### Install the aks-preview CLI extension
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
- ### Create a Private Link service connection To attach an Azure Private Link service to an internal load balancer, create a service manifest named `internal-lb-pls.yaml` with the service type *LoadBalancer* and the *azure-load-balancer-internal* and *azure-pls-create* annotation as shown in the example below. For more options, refer to the [Azure Private Link Service Integration](https://kubernetes-sigs.github.io/cloud-provider-azure/development/design-docs/pls-integration/) design document
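A minimal sketch of such a manifest, assuming the standard cloud-provider-azure annotation names (`service.beta.kubernetes.io/azure-load-balancer-internal` and `service.beta.kubernetes.io/azure-pls-create`); the app name and port are illustrative placeholders:

```yaml
# internal-lb-pls.yaml — illustrative sketch; service name, port, and selector are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-pls-create: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: internal-app
```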
Learn more about Kubernetes services at the [Kubernetes services documentation][
[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md [install-azure-cli]: /cli/azure/install-azure-cli [aks-sp]: kubernetes-service-principal.md#delegate-access-to-other-azure-resources
-[different-subnet]: #specify-a-different-subnet
+[different-subnet]: #specify-a-different-subnet
aks Keda About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-about.md
Learn more about how KEDA works in the [official KEDA documentation][keda-archit
## Installation and version
-KEDA can be added to your Azure Kubernetes Service (AKS) cluster by enabling the KEDA add-on using an [ARM template][keda-arm].
+KEDA can be added to your Azure Kubernetes Service (AKS) cluster by enabling the KEDA add-on using an [ARM template][keda-arm] or [Azure CLI][keda-cli].
The KEDA add-on provides a fully supported installation of KEDA that is integrated with AKS.
For general KEDA questions, we recommend [visiting the FAQ overview][keda-faq].
## Next steps * [Enable the KEDA add-on with an ARM template][keda-arm]
+* [Enable the KEDA add-on with the Azure CLI][keda-cli]
+* [Troubleshoot KEDA add-on problems][keda-troubleshoot]
* [Autoscale a .NET Core worker processing Azure Service Bus Queue messages][keda-sample] <!-- LINKS - internal --> [keda-azure-cli]: keda-deploy-addon-az-cli.md
+[keda-cli]: keda-deploy-add-on-cli.md
[keda-arm]: keda-deploy-add-on-arm.md
+[keda-troubleshoot]: keda-troubleshoot.md
<!-- LINKS - external --> [keda]: https://keda.sh/
aks Keda Deploy Add On Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-arm.md
This article shows you how to deploy the Kubernetes Event-driven Autoscaling (KE
- An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). - [Azure CLI installed](/cli/azure/install-azure-cli).
+- Firewall rules are configured to allow access to the Kubernetes API server. ([learn more][aks-firewall-requirements])
### Register the `AKS-KedaPreview` feature flag
To remove the resource group, and all related resources, use the [Az PowerShell
az group delete --name MyResourceGroup ```
-### Enabling add-on on clusters with self-managed open-source KEDA installations
-
-While Kubernetes only allows one metric server to be installed, you can in theory install KEDA multiple times. However, it isn't recommended given only one installation will work.
-
-When the KEDA add-on is installed in an AKS cluster, the previous installation of open-source KEDA will be overridden and the add-on will take over.
-
-This means that the customization and configuration of the self-installed KEDA deployment will get lost and no longer be applied.
-
-While there's a possibility that the existing autoscaling will keep on working, there's a risk given it will be configured differently and won't support features such as managed identity.
-
-It's recommended to uninstall existing KEDA installations before enabling the KEDA add-on given the installation will succeed without any error.
-
-Following error will be thrown in the operator logs but the installation of KEDA add-on will be completed.
-
-Error logged in now-suppressed non-participating KEDA operator pod:
-the error logged inside the already installed KEDA operator logs.
-E0520 11:51:24.868081 1 leaderelection.go:330] error retrieving resource lock default/operator.keda.sh: config maps "operator.keda.sh" is forbidden: User "system:serviceaccount:default:keda-operator" can't get resource "config maps" in API group "" in the namespace "default"
- ## Next steps
-This article showed you how to install the KEDA add-on on an AKS cluster, and then verify that it's installed and running. With the KEDA add-on installed on your cluster, you can [deploy a sample application][keda-sample] to start scaling apps
+This article showed you how to install the KEDA add-on on an AKS cluster, and then verify that it's installed and running. With the KEDA add-on installed on your cluster, you can [deploy a sample application][keda-sample] to start scaling apps.
+
+You can troubleshoot KEDA add-on problems in [this article][keda-troubleshoot].
<!-- LINKS - internal --> [az-aks-create]: /cli/azure/aks#az-aks-create
This article showed you how to install the KEDA add-on on an AKS cluster, and th
[az aks get-credentials]: /cli/azure/aks#az-aks-get-credentials [az aks update]: /cli/azure/aks#az-aks-update [az-group-delete]: /cli/azure/group#az-group-delete
+[keda-troubleshoot]: keda-troubleshoot.md
+[aks-firewall-requirements]: limit-egress-traffic.md#azure-global-required-network-rules
<!-- LINKS - external --> [kubectl]: https://kubernetes.io/docs/user-guide/kubectl
aks Keda Deploy Add On Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-cli.md
This article shows you how to install the Kubernetes Event-driven Autoscaling (K
- An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). - [Azure CLI installed](/cli/azure/install-azure-cli).
+- Firewall rules are configured to allow access to the Kubernetes API server. ([learn more][aks-firewall-requirements])
### Install the extension `aks-preview` Install the `aks-preview` extension in the AKS cluster to make sure you have the latest version of AKS extension before installing KEDA add-on. ```azurecli
- az extension add --upgrade --name aks-preview
+az extension add --upgrade --name aks-preview
``` ### Register the `AKS-KedaPreview` feature flag
az aks update \
--disable-keda ```
-### Enabling add-on on clusters with self-managed open-source KEDA installations
-
-While Kubernetes only allows one metric server to be installed, you can in theory install KEDA multiple times. However, it isn't recommended given only one installation will work.
-
-When the KEDA add-on is installed in an AKS cluster, the previous installation of open-source KEDA will be overridden and the add-on will take over.
-
-This means that the customization and configuration of the self-installed KEDA deployment will get lost and no longer be applied.
-
-While there's a possibility that the existing autoscaling will keep on working, there's a risk given it will be configured differently and won't support features such as managed identity.
-
-It's recommended to uninstall existing KEDA installations before enabling the KEDA add-on given the installation will succeed without any error.
-
-Following error will be thrown in the operator logs but the installation of KEDA add-on will be completed.
-
-Error logged in now-suppressed non-participating KEDA operator pod:
-the error logged inside the already installed KEDA operator logs.
-E0520 11:51:24.868081 1 leaderelection.go:330] error retrieving resource lock default/operator.keda.sh: config maps "operator.keda.sh" is forbidden: User "system:serviceaccount:default:keda-operator" can't get resource "config maps" in API group "" in the namespace "default"
- ## Next steps This article showed you how to install the KEDA add-on on an AKS cluster using Azure CLI. The steps to verify that KEDA add-on is installed and running are included. With the KEDA add-on installed on your cluster, you can [deploy a sample application][keda-sample] to start scaling apps.
+You can troubleshoot KEDA add-on problems in [this article][keda-troubleshoot].
+ [az-aks-create]: /cli/azure/aks#az-aks-create [az aks install-cli]: /cli/azure/aks#az-aks-install-cli [az aks get-credentials]: /cli/azure/aks#az-aks-get-credentials [az aks update]: /cli/azure/aks#az-aks-update [az-group-delete]: /cli/azure/group#az-group-delete
+[keda-troubleshoot]: keda-troubleshoot.md
+[aks-firewall-requirements]: limit-egress-traffic.md#azure-global-required-network-rules
[kubectl]: https://kubernetes.io/docs/user-guide/kubectl [keda]: https://keda.sh/
aks Keda Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-integrations.md
However, these external scalers aren't supported as part of the add-on and rely
## Next steps * [Enable the KEDA add-on with an ARM template][keda-arm]
+* [Enable the KEDA add-on with the Azure CLI][keda-cli]
+* [Troubleshoot KEDA add-on problems][keda-troubleshoot]
* [Autoscale a .NET Core worker processing Azure Service Bus Queue message][keda-sample] <!-- LINKS - internal --> [aks-support-policy]: support-policies.md
+[keda-cli]: keda-deploy-add-on-cli.md
[keda-arm]: keda-deploy-add-on-arm.md
+[keda-troubleshoot]: keda-troubleshoot.md
<!-- LINKS - external --> [keda-scalers]: https://keda.sh/docs/latest/scalers/
aks Keda Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-troubleshoot.md
+
+ Title: Troubleshooting Kubernetes Event-driven Autoscaling (KEDA) add-on
+description: How to troubleshoot Kubernetes Event-driven Autoscaling add-on
+Last updated : 8/26/2021
+# Kubernetes Event-driven Autoscaling (KEDA) AKS add-on Troubleshooting Guides
+
+When you deploy the KEDA AKS add-on, you might experience problems with the configuration of the application autoscaler.
+
+This guide helps you troubleshoot errors and resolve common problems with the add-on. It supplements the official KEDA [FAQ][keda-faq] and [troubleshooting guide][keda-troubleshooting].
+
+## Verifying and Troubleshooting KEDA components
+
+### Check available KEDA version
+
+You can check the available KEDA version by using the `kubectl` command:
+
+```azurecli-interactive
+kubectl get crd/scaledobjects.keda.sh -o custom-columns='APP:.metadata.labels.app\.kubernetes\.io/version'
+```
+
+The output shows the installed KEDA version:
+
+```Output
+APP
+2.7.0
+```
+
+### Ensuring the cluster firewall is configured correctly
+
+KEDA might fail to scale applications because it can't start up.
+
+When checking the operator logs, you might find errors similar to the following:
+
+```output
+1.6545953013458195e+09 ERROR Failed to get API Group-Resources {"error": "Get \"https://10.0.0.1:443/api?timeout=32s\": EOF"}
+sigs.k8s.io/controller-runtime/pkg/cluster.New
+/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/cluster/cluster.go:160
+sigs.k8s.io/controller-runtime/pkg/manager.New
+/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/manager/manager.go:313
+main.main
+/workspace/main.go:87
+runtime.main
+/usr/local/go/src/runtime/proc.go:255
+1.6545953013459463e+09 ERROR setup unable to start manager {"error": "Get \"https://10.0.0.1:443/api?timeout=32s\": EOF"}
+main.main
+/workspace/main.go:97
+runtime.main
+/usr/local/go/src/runtime/proc.go:255
+```
+
+In the metric server logs, you might notice that it can't start up:
+
+```output
+I0607 09:53:05.297924 1 main.go:147] keda_metrics_adapter "msg"="KEDA Version: 2.7.1"
+I0607 09:53:05.297979 1 main.go:148] keda_metrics_adapter "msg"="KEDA Commit: "
+I0607 09:53:05.297996 1 main.go:149] keda_metrics_adapter "msg"="Go Version: go1.17.9"
+I0607 09:53:05.298006 1 main.go:150] keda_metrics_adapter "msg"="Go OS/Arch: linux/amd64"
+E0607 09:53:15.344324 1 logr.go:279] keda_metrics_adapter "msg"="Failed to get API Group-Resources" "error"="Get \"https://10.0.0.1:443/api?timeout=32s\": EOF"
+E0607 09:53:15.344360 1 main.go:104] keda_metrics_adapter "msg"="failed to setup manager" "error"="Get \"https://10.0.0.1:443/api?timeout=32s\": EOF"
+E0607 09:53:15.344378 1 main.go:209] keda_metrics_adapter "msg"="making provider" "error"="Get \"https://10.0.0.1:443/api?timeout=32s\": EOF"
+E0607 09:53:15.344399 1 main.go:168] keda_metrics_adapter "msg"="unable to run external metrics adapter" "error"="Get \"https://10.0.0.1:443/api?timeout=32s\": EOF"
+```
+
+This most likely means that the KEDA add-on isn't able to start up due to a misconfigured firewall.
+
+To ensure that the add-on runs correctly, configure the firewall to meet [the requirements][aks-firewall-requirements].
+
+### Enabling add-on on clusters with self-managed open-source KEDA installations
+
+Although Kubernetes allows only one metric server to be installed, you can in theory install KEDA multiple times. However, this isn't recommended, because only one installation will work.
+
+When the KEDA add-on is installed in an AKS cluster, the previous installation of open-source KEDA will be overridden and the add-on will take over.
+
+This means that the customization and configuration of the self-installed KEDA deployment are lost and no longer applied.
+
+Although the existing autoscaling might keep working, doing so is risky, because the add-on is configured differently and doesn't support features such as managed identity.
+
+We recommend uninstalling any existing KEDA installation before enabling the KEDA add-on, because the add-on installation succeeds without reporting an error even when KEDA is already installed.
+
+To determine which metrics adapter KEDA is using, use the `kubectl` command:
+
+```azurecli-interactive
+kubectl get APIService/v1beta1.external.metrics.k8s.io -o custom-columns='NAME:.spec.service.name,NAMESPACE:.spec.service.namespace'
+```
+
+The output shows the service and namespace that Kubernetes uses to get metrics:
+
+```Output
+NAME NAMESPACE
+keda-operator-metrics-apiserver kube-system
+```
+
+> [!WARNING]
+> If the namespace is not `kube-system`, then the AKS add-on is being ignored and another metric server is being used.
+
+[aks-firewall-requirements]: limit-egress-traffic.md#azure-global-required-network-rules
+[keda-troubleshooting]: https://keda.sh/docs/latest/troubleshooting/
+[keda-faq]: https://keda.sh/docs/latest/faq/
aks Node Pool Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-pool-snapshot.md
az aks snapshot create --name MySnapshot --resource-group MyResourceGroup --node
First you'll need the resource ID from the snapshot that was previously created, which you can get from the command below: ```azurecli-interactive
-SNAPSHOT_ID=$(az aks snapshot show --name MySnapshot --resource-group myResourceGroup --query id -o tsv)
+SNAPSHOT_ID=$(az aks nodepool snapshot show --name MySnapshot --resource-group myResourceGroup --query id -o tsv)
``` Now, we can use the command below to add a new node pool based off of this snapshot.
You can upgrade a node pool to a snapshot configuration so long as the snapshot
First you'll need the resource ID from the snapshot that was previously created, which you can get from the command below: ```azurecli-interactive
-SNAPSHOT_ID=$(az aks snapshot show --name MySnapshot --resource-group myResourceGroup --query id -o tsv)
+SNAPSHOT_ID=$(az aks nodepool snapshot show --name MySnapshot --resource-group myResourceGroup --query id -o tsv)
``` Now, we can use this command to upgrade this node pool to this snapshot configuration.
When you create a cluster from a snapshot, the cluster original system pool will
First you'll need the resource ID from the snapshot that was previously created, which you can get from the command below: ```azurecli-interactive
-SNAPSHOT_ID=$(az aks snapshot show --name MySnapshot --resource-group myResourceGroup --query id -o tsv)
+SNAPSHOT_ID=$(az aks nodepool snapshot show --name MySnapshot --resource-group myResourceGroup --query id -o tsv)
``` Now, we can use this command to create this cluster off of the snapshot configuration.
api-management Api Management Advanced Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-advanced-policies.md
The `retry` policy executes its child policies once and then retries their execu
### Example
-In the following example, request forwarding is retried up to ten times using an exponential retry algorithm. Since `first-fast-retry` is set to false, all retry attempts are subject to the exponential retry algorithm.
+In the following example, request forwarding is retried up to ten times using an exponential retry algorithm. Since `first-fast-retry` is set to false, all retry attempts are subject to exponentially increasing retry wait times (in this example, approximately 10 seconds, 20 seconds, 40 seconds, ...), up to a maximum wait of `max-interval`.
```xml
In the following example, sending a request to a URL other than the defined back
| delta | A positive number in seconds specifying the wait interval increment. It is used to implement the linear and exponential retry algorithms. | No | N/A | | first-fast-retry | If set to `true` , the first retry attempt is performed immediately. | No | `false` |
-> [!NOTE]
-> When only the `interval` is specified, **fixed** interval retries are performed.
-> When only the `interval` and `delta` are specified, a **linear** interval retry algorithm is used, where wait time between retries is calculated according the following formula - `interval + (count - 1)*delta`.
-> When the `interval`, `max-interval` and `delta` are specified, **exponential** interval retry algorithm is applied, where the wait time between the retries is growing exponentially from the value of `interval` to the value `max-interval` according to the following formula - `min(interval + (2^count - 1) * random(delta * 0.8, delta * 1.2), max-interval)`.
+#### Retry wait times
+
+* When only the `interval` is specified, **fixed** interval retries are performed.
+* When only the `interval` and `delta` are specified, a **linear** interval retry algorithm is used. The wait time between retries increases according to the following formula: `interval + (count - 1)*delta`.
+* When the `interval`, `max-interval` and `delta` are specified, an **exponential** interval retry algorithm is applied. The wait time between the retries increases exponentially according to the following formula: `interval + (2^count - 1) * random(delta * 0.8, delta * 1.2)`, up to a maximum interval set by `max-interval`.
+
+ For example, when `interval` and `delta` are both set to 10 seconds and `max-interval` is 100 seconds, the approximate wait time between retries increases as follows: 10 seconds, 20 seconds, 40 seconds, 80 seconds, with a 100-second wait used for any remaining retries.
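As a rough sketch, the exponential schedule above can be reproduced by ignoring the random jitter (using `delta` exactly) and starting the count at 0 for the first wait — one plausible reading of the formula that matches the example sequence; the values below are the example's, not part of the policy syntax:

```shell
# Sketch of the exponential retry wait times, ignoring jitter (delta used exactly).
# interval=10s, delta=10s, max-interval=100s; count starts at 0 for the first wait.
interval=10; delta=10; max_interval=100
for count in 0 1 2 3 4; do
  wait=$(( interval + ((1 << count) - 1) * delta ))   # interval + (2^count - 1) * delta
  if [ "$wait" -gt "$max_interval" ]; then wait=$max_interval; fi
  echo "retry $((count + 1)): wait ${wait}s"
done
# prints waits of 10s, 20s, 40s, 80s, 100s
```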
### Usage
app-service App Service Asp Net Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-asp-net-migration.md
# .NET migration cases for Azure App Service
-Azure App Service provides easy-to-use tools to quickly discover on-premise .NET web apps, assess for readiness, and migrate both the content & supported configurations to App Service.
+Azure App Service provides easy-to-use tools to quickly discover on-premises .NET web apps, assess for readiness, and migrate both the content & supported configurations to App Service.
These tools are developed to support different kinds of scenarios, focused on discovery, assessment, and migration. Following is list of .NET migration tools and use cases.
The [app containerization tool](https://azure.microsoft.com/blog/accelerate-appl
## Next steps
-[Migrate an on-premise web application to Azure App Service](/learn/modules/migrate-app-service-migration-assistant/)
+[Migrate an on-premises web application to Azure App Service](/learn/modules/migrate-app-service-migration-assistant/)
app-service App Service Java Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-java-migration.md
# Java migration resources for Azure App Service
-Azure App Service provides tools to discover web apps deployed to on-premise web servers. You can assess these apps for readiness, then migrate them to App Service. Both the web app content and supported configuration can be migrated to App Service. These tools are developed to support a wide variety of scenarios focused on discovery, assessment, and migration.
+Azure App Service provides tools to discover web apps deployed to on-premises web servers. You can assess these apps for readiness, then migrate them to App Service. Both the web app content and supported configuration can be migrated to App Service. These tools are developed to support a wide variety of scenarios focused on discovery, assessment, and migration.
## Java Tomcat migration (Linux)
-[Download the assistant](https://azure.microsoft.com/services/app-service/migration-assistant/) to migrate a Java app running on Apache Tomcat web server. You can also use Azure Container Registry to migrate on-premise Linux Docker containers to App Service.
+[Download the assistant](https://azure.microsoft.com/services/app-service/migration-assistant/) to migrate a Java app running on Apache Tomcat web server. You can also use Azure Container Registry to migrate on-premises Linux Docker containers to App Service.
| Resources | |--|
applied-ai-services Form Recognizer Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-configuration.md
> > Form Recognizer containers are in gated preview. To use them, you must submit an [online request](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUNlpBU1lFSjJUMFhKNzVHUUVLN1NIOEZETiQlQCN0PWcu), and have it approved. For more information, See [**Request approval to run container**](form-recognizer-container-install-run.md#request-approval-to-run-the-container).
-With Azure Form Recognizer containers, you can build an application architecture that's optimized to take advantage of both robust cloud capabilities and edge locality. Containers provide a minimalist, isolated environment that can be easily deployed on-premise and in the cloud. In this article, you'll learn to configure the Form Recognizer container run-time environment by using the `docker compose` command arguments. Form Recognizer features are supported by six Form Recognizer feature containersΓÇö**Layout**, **Business Card**,**ID Document**, **Receipt**, **Invoice**, **Custom**. These containers have both required and optional settings. For a few examples, see the [Example docker-compose.yml file](#example-docker-composeyml-file) section.
+With Azure Form Recognizer containers, you can build an application architecture that's optimized to take advantage of both robust cloud capabilities and edge locality. Containers provide a minimalist, isolated environment that can be easily deployed on-premises and in the cloud. In this article, you'll learn to configure the Form Recognizer container run-time environment by using the `docker compose` command arguments. Form Recognizer features are supported by six Form Recognizer feature containers: **Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, **Custom**. These containers have both required and optional settings. For a few examples, see the [Example docker-compose.yml file](#example-docker-composeyml-file) section.
## Configuration settings
container_name: azure-cognitive-service-receipt image: cognitiveservicespreview.azurecr.io/microsoft/cognitive-services-form-recognizer-receipt:2.1 environment:
- - EULA=accept
+ - EULA=accept
- billing={FORM_RECOGNIZER_ENDPOINT_URI} - key={FORM_RECOGNIZER_KEY} - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
container_name: azure-cognitive-service-read image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2 environment:
- - EULA=accept
+ - EULA=accept
- billing={COMPUTER_VISION_ENDPOINT_URI} - key={COMPUTER_VISION_KEY} networks:
applied-ai-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md
The following host machine requirements are applicable to **train and analyze**
| Custom API| 0.5 cores, 0.5-GB memory| 1 cores, 1-GB memory | |Custom Supervised | 4 cores, 2-GB memory | 8 cores, 4-GB memory|
-If you're only making analyze calls, the host machine requirements are as follows:
-
-| Container | Minimum | Recommended |
-|--||-|
-|Custom Supervised (Analyze) | 1 core, 0.5-GB | 2 cores, 1-GB memory |
* Each core must be 2.6 gigahertz (GHz) or faster. * Core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker compose` or `docker run` command.
applied-ai-services Try V3 Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-rest-api.md
After you've called the [**Analyze document**](https://westus.dev.cognitive.micr
#### GET request ```bash
-<<<<<<< HEAD
-curl -v -X GET "{endpoint}/formrecognizer/documentModels/{model name}/analyzeResults/{resultId}?api-version=2022-06-30-preview" -H "Ocp-Apim-Subscription-Key: {key}"
-=======
curl -v -X GET "{endpoint}/formrecognizer/documentModels/{modelID}/analyzeResults/{resultId}?api-version=2022-06-30-preview" -H "Ocp-Apim-Subscription-Key: {key}"
->>>>>>> resolve-merge-conflict
``` #### Examine the response
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
Title: "Azure Arc-enabled data services validation"
Previously updated : 09/30/2021 Last updated : 06/14/2022
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-| Dell EMC PowerFlex |1.19.7|v1.0.0_2021-07-30|15.0.2148.140 | Not validated |
-| PowerFlex version 3.6 |1.19.7|v1.0.0_2021-07-30|15.0.2148.140 | Not validated |
-| PowerFlex CSI version 1.4 |1.19.7|v1.0.0_2021-07-30|15.0.2148.140 | Not validated |
+| Dell EMC PowerFlex |1.21.5|v1.4.1_2022-03-08|15.0.2255.119 | Not validated |
+| PowerFlex version 3.6 |1.21.5|v1.4.1_2022-03-08|15.0.2255.119 | Not validated |
+| PowerFlex CSI version 1.4 |1.21.5|v1.4.1_2022-03-08 | Not validated |
| PowerStore X|1.20.6|v1.0.0_2021-07-30|15.0.2148.140 |postgres 12.3 (Ubuntu 12.3-1) |
-| Powerstore T|1.20.6|v1.0.0_2021-07-30|15.0.2148.140 |postgres 12.3 (Ubuntu 12.3-1)|
+| PowerStore T|1.20.6|v1.0.0_2021-07-30|15.0.2148.140 |postgres 12.3 (Ubuntu 12.3-1)|
+
+### HPE
+
+|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version
+|--|--|--|--|--|
+|HPE|1.20.0|v1.6.0_2022-05-02|16.0.41.7337|12.3 (Ubuntu 12.3-1)
+
+### Kublr
+
+|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version
+|--|--|--|--|--|
+|Kublr |1.22.0 / 1.20.12 |v1.1.0_2021-11-02 |15.0.2195.191 |PostgreSQL 12.3 (Ubuntu 12.3-1) |
+
+### Lenovo
+
+|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version
+|--|--|--|--|--|
+|Lenovo ThinkAgile MX3520 |AKS on Azure Stack HCI 21H2|v1.0.0_2021-07-30 |15.0.2148.140|Not validated|
### Nutanix |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-| Karbon 2.2<br/>AOS: 5.19.1.5<br/>AHV:20201105.1021<br/>PC: Version pc.2021.3.02<br/> | 1.19.8-0 | v1.0.0_2021-07-30 | 15.0.2148.140|postgres 12.3 (Ubuntu 12.3-1)|
+| Karbon 2.2<br/>AOS: 5.19.1.5<br/>AHV: 20201105.1021<br/>PC: Version pc.2021.3.02<br/> | 1.19.8-0 | v1.0.0_2021-07-30 | 15.0.2148.140|postgres 12.3 (Ubuntu 12.3-1)|
### Platform 9 |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-| Platform9 Managed Kubernetes v5.3.0 | 1.20.5 | v1.0.0_2021-07-30| 15.0.2148.140 | Not validated |
+| Platform9 Managed Kubernetes v5.3.0 | 1.20.5 | v1.0.0_2021-07-30| 15.0.2195.191 | PostgreSQL 12.3 (Ubuntu 12.3-1) |
### PureStorage |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-| Portworx Enterprise 2.7 | 1.20.7 | v1.0.0_2021-07-30 | 15.0.2148.140 | Not validated |
+| Portworx Enterprise 2.7 1.22.5 | 1.20.7 | v1.1.0_2021-11-02 | 15.0.2148.140 | Not validated |
### Red Hat
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-| TKGm v1.3.1 | 1.20.5 | v1.0.0_2021-07-30 | 15.0.2148.140|postgres 12.3 (Ubuntu 12.3-1)|
+| TKGm v1.5.1 | 1.20.5 | v1.4.1_2022-03-08 | 15.0.2255.119|postgres 12.3 (Ubuntu 12.3-1)|
+
+### WindRiver
+
+|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version
+|--|--|--|--|--|
+|WindRiver| 1.18.1|v1.1.0_2021-11-02 |15.0.2195.191|postgres 12.3 (Ubuntu 12.3-1) |
## Data services validation process
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/overview.md
keywords: "Kubernetes, Arc, Azure, containers"
# What is Azure Arc-enabled Kubernetes?
-Azure Arc-enabled Kubernetes allows you to attach and configure Kubernetes clusters running anywhere. You can connect your clusters running on other public cloud providers (such as GCP or AWS) or clusters running on your on-premise data center (such as VMware vSphere or Azure Stack HCI) to Azure Arc.
+Azure Arc-enabled Kubernetes allows you to attach and configure Kubernetes clusters running anywhere. You can connect your clusters running on other public cloud providers (such as GCP or AWS) or clusters running on your on-premises data center (such as VMware vSphere or Azure Stack HCI) to Azure Arc.
When you connect a Kubernetes cluster to Azure Arc, it will:
Azure Arc-enabled Kubernetes supports the following scenarios for connected clus
* [Connect Kubernetes](quickstart-connect-cluster.md) running outside of Azure for inventory, grouping, and tagging.
-* Deploy applications and apply configuration using [GitOps-based configuration management](tutorial-use-gitops-connected-cluster.md).
+* Deploy applications and apply configuration using [GitOps-based configuration management](tutorial-use-gitops-connected-cluster.md).
* View and monitor your clusters using [Azure Monitor for containers](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json).
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
Title: "Troubleshoot common Azure Arc-enabled Kubernetes issues"
# Previously updated : 06/07/2022 Last updated : 06/13/2022 description: "Learn how to resolve common issues with Azure Arc-enabled Kubernetes clusters and GitOps." keywords: "Kubernetes, Arc, Azure, containers, GitOps, Flux"
spec:
app.kubernetes.io/name: flux-extension ```
+### Flux v2 - `microsoft.flux` extension installation CPU and memory limits
+
+The controllers installed in your Kubernetes cluster by the `microsoft.flux` extension require the following CPU and memory resource limits to be scheduled properly on Kubernetes cluster nodes.
+
+| Container Name | CPU limit | Memory limit |
+| -- | -- | -- |
+| fluxconfig-agent | 50m | 150Mi |
+| fluxconfig-controller | 100m | 150Mi |
+| fluent-bit | 20m | 150Mi |
+| helm-controller | 1000m | 1Gi |
+| source-controller | 1000m | 1Gi |
+| kustomize-controller | 1000m | 1Gi |
+| notification-controller | 1000m | 1Gi |
+| image-automation-controller | 1000m | 1Gi |
+| image-reflector-controller | 1000m | 1Gi |
+
+If you have enabled a custom or built-in Azure Gatekeeper policy that limits the resources for containers on Kubernetes clusters, such as `Kubernetes cluster containers CPU and memory resource limits should not exceed the specified limits`, you will need to either ensure that the resource limits in the policy are greater than the limits shown above, or ensure that the `flux-system` namespace is part of the `excludedNamespaces` parameter in the policy assignment.
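The comparison described above can be sketched in a few lines. This is purely illustrative (the helper names and the policy values are made up, and the quantity parser covers only the units used in the table):

```python
def parse_cpu(q):
    """Parse a Kubernetes CPU quantity ('1000m' or '2') into millicores."""
    return int(q[:-1]) if q.endswith("m") else int(float(q) * 1000)

def parse_memory(q):
    """Parse a Kubernetes memory quantity ('150Mi', '1Gi') into bytes."""
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return int(q[:-2]) * factor
    return int(q)

# A subset of the container limits from the table above.
FLUX_LIMITS = {
    "fluxconfig-agent": ("50m", "150Mi"),
    "helm-controller": ("1000m", "1Gi"),
    "source-controller": ("1000m", "1Gi"),
}

def policy_covers_flux(policy_cpu, policy_memory):
    """True if the policy's limits are >= every flux container limit."""
    return all(
        parse_cpu(cpu) <= parse_cpu(policy_cpu)
        and parse_memory(mem) <= parse_memory(policy_memory)
        for cpu, mem in FLUX_LIMITS.values()
    )
```

If `policy_covers_flux` returns `False` for your policy values, the `flux-system` namespace would need to go into `excludedNamespaces` instead.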
+
## Monitoring
Azure Monitor for Containers requires its DaemonSet to run in privileged mode. To successfully set up a Canonical Charmed Kubernetes cluster for monitoring, run the following command:
azure-functions Durable Functions Storage Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-storage-providers.md
You can learn more about the technical details of the Netherite storage provider
## <a name="mssql"></a>Microsoft SQL Server (MSSQL) (preview)
-The Microsoft SQL Server (MSSQL) storage provider persists all state into a Microsoft SQL Server database. It's compatible with both on-premise and cloud-hosted deployments of SQL Server, including [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview).
+The Microsoft SQL Server (MSSQL) storage provider persists all state into a Microsoft SQL Server database. It's compatible with both on-premises and cloud-hosted deployments of SQL Server, including [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview).
The key benefits of the MSSQL storage provider include:
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
Connection string for storage account where the function app code and configurat
||| |WEBSITE_CONTENTAZUREFILECONNECTIONSTRING|`DefaultEndpointsProtocol=https;AccountName=...`|
-This setting is used for Consumption and Premium plan apps on both Windows and Linux. It's not used for Dedicated plan apps, which aren't dynamically scaled by Functions.
+This setting is required for Consumption and Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions.
Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
The file path to the function app code and configuration in an event-driven scal
||| |WEBSITE_CONTENTSHARE|`functionapp091999e2`|
-This setting is used for Consumption and Premium plan apps on both Windows and Linux. It's not used for Dedicated plan apps, which aren't dynamically scaled by Functions.
+This setting is required for Consumption and Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions.
Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
azure-functions Functions Identity Access Azure Sql With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-identity-access-azure-sql-with-managed-identity.md
description: Learn how to connect Azure SQL bindings through managed identity. Previously updated : 1/28/2022 Last updated : 6/13/2022
Enabling Azure AD authentication can be completed via the Azure portal, PowerShe
1. Find the object ID of the Azure AD user using the [`az ad user list`](/cli/azure/ad/user#az-ad-user-list) and replace *\<user-principal-name>*. The result is saved to a variable.
+ For Azure CLI 2.37.0 and newer:
+
+ ```azurecli-interactive
+ azureaduser=$(az ad user list --filter "userPrincipalName eq '<user-principal-name>'" --query [].id --output tsv)
+ ```
+
+ For older versions of Azure CLI:
+    ```azurecli-interactive
+    azureaduser=$(az ad user list --filter "userPrincipalName eq '<user-principal-name>'" --query [].objectId --output tsv)
+    ```
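The only difference between the two commands is the output key: Azure CLI 2.37.0 and newer returns `id`, while older versions return `objectId`. A hypothetical version-tolerant sketch of that lookup, applied to already-fetched JSON rather than a live `az` call:

```python
# Illustrative helper: pull object IDs from `az ad user list` JSON output,
# tolerating both the newer key ('id', CLI >= 2.37.0) and the legacy key
# ('objectId', older CLI versions). The function name is made up.
def extract_object_ids(users):
    ids = []
    for user in users:
        # Prefer 'id' (newer CLI), fall back to 'objectId' (older CLI).
        value = user.get("id") or user.get("objectId")
        if value:
            ids.append(value)
    return ids
```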
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-overview.md
# Azure Monitor agent overview
-The Azure Monitor agent (AMA) collects monitoring data from the guest operating system of [supported infrastucture](#supported-resource-types) and delivers it to Azure Monitor. This article provides an overview of the Azure Monitor agent and includes information on how to install it and how to configure data collection.
+The Azure Monitor agent (AMA) collects monitoring data from the guest operating system of [supported infrastructure](#supported-resource-types) and delivers it to Azure Monitor. This article provides an overview of the Azure Monitor agent and includes information on how to install it and how to configure data collection.
Here's an **introductory video** explaining all about this new agent, including a quick demo of how to set things up using the Azure portal: [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs)
## Relationship to other agents
Eventually, the Azure Monitor agent will replace the following legacy monitoring agents that are currently used by Azure Monitor.
- [Log Analytics agent](./log-analytics-agent.md): Sends data to a Log Analytics workspace and supports VM insights and monitoring solutions.
-- [Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md): Sends data to Azure Monitor Metrics (Linux only).
+- [Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md): Sends data to Azure Monitor Metrics (Linux only).
- [Diagnostics extension](./diagnostics-extension-overview.md): Sends data to Azure Monitor Metrics (Windows only), Azure Event Hubs, and Azure Storage.
**Currently**, the Azure Monitor agent consolidates features from the Telegraf agent and Log Analytics agent, with [a few limitations](#current-limitations).
-In future, it will also consolidate features from the Diagnostic extensions.
+In future, it will also consolidate features from the Diagnostic extensions.
In addition to consolidating this functionality into a single agent, the Azure Monitor agent provides the following benefits over the existing agents:
-- **Cost savings:**
+- **Cost savings:**
  - Granular targeting via [Data Collection Rules](../essentials/data-collection-rule-overview.md) to collect specific data types from specific machines, as compared to the "all or nothing" mode that Log Analytics agent supports
  - Use XPath queries to filter Windows events that get collected. This helps further reduce ingestion and storage costs.
- **Simplified management of data collection:** Send data from Windows and Linux VMs to multiple Log Analytics workspaces (i.e. "multi-homing") and/or other [supported destinations](#data-sources-and-destinations). Additionally, every action across the data collection lifecycle, from onboarding to deployment to updates, is significantly easier, scalable, and centralized (in Azure) using data collection rules
The Azure Monitor agent uses [data collection rules](../essentials/data-collecti
## Should I switch to the Azure Monitor agent?
To start transitioning your VMs off the current agents to the new agent, consider the following factors:
-- **Environment requirements:** The Azure Monitor agent supports [these operating systems](./agents-overview.md#supported-operating-systems) today. Support for future operating system versions, environment support, and networking requirements will only be provided in this new agent. If the Azure Monitor agent supports your current environment, start transitioning to it.
+- **Environment requirements:** The Azure Monitor agent supports [these operating systems](./agents-overview.md#supported-operating-systems) today. Support for future operating system versions, environment support, and networking requirements will only be provided in this new agent. If the Azure Monitor agent supports your current environment, start transitioning to it.
-- **Current and new feature requirements:** The Azure Monitor agent introduces several new capabilities, such as filtering, scoping, and multi-homing. But it isn't at parity yet with the current agents for other functionality. View [current limitations](#current-limitations) and [supported solutions](#supported-services-and-features).
+- **Current and new feature requirements:** The Azure Monitor agent introduces several new capabilities, such as filtering, scoping, and multi-homing. But it isn't at parity yet with the current agents for other functionality. View [current limitations](#current-limitations) and [supported solutions](#supported-services-and-features).
+
+ That said, most new capabilities in Azure Monitor will be made available only with the Azure Monitor agent. Review whether the Azure Monitor agent has the features you require and if there are some features that you can temporarily do without to get other important features in the new agent.
- That said, most new capabilities in Azure Monitor will be made available only with the Azure Monitor agent. Review whether the Azure Monitor agent has the features you require and if there are some features that you can temporarily do without to get other important features in the new agent.
-
If the Azure Monitor agent has all the core capabilities you require, start transitioning to it. If there are critical features that you require, continue with the current agent until the Azure Monitor agent reaches parity.
-- **Tolerance for rework:** If you're setting up a new environment with resources such as deployment scripts and onboarding templates, assess the effort involved. If the setup will take a significant amount of work, consider setting up your new environment with the new agent as it's now generally available.
-
+- **Tolerance for rework:** If you're setting up a new environment with resources such as deployment scripts and onboarding templates, assess the effort involved. If the setup will take a significant amount of work, consider setting up your new environment with the new agent as it's now generally available.
+
Azure Monitor's Log Analytics agent is retiring on 31 August 2024. The current agents will be supported until the retirement date.
## Coexistence with other agents
The Azure Monitor agent can coexist (run side by side on the same machine) with the legacy Log Analytics agents so that you can continue to use their existing functionality during evaluation or migration. While this allows you to begin transition given the limitations, you must review the below points carefully:
- Be careful in collecting duplicate data because it could skew query results and affect downstream features like alerts, dashboards or workbooks. For example, VM insights uses the Log Analytics agent to send performance data to a Log Analytics workspace. You might also have configured the workspace to collect Windows events and Syslog events from agents. If you install the Azure Monitor agent and create a data collection rule for these same events and performance data, it will result in duplicate data. As such, ensure you're not collecting the same data from both agents. If you are, ensure they're **collecting from different machines** or **going to separate destinations**.
- Besides data duplication, this would also generate more charges for data ingestion and retention.
-- Running two telemetry agents on the same machine would result in double the resource consumption, including but not limited to CPU, memory, storage space and network bandwidth.
+- Running two telemetry agents on the same machine would result in double the resource consumption, including but not limited to CPU, memory, storage space and network bandwidth.
> [!NOTE]
> When using both agents during evaluation or migration, you can use the **'Category'** column of the [Heartbeat](/azure/azure-monitor/reference/tables/Heartbeat) table in your Log Analytics workspace, and filter for 'Azure Monitor Agent'.
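The note's filter can be illustrated on sample rows (the rows and the 'Direct Agent' value below are made-up sample data; only the 'Azure Monitor Agent' category string comes from the note):

```python
# Made-up Heartbeat rows; in practice these come from a Log Analytics query.
heartbeats = [
    {"Computer": "vm-01", "Category": "Azure Monitor Agent"},
    {"Computer": "vm-01", "Category": "Direct Agent"},
    {"Computer": "vm-02", "Category": "Azure Monitor Agent"},
]

# Machines already reporting through the Azure Monitor agent.
ama_computers = sorted(
    {row["Computer"] for row in heartbeats if row["Category"] == "Azure Monitor Agent"}
)
```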
The Azure Monitor agent can coexist (run side by side on the same machine) with
| Resource type | Installation method | Additional information |
|:|:|:|
| Virtual machines, scale sets | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) | Installs the agent using Azure extension framework |
-| On-premise servers (Arc-enabled servers) | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (after installing [Arc agent](../../azure-arc/servers/deployment-options.md)) | Installs the agent using Azure extension framework, provided for on-premise by first installing [Arc agent](../../azure-arc/servers/deployment-options.md) |
+| On-premises servers (Arc-enabled servers) | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (after installing [Arc agent](../../azure-arc/servers/deployment-options.md)) | Installs the agent using Azure extension framework, provided for on-premises by first installing [Arc agent](../../azure-arc/servers/deployment-options.md) |
| Windows 10, 11 desktops, workstations | [Client installer (preview)](./azure-monitor-agent-windows-client.md) | Installs the agent using a Windows MSI installer |
| Windows 10, 11 laptops | [Client installer (preview)](./azure-monitor-agent-windows-client.md) | Installs the agent using a Windows MSI installer. The install works on laptops but the agent is **not optimized yet** for battery, network consumption |
The Azure Monitor agent sends data to Azure Monitor Metrics (preview) or a Log A
| Syslog | Log Analytics workspace - [Syslog](/azure/azure-monitor/reference/tables/syslog)<sup>2</sup> table | Information sent to the Linux event logging system |
| Text logs | Log Analytics workspace - custom table | Events sent to log file on agent machine. |
-<sup>1</sup> [Click here](../essentials/metrics-custom-overview.md#quotas-and-limits) to review other limitations of using Azure Monitor Metrics. On Linux, using Azure Monitor Metrics as the only destination is supported in v1.10.9.0 or higher.
+<sup>1</sup> [Click here](../essentials/metrics-custom-overview.md#quotas-and-limits) to review other limitations of using Azure Monitor Metrics. On Linux, using Azure Monitor Metrics as the only destination is supported in v1.10.9.0 or higher.
<sup>2</sup> Azure Monitor Linux Agent v1.15.2 or higher supports syslog RFC formats including **Cisco Meraki, Cisco ASA, Cisco FTD, Sophos XG, Juniper Networks, Corelight Zeek, CipherTrust, NXLog, McAfee and CEF (Common Event Format)**.
## Supported services and features
-The following table shows the current support for the Azure Monitor agent with other Azure services.
+The following table shows the current support for the Azure Monitor agent with other Azure services.
| Azure service | Current support | More information |
|:|:|:|
The Azure Monitor agent supports Azure service tags (both *AzureMonitor* and *Az
### Firewall requirements
| Cloud |Endpoint |Purpose |Port |Direction |Bypass HTTPS inspection|
|--|--|--|--|--|--|
-| Azure Commercial |global.handler.control.monitor.azure.com |Access control service|Port 443 |Outbound|Yes |
-| Azure Commercial |`<virtual-machine-region-name>`.handler.control.monitor.azure.com |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes |
-| Azure Commercial |`<log-analytics-workspace-id>`.ods.opinsights.azure.com |Ingest logs data |Port 443 |Outbound|Yes |
-| Azure Government |global.handler.control.monitor.azure.us |Access control service|Port 443 |Outbound|Yes |
-| Azure Government |`<virtual-machine-region-name>`.handler.control.monitor.azure.us |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes |
-| Azure Government |`<log-analytics-workspace-id>`.ods.opinsights.azure.us |Ingest logs data |Port 443 |Outbound|Yes |
-| Azure China |global.handler.control.monitor.azure.cn |Access control service|Port 443 |Outbound|Yes |
-| Azure China |`<virtual-machine-region-name>`.handler.control.monitor.azure.cn |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes |
-| Azure China |`<log-analytics-workspace-id>`.ods.opinsights.azure.cn |Ingest logs data |Port 443 |Outbound|Yes |
+| Azure Commercial |global.handler.control.monitor.azure.com |Access control service|Port 443 |Outbound|Yes |
+| Azure Commercial |`<virtual-machine-region-name>`.handler.control.monitor.azure.com |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes |
+| Azure Commercial |`<log-analytics-workspace-id>`.ods.opinsights.azure.com |Ingest logs data |Port 443 |Outbound|Yes |
+| Azure Government |global.handler.control.monitor.azure.us |Access control service|Port 443 |Outbound|Yes |
+| Azure Government |`<virtual-machine-region-name>`.handler.control.monitor.azure.us |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes |
+| Azure Government |`<log-analytics-workspace-id>`.ods.opinsights.azure.us |Ingest logs data |Port 443 |Outbound|Yes |
+| Azure China |global.handler.control.monitor.azure.cn |Access control service|Port 443 |Outbound|Yes |
+| Azure China |`<virtual-machine-region-name>`.handler.control.monitor.azure.cn |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes |
+| Azure China |`<log-analytics-workspace-id>`.ods.opinsights.azure.cn |Ingest logs data |Port 443 |Outbound|Yes |
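The endpoints in the table follow a single pattern per cloud, differing only in the DNS suffix. A small illustrative sketch (the function and cloud keys are hypothetical, not an Azure SDK API):

```python
# DNS suffixes from the table above: commercial, US Government, China clouds.
SUFFIX = {"commercial": "com", "government": "us", "china": "cn"}

def firewall_endpoints(cloud, region, workspace_id):
    """Return the three outbound HTTPS (port 443) hosts to allow."""
    tld = SUFFIX[cloud]
    return [
        f"global.handler.control.monitor.azure.{tld}",           # access control service
        f"{region}.handler.control.monitor.azure.{tld}",         # fetch data collection rules
        f"{workspace_id}.ods.opinsights.azure.{tld}",            # log ingestion
    ]
```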
If using private links on the agent, you must also add the [DCE endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint)
New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType Azur
### Log Analytics gateway configuration
-1. Follow the instructions above to configure proxy settings on the agent and provide the IP address and port number corresponding to the gateway server. If you have deployed multiple gateway servers behind a load balancer, the agent proxy configuration is the virtual IP address of the load balancer instead.
-2. Add the **configuration endpoint URL** to fetch data collection rules to the allowlist for the gateway
- `Add-OMSGatewayAllowedHost -Host global.handler.control.monitor.azure.com`
- `Add-OMSGatewayAllowedHost -Host <gateway-server-region-name>.handler.control.monitor.azure.com`
- (If using private links on the agent, you must also add the [dce endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint))
-3. Add the **data ingestion endpoint URL** to the allowlist for the gateway
- `Add-OMSGatewayAllowedHost -Host <log-analytics-workspace-id>.ods.opinsights.azure.com`
-3. Restart the **OMS Gateway** service to apply the changes
- `Stop-Service -Name <gateway-name>`
- `Start-Service -Name <gateway-name>`
+1. Follow the instructions above to configure proxy settings on the agent and provide the IP address and port number corresponding to the gateway server. If you have deployed multiple gateway servers behind a load balancer, the agent proxy configuration is the virtual IP address of the load balancer instead.
+2. Add the **configuration endpoint URL** to fetch data collection rules to the allowlist for the gateway
+ `Add-OMSGatewayAllowedHost -Host global.handler.control.monitor.azure.com`
+ `Add-OMSGatewayAllowedHost -Host <gateway-server-region-name>.handler.control.monitor.azure.com`
+ (If using private links on the agent, you must also add the [dce endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint))
+3. Add the **data ingestion endpoint URL** to the allowlist for the gateway
+ `Add-OMSGatewayAllowedHost -Host <log-analytics-workspace-id>.ods.opinsights.azure.com`
+3. Restart the **OMS Gateway** service to apply the changes
+ `Stop-Service -Name <gateway-name>`
+ `Start-Service -Name <gateway-name>`
### Private link configuration
-To configure the agent to use private links for network communications with Azure Monitor, follow instructions to [enable network isolation](./azure-monitor-agent-data-collection-endpoint.md#enable-network-isolation-for-the-azure-monitor-agent) using [data collection endpoints](azure-monitor-agent-data-collection-endpoint.md).
+To configure the agent to use private links for network communications with Azure Monitor, follow instructions to [enable network isolation](./azure-monitor-agent-data-collection-endpoint.md#enable-network-isolation-for-the-azure-monitor-agent) using [data collection endpoints](azure-monitor-agent-data-collection-endpoint.md).
## Next steps
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
With the new client installer available in this preview, you can now collect tel
Both the [generally available extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) and this installer use Data Collection rules to configure the **same underlying agent**.
### Comparison with virtual machine extension
-Here is a comparison between client installer and VM extension for Azure Monitor agent. It also highlights which parts are in preview:
+Here is a comparison between client installer and VM extension for Azure Monitor agent. It also highlights which parts are in preview:
-| Functional component | For VMs/servers via extension | For clients via installer|
+| Functional component | For VMs/servers via extension | For clients via installer|
|:|:|:|
| Agent installation method | Via VM extension | Via client installer <sup>preview</sup> |
| Agent installed | Azure Monitor Agent | Same |
Here is a comparison between client installer and VM extension for Azure Monitor
| Associating config rules to agents | DCRs associate directly to individual VM resources | DCRs associate to Monitored Object (MO), which maps to all devices within the AAD tenant <sup>preview</sup> |
| Data upload to Log Analytics | Via Log Analytics endpoints | Same |
| Feature support | All features documented [here](./azure-monitor-agent-overview.md) | Features dependent on AMA agent extension that don't require additional extensions. This includes support for Sentinel Windows Event filtering |
-| [Networking options](./azure-monitor-agent-overview.md#networking) | Proxy support, Private link support | Proxy support only |
+| [Networking options](./azure-monitor-agent-overview.md#networking) | Proxy support, Private link support | Proxy support only |
Here is a comparison between client installer and VM extension for Azure Monitor
| Windows 10, 11 desktops, workstations | Yes | Client installer (preview) | Installs the agent using a Windows MSI installer |
| Windows 10, 11 laptops | Yes | Client installer (preview) | Installs the agent using a Windows MSI installer. The install works on laptops but the agent is **not optimized yet** for battery, network consumption |
| Virtual machines, scale sets | No | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) | Installs the agent using Azure extension framework |
-| On-premise servers | No | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (with Azure Arc agent) | Installs the agent using Azure extension framework, provided for on-premise by installing Arc agent |
+| On-premises servers | No | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (with Azure Arc agent) | Installs the agent using Azure extension framework, provided for on-premises by installing Arc agent |
## Prerequisites
Here is a comparison between client installer and VM extension for Azure Monitor
5. The device must have access to the following HTTPS endpoints:
    - global.handler.control.monitor.azure.com
    - `<virtual-machine-region-name>`.handler.control.monitor.azure.com (example: westus.handler.control.monitor.azure.com)
- - `<log-analytics-workspace-id>`.ods.opinsights.azure.com (example: 12345a01-b1cd-1234-e1f2-1234567g8h99.ods.opsinsights.azure.com)
+ - `<log-analytics-workspace-id>`.ods.opinsights.azure.com (example: 12345a01-b1cd-1234-e1f2-1234567g8h99.ods.opsinsights.azure.com)
    (If using private links on the agent, you must also add the [data collection endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint))
6. Existing data collection rule(s) you wish to associate with the devices. If it doesn't exist already, [follow the guidance here to create data collection rule(s)](./data-collection-rule-azure-monitor-agent.md#create-rule-and-associationusingrestapi). **Do not associate the rule to any resources yet**.
-
-## Install the agent
+
+## Install the agent
1. Download the Windows MSI installer for the agent using [this link](https://go.microsoft.com/fwlink/?linkid=2192409). You can also download it from **Monitor** > **Data Collection Rules** > **Create** experience on Azure portal (shown below):
    [![Diagram shows download agent link on Azure portal.](media/azure-monitor-agent-windows-client/azure-monitor-agent-client-installer-portal.png)](media/azure-monitor-agent-windows-client/azure-monitor-agent-client-installer-portal-focus.png#lightbox)
2. Open an elevated admin command prompt window and update path to the location where you downloaded the installer.
-3. To install with **default settings**, run the following command:
+3. To install with **default settings**, run the following command:
    ```cli
    msiexec /i AzureMonitorAgentClientSetup.msi /qn
    ```
Here is a comparison between client installer and VM extension for Azure Monitor
msiexec /i AzureMonitorAgentClientSetup.msi /qn DATASTOREDIR="C:\example\folder" ```
- | Parameter | Description |
+ | Parameter | Description |
|:|:|
| INSTALLDIR | Directory path where the agent binaries are installed |
| DATASTOREDIR | Directory path where the agent stores its operational logs and data |
Here is a comparison between client installer and VM extension for Azure Monitor
| PROXYADDRESS | Set to Proxy Address. PROXYUSE must be set to "true" to be correctly applied |
| PROXYUSEAUTH | Set to "true" if proxy requires authentication |
| PROXYUSERNAME | Set to Proxy username. PROXYUSE and PROXYUSEAUTH must be set to "true" |
- | PROXYPASSWORD | Set to Proxy password. PROXYUSE and PROXYUSEAUTH must be set to "true" |
+ | PROXYPASSWORD | Set to Proxy password. PROXYUSE and PROXYUSEAUTH must be set to "true" |
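The optional parameters above are appended to the same base command shown earlier. A hypothetical helper that composes the command line (the quoting style mirrors the `DATASTOREDIR` example; the function is illustrative, not an official installer API):

```python
# Illustrative: build the msiexec silent-install command from a dict of the
# optional NAME="value" parameters documented in the tables above.
def build_install_command(params):
    parts = ["msiexec", "/i", "AzureMonitorAgentClientSetup.msi", "/qn"]
    # Sort for a deterministic command line.
    parts += [f'{name}="{value}"' for name, value in sorted(params.items())]
    return " ".join(parts)
```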
5. Verify successful installation:
- - Open **Control Panel** -> **Programs and Features** OR **Settings** -> **Apps** -> **Apps & Features** and ensure you see 'Azure Monitor Agent' listed
- - Open **Services** and confirm 'Azure Monitor Agent' is listed and shows as **Running**.
+ - Open **Control Panel** -> **Programs and Features** OR **Settings** -> **Apps** -> **Apps & Features** and ensure you see 'Azure Monitor Agent' listed
+ - Open **Services** and confirm 'Azure Monitor Agent' is listed and shows as **Running**.
6. Proceed to create the monitored object that you'll associate data collection rules to, for the agent to actually start operating.
> [!NOTE]
-> The agent installed with the client installer currently doesn't support updating configuration once it is installed. Uninstall and reinstall AMA to update its configuration.
+> The agent installed with the client installer currently doesn't support updating configuration once it is installed. Uninstall and reinstall AMA to update its configuration.
## Create and associate a 'Monitored Object'
Then, proceed with the instructions below to create and associate them to a Moni
#### 1. Assign 'Monitored Object Contributor' role to the operator
-This step grants the ability to create and link a monitored object to a user.
+This step grants the ability to create and link a monitored object to a user.
**Permissions required:** Since MO is a tenant level resource, the scope of the permission would be higher than a subscription scope. Therefore, an Azure tenant admin may be needed to perform this step. [Follow these steps to elevate Azure AD Tenant Admin as Azure Tenant Admin](../../role-based-access-control/elevate-access-global-admin.md). It will give the Azure AD admin 'owner' permissions at the root scope.
**Request URI**
PUT https://management.azure.com/providers/microsoft.insights/providers/microsof
| Name | In | Type | Description |
|:|:|:|:|
-| `roleAssignmentGUID` | path | string | Provide any valid guid (you can generate one using https://guidgenerator.com/) |
+| `roleAssignmentGUID` | path | string | Provide any valid guid (you can generate one using https://guidgenerator.com/) |
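Since any valid GUID works for `roleAssignmentGUID`, you can also generate one locally instead of using the web generator, for example with Python's `uuid` module:

```python
import uuid

# Any random (version 4) UUID is a valid role-assignment name.
role_assignment_guid = str(uuid.uuid4())
```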
**Headers**
- Authorization: ARM Bearer Token (using 'Get-AzAccessToken' or other method)
PUT https://management.azure.com/providers/microsoft.insights/providers/microsof
} ```
-**Body parameters**
+**Body parameters**
| Name | Description |
|:|:|
| roleDefinitionId | Fixed value: Role definition ID of the 'Monitored Objects Contributor' role: `/providers/Microsoft.Authorization/roleDefinitions/56be40e24db14ccf93c37e44c597135b` |
-| principalId | Provide the `Object Id` of the identity of the user to which the role needs to be assigned. It may be the user who elevated at the beginning of step 1, or another user who will perform later steps. |
+| principalId | Provide the `Object Id` of the identity of the user to which the role needs to be assigned. It may be the user who elevated at the beginning of step 1, or another user who will perform later steps. |
-After this step is complete, **reauthenticate** your session and **reacquire** your ARM bearer token.
+After this step is complete, **reauthenticate** your session and **reacquire** your ARM bearer token.
#### 2. Create Monitored Object
This step creates the Monitored Object for the Azure AD Tenant scope. It will be used to represent client devices that are signed in with that Azure AD Tenant identity.
PUT https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/{
| Name | In | Type | Description |
|:|:|:|:|
-| `AADTenantId` | path | string | ID of the Azure AD tenant that the device(s) belong to. The MO will be created with the same ID |
+| `AADTenantId` | path | string | ID of the Azure AD tenant that the device(s) belong to. The MO will be created with the same ID |
**Headers**
- Authorization: ARM Bearer Token
PUT https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/{
#### 3. Associate DCR to Monitored Object
-Now we associate the Data Collection Rules (DCR) to the Monitored Object by creating a Data Collection Rule Associations. If you haven't already, [follow instructions here](./data-collection-rule-azure-monitor-agent.md#create-rule-and-associationusingrestapi) to create data collection rule(s) first.
+Now we associate the Data Collection Rules (DCR) to the Monitored Object by creating Data Collection Rule Associations. If you haven't already, [follow instructions here](./data-collection-rule-azure-monitor-agent.md#create-rule-and-associationusingrestapi) to create data collection rule(s) first.
**Permissions required**: Anyone who has 'Monitored Object Contributor' at an appropriate scope can perform this operation, as assigned in step 1.
**Request URI**
PUT https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/{
**Request Body**
```JSON
{
- "properties":
+ "properties":
    {
        "dataCollectionRuleId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Insights/dataCollectionRules/{DCRName}"
    }
You can use any of the following options to check the installed version of the agent:
- Open **Control Panel** > **Programs and Features** > **Azure Monitor Agent** and see the 'Version' listed
- Open **Settings** > **Apps** > **Apps and Features** > **Azure Monitor Agent** and see the 'Version' listed
+### Uninstall the agent
You can use any of the following options to uninstall the agent:
- Open **Control Panel** > **Programs and Features** > **Azure Monitor Agent** and click 'Uninstall'
- Open **Settings** > **Apps** > **Apps and Features** > **Azure Monitor Agent** and click 'Uninstall'
If you face issues during uninstallation, see the [troubleshooting guidance](#troubleshoot) below.
+### Update the agent
To update the agent, install the new version you want to update to.
### View agent diagnostic logs
1. Rerun the installation with logging turned on and specify the log file name: `Msiexec /I AzureMonitorAgentClientSetup.msi /L*V <log file name>`
+2. Runtime logs are collected automatically either at the default location `C:\Resources\Azure Monitor Agent\` or at the file path mentioned during installation.
- If you can't locate the path, the exact location can be found on the registry as `AMADataRootDirPath` on `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureMonitorAgent`.
+3. The 'ServiceLogs' folder contains logs from the AMA Windows service, which launches and manages AMA processes.
4. 'AzureMonitorAgent.MonitoringDataStore' contains data/logs from AMA processes.

### Common issues

#### Missing DLL
+- Error message: "There's a problem with this Windows Installer package. A DLL required for this installer to complete could not be run. …"
+- Ensure you have installed [C++ Redistributable (>2015)](/cpp/windows/latest-supported-vc-redist?view=msvc-170&preserve-view=true) before installing AMA:
+#### Silent install from command prompt fails
+Make sure to start the installer from an administrator command prompt. Silent install can only be initiated from an administrator command prompt.
+#### Uninstallation fails due to the uninstaller being unable to stop the service
+- If there's an option to try again, try again
+- If retry from uninstaller doesn't work, cancel the uninstall and stop Azure Monitor Agent service from Services (Desktop Application)
+- Retry uninstall
+#### Force uninstall manually when uninstaller doesn't work
+- Stop Azure Monitor Agent service. Then try uninstalling again. If it fails, then proceed with the following steps
+- Delete the AMA service with `sc delete AzureMonitorAgent` from an admin command prompt
+- Download [this tool](https://support.microsoft.com/topic/fix-problems-that-block-programs-from-being-installed-or-removed-cca7d1b6-65a9-3d98-426b-e9f927e1eb4d) and uninstall AMA
+- Delete AMA binaries. They're stored in `Program Files\Azure Monitor Agent` by default
+- Delete AMA data/logs. They're stored in `C:\Resources\Azure Monitor Agent` by default
+- Open Registry. Check `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure Monitor Agent`. If it exists, delete the key.
+## Questions and feedback
Take this [quick survey](https://forms.microsoft.com/r/CBhWuT1rmM) or share your feedback/questions regarding the preview on the [Azure Monitor Agent User Community](https://teams.microsoft.com/l/team/19%3af3f168b782f64561b52abe75e59e83bc%40thread.tacv2/conversations?groupId=770d6aa5-c2f7-4794-98a0-84fd6ae7f193&tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47).
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
Title: Create and manage action groups in the Azure portal
-description: Learn how to create and manage action groups in the Azure portal.
+ Title: Manage action groups in the Azure portal
+description: Find out how to create and manage action groups. Learn about notifications and actions that action groups enable, such as email, webhooks, and Azure Functions.
Previously updated : 6/2/2022
Last updated : 06/06/2022
+ - references_regions
+ - kr2b-contr-experiment
# Create and manage action groups in the Azure portal
-An action group is a collection of notification preferences defined by the owner of an Azure subscription. Azure Monitor, Service Health and Azure Advisor alerts use action groups to notify users that an alert has been triggered. Various alerts may use the same action group or different action groups depending on the user's requirements.
-This article shows you how to create and manage action groups in the Azure portal.
+When Azure Monitor data indicates that there might be a problem with your infrastructure or application, an alert is triggered. Azure Monitor, Azure Service Health, and Azure Advisor then use *action groups* to notify users about the alert and take an action. An action group is a collection of notification preferences that are defined by the owner of an Azure subscription.
+
+This article shows you how to create and manage action groups in the Azure portal. Depending on your requirements, you can configure various alerts to use the same action group or different action groups.
Each action is made up of the following properties:
-* **Type**: The notification or action performed. Examples include sending a voice call, SMS, email; or triggering various types of automated actions. See types later in this article.
-* **Name**: A unique identifier within the action group.
-* **Details**: The corresponding details that vary by *type*.
+- **Type**: The notification that's sent or action that's performed. Examples include sending a voice call, SMS, or email. You can also trigger various types of automated actions. For detailed information about notification and action types, see [Action-specific information](#action-specific-information), later in this article.
+- **Name**: A unique identifier within the action group.
+- **Details**: The corresponding details that vary by type.
-For information on how to use Azure Resource Manager templates to configure action groups, see [Action group Resource Manager templates](./action-groups-create-resource-manager-template.md).
+For information about how to use Azure Resource Manager templates to configure action groups, see [Action group Resource Manager templates](./action-groups-create-resource-manager-template.md).
-Action Group is **Global** service, therefore there's no dependency on a specific Azure region. Requests from client can be processed by action group service in any region, which means, if one region of service is down, the traffic will be routed and process by other regions automatically. Being a *global service* it helps client not to worry about **disaster recovery**.
+An action group is a **global** service, so there's no dependency on a specific Azure region. Requests from clients can be processed by action group services in any region. For instance, if one region of the action group service is down, the traffic is automatically routed and processed by other regions. As a global service, an action group helps provide a **disaster recovery** solution.
## Create an action group by using the Azure portal
-1. In the [Azure portal](https://portal.azure.com), search for and select **Monitor**. The **Monitor** pane consolidates all your monitoring settings and data in one view.
+1. Go to the [Azure portal](https://portal.azure.com).
-1. Select **Alerts**, then select **Manage actions**.
+1. Search for and select **Monitor**. The **Monitor** pane consolidates all your monitoring settings and data in one view.
- ![Manage Actions button](./media/action-groups/manage-action-groups.png)
+1. Select **Alerts**, and then select **Action groups**.
-1. Select **Add action group**, and fill in the relevant fields in the wizard experience.
+ :::image type="content" source="./media/action-groups/manage-action-groups.png" alt-text="Screenshot of the Alerts page in the Azure portal. The Action groups button is called out.":::
- ![The "Add action group" command](./media/action-groups/add-action-group.PNG)
+1. Select **Create**.
-### Configure basic action group settings
+ :::image type="content" source="./media/action-groups/create-action-group.png" alt-text="Screenshot of the Action groups page in the Azure portal. The Create button is called out.":::
-Under **Project details**:
+1. Enter information as explained in the following sections.
-Select the **Subscription** and **Resource group** in which the action group is saved.
+### Configure basic action group settings
-Under **Instance details**:
+1. Under **Project details**, select values for **Subscription** and **Resource group**. The action group is saved in the subscription and resource group that you select.
-1. Enter an **Action group name**.
+1. Under **Instance details**, enter values for **Action group name** and **Display name**. The display name is used in place of a full action group name when the group is used to send notifications.
-1. Enter a **Display name**. The display name is used in place of a full action group name when notifications are sent using this group.
+ :::image type="content" source="./media/action-groups/action-group-1-basics.png" alt-text="Screenshot of the Create action group dialog box. Values are visible in the Subscription, Resource group, Action group name, and Display name boxes.":::
- ![The "Add action group" dialog box](./media/action-groups/action-group-1-basics.png)
+### Configure notifications
+1. To open the **Notifications** tab, select **Next: Notifications**. Alternately, at the top of the page, select the **Notifications** tab.
-### Configure notifications
+1. Define a list of notifications to send when an alert is triggered. Provide the following information for each notification:
-1. Click the **Next: Notifications >** button to move to the **Notifications** tab, or select the **Notifications** tab at the top of the screen.
+ - **Notification type**: Select the type of notification that you want to send. The available options are:
-1. Define a list of notifications to send when an alert is triggered. Provide the following for each notification:
+ - **Email Azure Resource Manager Role**: Send an email to users who are assigned to certain subscription-level Azure Resource Manager roles.
+ - **Email/SMS message/Push/Voice**: Send various notification types to specific recipients.
- a. **Notification type**: Select the type of notification you want to send. The available options are:
- * Email Azure Resource Manager Role - Send an email to users assigned to certain subscription-level ARM roles.
- * Email/SMS/Push/Voice - Send these notification types to specific recipients.
+ - **Name**: Enter a unique name for the notification.
- b. **Name**: Enter a unique name for the notification.
+ - **Details**: Based on the selected notification type, enter an email address, phone number, or other information.
- c. **Details**: Based on the selected notification type, enter an email address, phone number, etc.
+ - **Common alert schema**: You can choose to turn on the common alert schema, which provides the advantage of having a single extensible and unified alert payload across all the alert services in Monitor. For more information about this schema, see [Common alert schema](./alerts-common-schema.md).
- d. **Common alert schema**: You can choose to enable the [common alert schema](./alerts-common-schema.md), which provides the advantage of having a single extensible and unified alert payload across all the alert services in Azure Monitor.
+ :::image type="content" source="./media/action-groups/action-group-2-notifications.png" alt-text="Screenshot of the Notifications tab of the Create action group dialog box. Configuration information for an email notification is visible.":::
- ![The Notifications tab](./media/action-groups/action-group-2-notifications.png)
+1. Select **OK**.
### Configure actions
-1. Click the **Next: Actions >** button to move to the **Actions** tab, or select the **Actions** tab at the top of the screen.
+1. To open the **Actions** tab, select **Next: Actions**. Alternately, at the top of the page, select the **Actions** tab.
+
+1. Define a list of actions to trigger when an alert is triggered. Provide the following information for each action:
-1. Define a list of actions to trigger when an alert is triggered. Provide the following for each action:
+ - **Action type**: Select from the following types of actions:
- a. **Action type**: Select Automation Runbook, Azure Function, ITSM, Logic App, Secure Webhook, Webhook.
+ - An Azure Automation runbook
+ - An Azure Functions function
+ - A notification that's sent to Azure Event Hubs
+ - A notification that's sent to an IT service management (ITSM) tool
+ - An Azure Logic Apps workflow
+ - A secure webhook
+ - A webhook
- b. **Name**: Enter a unique name for the action.
+ - **Name**: Enter a unique name for the action.
- c. **Details**: Based on the action type, enter a webhook URI, Azure app, ITSM connection, or Automation Runbook. For ITSM Action, additionally specify **Work Item** and other fields your ITSM tool requires.
+ - **Details**: Enter appropriate information for your selected action type. For instance, you might enter a webhook URI, the name of an Azure app, an ITSM connection, or an Automation runbook. For an ITSM action, also enter values for **Work item** and other fields that your ITSM tool requires.
- d. **Common alert schema**: You can choose to enable the [common alert schema](./alerts-common-schema.md), which provides the advantage of having a single extensible and unified alert payload across all the alert services in Azure Monitor.
+ - **Common alert schema**: You can choose to turn on the common alert schema, which provides the advantage of having a single extensible and unified alert payload across all the alert services in Monitor. For more information about this schema, see [Common alert schema](./alerts-common-schema.md).
- ![The Actions tab](./media/action-groups/action-group-3-actions.png)
+ :::image type="content" source="./media/action-groups/action-group-3-actions.png" alt-text="Screenshot of the Actions tab of the Create action group dialog box. Several options are visible in the Action type list.":::
### Create the action group
-1. You can explore the **Tags** settings if you like. This lets you associate key/value pairs to the action group for your categorization and is a feature available for any Azure resource.
+1. If you'd like to assign a key-value pair to the action group, select **Next: Tags** or the **Tags** tab. Otherwise, skip this step. By using tags, you can categorize your Azure resources. Tags are available for all Azure resources, resource groups, and subscriptions.
- ![The Tags tab](./media/action-groups/action-group-4-tags.png)
+ :::image type="content" source="./media/action-groups/action-group-4-tags.png" alt-text="Screenshot of the Tags tab of the Create action group dialog box. Values are visible in the Name and Value boxes.":::
-1. Click **Review + create** to review the settings. This will do a quick validation of your inputs to make sure all the required fields are selected. If there are issues, they'll be reported here. Once you've reviewed the settings, click **Create** to provision the action group.
+1. To review your settings, select **Review + create**. This step quickly checks your inputs to make sure you've entered all required information. If there are issues, they're reported here. After you've reviewed the settings, select **Create** to create the action group.
- ![The Review + create tab](./media/action-groups/action-group-5-review.png)
+ :::image type="content" source="./media/action-groups/action-group-5-review.png" alt-text="Screenshot of the Review + create tab of the Create action group dialog box. All configured values are visible.":::
> [!NOTE]
-> When you configure an action to notify a person by email or SMS, they receive a confirmation indicating they have been added to the action group.
+>
+> When you configure an action to notify a person by email or SMS, they receive a confirmation indicating that they have been added to the action group.
+
+### Test an action group in the Azure portal (preview)
+
+When you create or update an action group in the Azure portal, you can **test** the action group.
-### Test an action group in the Azure portal (Preview)
+1. Define an action, as described in the previous few sections. Then select **Review + create**.
-When creating or updating an action group in the Azure portal, you can **test** the action group.
-1. After defining an action, click on **Review + create**. Select *Test action group*.
+1. On the page that lists the information that you entered, select **Test action group**.
- ![The Test Action Group](./media/action-groups/test-action-group.png)
+ :::image type="content" source="./media/action-groups/test-action-group.png" alt-text="Screenshot of the Review + create tab of the Create action group dialog box. A Test action group button is visible.":::
-1. Select the *sample type* and select the notification and action types that you want to test and select **Test**.
+1. Select a sample type and the notification and action types that you want to test. Then select **Test**.
- ![Select Sample Type + notification + action type](./media/action-groups/test-sample-action-group.png)
+ :::image type="content" source="./media/action-groups/test-sample-action-group.png" alt-text="Screenshot of the Test sample action group page. An email notification type and a webhook action type are visible.":::
-1. If you close the window or select **Back to test setup** while the test is running, the test is stopped, and you won't get test results.
+1. If you close the window or select **Back to test setup** while the test is running, the test is stopped, and you don't get test results.
- ![Stop running test](./media/action-groups/stop-running-test.png)
+ :::image type="content" source="./media/action-groups/stop-running-test.png" alt-text="Screenshot of the Test sample action group page. A dialog box contains a Stop button and asks the user about stopping the test.":::
-1. When the test is complete either a **Success** or **Failed** test status is displayed. If the test failed, you could select *View details* to get more information.
- ![Test sample failed](./media/action-groups/test-sample-failed.png)
+1. When the test is complete, a test status of either **Success** or **Failed** appears. If the test failed and you'd like to get more information, select **View details**.
-You can use the information in the **Error details section**, to understand the issue so that you can edit and test the action group again.
-To allow you to check the action groups are working as expected before you enable them in a production environment, you'll get email and SMS alerts with the subject: Test.
+ :::image type="content" source="./media/action-groups/test-sample-failed.png" alt-text="Screenshot of the Test sample action group page. Error details are visible, and a white X on a red background indicates that a test failed.":::
-All the details and links in Test email notifications for the alerts fired are a sample set for reference.
+You can use the information in the **Error details** section to understand the issue. Then you can edit and test the action group again.
+
+When you run a test and select a notification type, you get a message with "Test" in the subject. The tests provide a way to check that your action group works as expected before you enable it in a production environment. All the details and links in test email notifications are from a sample reference set.
#### Azure Resource Manager role membership requirements
-The following table describes the role membership requirements to use the *test actions* functionality
-| User's role membersip | Existing Action Group | Existing Resource Group and new Action Group | New Resource Group and new Action Group |
-| - | - | -- | - |
-| Subscription Contribuutor | Supported | Supported | Supported |
-| Resource Group Contributor | Supported | Supported | Not Applicable |
-| Action Group resource Contributor | Supported | Not Applicable | Not Applicable |
-| Azure Monitor Contributor | Supported | Supported | Not Applicable |
-| Custom role | Supported | Supported | Not Applicable |
+The following table describes the role membership requirements that are needed for the *test actions* functionality:
+| User's role membership | Existing action group | Existing resource group and new action group | New resource group and new action group |
+| - | - | -- | - |
+| Subscription contributor | Supported | Supported | Supported |
+| Resource group contributor | Supported | Supported | Not applicable |
+| Action group resource contributor | Supported | Not applicable | Not applicable |
+| Azure Monitor contributor | Supported | Supported | Not applicable |
+| Custom role | Supported | Supported | Not applicable |
> [!NOTE]
-> You may perform a limited number of tests over a time period. See the [rate limiting information](./alerts-rate-limiting.md) article.
>
-> You can opt in or opt out to the common alert schema through Action Groups, on the portal. You can [find common schema samples for test action groups for all the sample types](./alerts-common-schema-test-action-definitions.md).
-> You can [find non-common schema alert definitions](./alerts-non-common-schema-definitions.md).
+> You can run a limited number of tests per time period. To check which limits apply to your situation, see [Rate limiting for voice, SMS, emails, Azure App push notifications, and webhook posts](./alerts-rate-limiting.md).
+>
+> When you configure an action group in the portal, you can opt in or out of the common alert schema.
+>
+> - To find common schema samples for all sample types, see [Common alert schema definitions for Test Action Group](./alerts-common-schema-test-action-definitions.md).
+> - To find non-common schema alert definitions, see [Non-common alert schema definitions for Test Action Group](./alerts-non-common-schema-definitions.md).
## Manage your action groups
-After you create an action group, you can view **Action groups** by selecting **Manage actions** from the **Alerts** landing page in **Monitor** pane. Select the action group you want to manage to:
+After you create an action group, you can view it in the portal:
-* Add, edit, or remove actions.
-* Delete the action group.
+1. From the **Monitor** page, select **Alerts**.
+1. Select **Manage actions**.
+1. Select the action group that you want to manage. You can:
+
+ - Add, edit, or remove actions.
+ - Delete the action group.
## Action-specific information
+The following sections provide information about the various actions and notifications that you can configure in an action group.
+ > [!NOTE]
-> See [Subscription Service Limits for Monitoring](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-monitor-limits) for numeric limits on each of the items below.
+>
+> To check numeric limits on each type of action or notification, see [Subscription service limits for monitoring](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-monitor-limits).
-### Automation Runbook
-Refer to the [Azure subscription service limits](../../azure-resource-manager/management/azure-subscription-service-limits.md) for limits on Runbook payloads.
+### Automation runbook
-You may have a limited number of Runbook actions in an Action Group.
+To check limits on Automation runbook payloads, see [Automation limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#automation-limits).
-### Azure app Push Notifications
-Enable push notifications to the [Azure mobile app](https://azure.microsoft.com/features/azure-portal/mobile-app/) by providing the email address you use as your account ID when configuring the Azure mobile app.
+You may have a limited number of runbook actions per action group.
-You may have a limited number of Azure app actions in an Action Group.
+### Azure app push notifications
+
+To enable push notifications to the Azure mobile app, provide the email address that you use as your account ID when you configure the Azure mobile app. For more information about the Azure mobile app, see [Get the Azure mobile app](https://azure.microsoft.com/features/azure-portal/mobile-app/).
+
+You might have a limited number of Azure app actions per action group.
### Email
-Emails will be sent from the following email addresses. Ensure that your email filtering is configured appropriately
+
+Ensure that your email filtering is configured appropriately. Emails are sent from the following email addresses:
+
- azure-noreply@microsoft.com - azureemail-noreply@microsoft.com - alerts-noreply@mail.windowsazure.com
-You may have a limited number of email actions in an Action Group. See the [rate limiting information](./alerts-rate-limiting.md) article.
+You may have a limited number of email actions per action group. For information about rate limits, see [Rate limiting for voice, SMS, emails, Azure App push notifications, and webhook posts](./alerts-rate-limiting.md).
-### Email Azure Resource Manager Role
-Send email to the members of the subscription's role. Email will only be sent to **Azure AD user** members of the role. Email won't be sent to Azure AD groups or service principals.
+### Email Azure Resource Manager role
+
+When you use this type of notification, you can send email to the members of a subscription's role. Email is only sent to Azure Active Directory (Azure AD) **user** members of the role. Email isn't sent to Azure AD groups or service principals.
A notification email is sent only to the *primary email* address.
-If you aren't receiving Notifications on your *primary email*, then you can try following steps:
+If your *primary email* doesn't receive notifications, take the following steps:
+
+1. In the Azure portal, go to **Active Directory**.
+1. On the left, select **All users**. On the right, a list of users appears.
+1. Select the user whose *primary email* you'd like to review.
-1. In Azure portal, go to *Active Directory*.
-2. Click on All users (in left pane), you will see list of users (in right pane).
-3. Select the user for which you want to review the *primary email* information.
+ :::image type="content" source="media/action-groups/active-directory-user-profile.png" alt-text="Screenshot of the All users page in the Azure portal. On the left, the All users item is selected. Information about one user is visible but is indecipherable." border="true":::
- :::image type="content" source="media/action-groups/active-directory-user-profile.png" alt-text="Example of how to review user profile." border="true":::
+1. In the user profile, look under **Contact info** for an **Email** value. If it's blank:
-4. In User profile under Contact Info if "Email" tab is blank then click on *edit* button on the top and add your *primary email* and hit *save* button on the top.
+ 1. At the top of the page, select **Edit**.
+ 1. Enter an email address.
+ 1. At the top of the page, select **Save**.
- :::image type="content" source="media/action-groups/active-directory-add-primary-email.png" alt-text="Example of how to add primary email." border="true":::
+ :::image type="content" source="media/action-groups/active-directory-add-primary-email.png" alt-text="Screenshot of a user profile page in the Azure portal. The Edit button and the Email box are called out." border="true":::
-You may have a limited number of email actions in an Action Group. See the [rate limiting information](./alerts-rate-limiting.md) article.
+You may have a limited number of email actions per action group. To check which limits apply to your situation, see [Rate limiting for voice, SMS, emails, Azure App push notifications, and webhook posts](./alerts-rate-limiting.md).
-While setting up *Email ARM Role*, you need to make sure below three conditions are met:
+When you set up the Azure Resource Manager role:
-1. The type of the entity being assigned to the role needs to be **"User"**.
-2. The assignment needs to be done at the **subscription** level.
-3. The user needs to have an email configured in their **AAD profile**.
+1. Assign an entity of type **"User"** to the role.
+1. Make the assignment at the **subscription** level.
+1. Make sure an email address is configured for the user in their **Azure AD profile**.
> [!NOTE]
-> It can take upto **24 hours** for customer to start receiving notifications after they add new ARM Role to their subscription.
+>
+> It can take up to **24 hours** for a customer to start receiving notifications after they add a new Azure Resource Manager role to their subscription.
+
+### Event Hubs
+
+An Event Hubs action publishes notifications to Event Hubs. For more information about Event Hubs, see [Azure Event HubsΓÇöA big data streaming platform and event ingestion service](../../event-hubs/event-hubs-about.md). You can subscribe to the alert notification stream from your event receiver.
-### Event Hub
-An event hub action publishes notifications to [Azure Event Hubs](~/articles/event-hubs/event-hubs-about.md). You may then subscribe to the alert notification stream from your event receiver.
+### Functions
-### Function
-Calls an existing HTTP trigger endpoint in [Azure Functions](../../azure-functions/functions-get-started.md). To handle a request, your endpoint must handle the HTTP POST verb.
+An action that uses Functions calls an existing HTTP trigger endpoint in Functions. For more information about Functions, see [Azure Functions](../../azure-functions/functions-get-started.md). To handle a request, your endpoint must handle the HTTP POST verb.
-When defining the Function action the Function's httptrigger endpoint and access key are saved in the action definition. For example: `https://azfunctionurl.azurewebsites.net/api/httptrigger?code=this_is_access_key`. If you change the access key for the function, you will need to remove and recreate the Function action in the Action Group.
+When you define the function action, the function's HTTP trigger endpoint and access key are saved in the action definition, for example, `https://azfunctionurl.azurewebsites.net/api/httptrigger?code=<access_key>`. If you change the access key for the function, you need to remove and recreate the function action in the action group.
-You may have a limited number of Function actions in an Action Group.
+You may have a limited number of function actions per action group.
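As noted above, the function's endpoint must handle the HTTP POST verb. A minimal sketch of parsing the POSTed body in Python follows; it assumes the alert uses the common alert schema (`azureMonitorCommonAlertSchema`), and the sample payload and field selection are illustrative only — adjust the field paths if your alerts emit a different schema.

```python
import json

def handle_alert(body: bytes) -> dict:
    """Extract a few fields from the JSON payload an action group POSTs.

    Assumes the common alert schema ("azureMonitorCommonAlertSchema");
    adjust the field paths if your alerts use another schema.
    """
    payload = json.loads(body)
    essentials = payload.get("data", {}).get("essentials", {})
    return {
        "rule": essentials.get("alertRule"),
        "severity": essentials.get("severity"),
    }

# Example payload, trimmed to the fields used above.
sample = json.dumps({
    "schemaId": "azureMonitorCommonAlertSchema",
    "data": {"essentials": {"alertRule": "cpu-high", "severity": "Sev2"}},
}).encode()
print(handle_alert(sample))  # → {'rule': 'cpu-high', 'severity': 'Sev2'}
```

In a real function, `handle_alert` would run inside your HTTP trigger and the return value would drive whatever automation the alert should start.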
### ITSM
-ITSM Action requires an ITSM Connection. Learn how to create an [ITSM Connection](./itsmc-overview.md).
-You may have a limited number of ITSM actions in an Action Group.
+An ITSM action requires an ITSM connection. To learn how to create an ITSM connection, see [ITSM integration](./itsmc-overview.md).
+
+You might have a limited number of ITSM actions per action group.
+
+### Logic Apps
+
+You may have a limited number of Logic Apps actions per action group.
-### Logic App
-You may have a limited number of Logic App actions in an Action Group.
+### Secure webhook
-### Secure Webhook
-The Action Groups Secure Webhook action enables you to take advantage of Azure Active Directory to secure the connection between your action group and your protected web API (webhook endpoint). The overall workflow for taking advantage of this functionality is described below. For an overview of Azure AD Applications and service principals, see [Microsoft identity platform (v2.0) overview](../../active-directory/develop/v2-overview.md).
+When you use a secure webhook action, you can use Azure AD to secure the connection between your action group and your protected web API, which is your webhook endpoint. For an overview of Azure AD applications and service principals, see [Microsoft identity platform (v2.0) overview](../../active-directory/develop/v2-overview.md). Follow these steps to take advantage of the secure webhook functionality.
> [!NOTE]
-> Using the webhook action requires that the target webhook endpoint be capable of processing the various JSON payloads emitted by different alert sources.
-> If the webhook endpoint is expecting a specific schema (for example Microsoft Teams) you should use the Logic App action to transform the alert schema to meet the target webhook's expectations.
+>
+> If you use the webhook action, your target webhook endpoint needs to be able to process the various JSON payloads that different alert sources emit. If the webhook endpoint expects a specific schema, for example, the Microsoft Teams schema, use the Logic Apps action to transform the alert schema to meet the target webhook's expectations.
+
+1. Create an Azure AD application for your protected web API. For detailed information, see [Protected web API: App registration](../../active-directory/develop/scenario-protected-web-api-app-registration.md). Configure your protected API to be called by a daemon app, and expose application permissions, not delegated permissions. For more information about these permissions, see [If your web API is called by a service or daemon app](../../active-directory/develop/scenario-protected-web-api-app-registration.md#if-your-web-api-is-called-by-a-service-or-daemon-app).
-1. Create an Azure AD Application for your protected web API. See [Protected web API: App registration](../../active-directory/develop/scenario-protected-web-api-app-registration.md).
- - Configure your protected API to be [called by a daemon app](../../active-directory/develop/scenario-protected-web-api-app-registration.md#if-your-web-api-is-called-by-a-service-or-daemon-app).
+ > [!NOTE]
+ >
+ > Configure your protected web API to accept V2.0 access tokens. For detailed information about this setting, see [Azure Active Directory app manifest](../../active-directory/develop/reference-app-manifest.md#accesstokenacceptedversion-attribute).
- > [!NOTE]
- > Your protected web API must be configured to [accept V2.0 access tokens](../../active-directory/develop/reference-app-manifest.md#accesstokenacceptedversion-attribute).
+1. To enable the action group to use your Azure AD application, use the PowerShell script that follows this procedure.
-2. Enable Action Group to use your Azure AD Application.
+ > [!NOTE]
+ >
+ > You must be assigned the [Azure AD Application Administrator role](../../active-directory/roles/permissions-reference.md#all-roles) to run this script.
- > [!NOTE]
- > You must be a member of the [Azure AD Application Administrator role](../../active-directory/roles/permissions-reference.md#all-roles) to execute this script.
+ 1. Modify the PowerShell script's `Connect-AzureAD` call to use your Azure AD tenant ID.
+ 1. Modify the PowerShell script's `$myAzureADApplicationObjectId` variable to use the Object ID of your Azure AD application.
+ 1. Run the modified script.
- - Modify the PowerShell script's Connect-AzureAD call to use your Azure AD Tenant ID.
- - Modify the PowerShell script's variable $myAzureADApplicationObjectId to use the Object ID of your Azure AD Application.
- - Run the modified script.
+ > [!NOTE]
+ >
+ > The service principal must be assigned the **owner role** of the Azure AD application to be able to create or modify the secure webhook action in the action group.
- > [!NOTE]
- > Service principle need to be a member of **owner role** of Azure AD application to be able to create or modify the Secure Webhook action in the action group.
+1. Configure the secure webhook action.
-3. Configure the Action Group Secure Webhook action.
- - Copy the value $myApp.ObjectId from the script and enter it in the Application Object ID field in the Webhook action definition.
+ 1. Copy the `$myApp.ObjectId` value that's in the script.
+ 1. In the webhook action definition, in the **Object Id** box, enter the value that you copied.
- ![Secure Webhook action](./media/action-groups/action-groups-secure-webhook.png)
+ :::image type="content" source="./media/action-groups/action-groups-secure-webhook.png" alt-text="Screenshot of the Secured Webhook dialog box in the Azure portal. The Object ID box is visible." border="true":::
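On the receiving side, the protected web API authorizes a call by checking the claims in the Azure AD access token. The Python below is only a sketch of that check: the audience URI and sample token are hypothetical, the role name matches the one the PowerShell script creates, and a real endpoint must verify the token's signature (for example, with a JWT library) before trusting any claim.

```python
import base64
import json

def decode_claims(jwt: str) -> dict:
    """Decode the payload segment of a JWT WITHOUT verifying it.

    For illustration only: a production endpoint must validate the
    signature before trusting these claims.
    """
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def is_authorized(claims: dict, app_id_uri: str,
                  role: str = "ActionGroupsSecureWebhook") -> bool:
    # The token should be issued for your API (the aud claim) and carry
    # the app role added to your Azure AD application.
    return claims.get("aud") == app_id_uri and role in claims.get("roles", [])

# Hypothetical token: a fake header and signature around a sample payload.
body = base64.urlsafe_b64encode(json.dumps(
    {"aud": "api://my-protected-api", "roles": ["ActionGroupsSecureWebhook"]}
).encode()).decode().rstrip("=")
sample_token = "eyJhbGciOiJSUzI1NiJ9." + body + ".fake-signature"
print(is_authorized(decode_claims(sample_token), "api://my-protected-api"))  # → True
```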
-#### Secure Webhook PowerShell Script
+#### Secure webhook PowerShell script
```PowerShell
Connect-AzureAD -TenantId "<provide your Azure AD tenant ID here>"
-# This is your Azure AD Application's ObjectId.
+# Define your Azure AD application's ObjectId.
$myAzureADApplicationObjectId = "<the Object ID of your Azure AD Application>"
-# This is the Action Group Azure AD AppId
+# Define the action group Azure AD AppId.
$actionGroupsAppId = "461e8683-5575-4561-ac7f-899cc907d62a"
-# This is the name of the new role we will add to your Azure AD Application
+# Define the name of the new role that gets added to your Azure AD application.
$actionGroupRoleName = "ActionGroupsSecureWebhook"
-# Create an application role of given name and description
+# Create an application role with the given name and description.
Function CreateAppRole([string] $Name, [string] $Description)
{
    $appRole = New-Object Microsoft.Open.AzureAD.Model.AppRole
    return $appRole
}
-# Get my Azure AD Application, it's roles and service principal
+# Get your Azure AD application, its roles, and its service principal.
$myApp = Get-AzureADApplication -ObjectId $myAzureADApplicationObjectId
$myAppRoles = $myApp.AppRoles
$actionGroupsSP = Get-AzureADServicePrincipal -Filter ("appId eq '" + $actionGroupsAppId + "'")

Write-Host "App Roles before addition of new role.."
Write-Host $myAppRoles
-# Create the role if it doesn't exist
+# Create the role if it doesn't exist.
if ($myAppRoles -match "ActionGroupsSecureWebhook")
{
    Write-Host "The Action Group role is already defined.`n"
}
else
{
    $myServicePrincipal = Get-AzureADServicePrincipal -Filter ("appId eq '" + $myApp.AppId + "'")
- # Add our new role to the Azure AD Application
+ # Add the new role to the Azure AD application.
    $newRole = CreateAppRole -Name $actionGroupRoleName -Description "This is a role for Action Group to join"
    $myAppRoles.Add($newRole)
    Set-AzureADApplication -ObjectId $myApp.ObjectId -AppRoles $myAppRoles
}
-# Create the service principal if it doesn't exist
+# Create the service principal if it doesn't exist.
if ($actionGroupsSP -match "AzNS AAD Webhook")
{
    Write-Host "The Service principal is already defined.`n"
}
else
{
- # Create a service principal for the Action Group Azure AD Application and add it to the role
+ # Create a service principal for the action group Azure AD application and add it to the role.
    $actionGroupsSP = New-AzureADServicePrincipal -AppId $actionGroupsAppId
}
Write-Host $myApp.AppRoles
```

### SMS
-See the [rate limiting information](./alerts-rate-limiting.md) and [SMS alert behavior](./alerts-sms-behavior.md) for additional important information.
-You may have a limited number of SMS actions in an Action Group.
+For information about rate limits, see [Rate limiting for voice, SMS, emails, Azure App push notifications, and webhook posts](./alerts-rate-limiting.md).
+
+For important information about using SMS notifications in action groups, see [SMS alert behavior in action groups](./alerts-sms-behavior.md).
+
+You might have a limited number of SMS actions per action group.
> [!NOTE]
-> If the Azure portal Action Group user interface does not let you select your country/region code, then SMS is not supported for your country/region. If your country/region code is not available, you can vote to have your country/region added at [user voice](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). In the meantime, a work around is to have your Action Group call a webhook to a third-party SMS provider with support in your country/region.
+>
+> If you can't select your country/region code in the Azure portal, SMS isn't supported for your country/region. If your country/region code isn't available, you can vote to have your country/region added at [Share your ideas](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). In the meantime, as a workaround, configure your action group to call a webhook to a third-party SMS provider that offers support in your country/region.
-Pricing for supported countries/regions is listed in the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
+For information about pricing for supported countries/regions, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
-**List of Countries where SMS Notification is supported**
+#### Countries with SMS notification support
-| Country Code | Country Name |
+| Country code | Country |
|:---|:---|
| 61 | Australia |
| 43 | Austria |
| 1 | United States |

### Voice
-See the [rate limiting information](./alerts-rate-limiting.md) article for additional important behavior.
-You may have a limited number of Voice actions in an Action Group.
+For important information about rate limits, see [Rate limiting for voice, SMS, emails, Azure App push notifications, and webhook posts](./alerts-rate-limiting.md).
+
+You might have a limited number of voice actions per action group.
> [!NOTE]
-> If the Azure portal Action Group user interface does not let you select your country/region code, then voice calls are not supported for your country/region. If your country/region code is not available, you can vote to have your country/region added at [user voice](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). In the meantime, a work around is to have your Action Group call a webhook to a third-party voice call provider with support in your country/region.
-> Only Country code supported today in Azure portal Action Group for Voice Notification is +1(United States).
+>
+> If you can't select your country/region code in the Azure portal, voice calls aren't supported for your country/region. If your country/region code isn't available, you can vote to have your country/region added at [Share your ideas](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). In the meantime, as a workaround, configure your action group to call a webhook to a third-party voice call provider that offers support in your country/region.
+>
+> The only country code that action groups currently support for voice notification is +1 for the United States.
-Pricing for supported countries/regions is listed in the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
+For information about pricing for supported countries/regions, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
### Webhook

> [!NOTE]
-> Using the webhook action requires that the target webhook endpoint be capable of processing the various JSON payloads emitted by different alert sources.
-> If the webhook endpoint is expecting a specific schema (for example Microsoft Teams) you should use the Logic App action to transform the alert schema to meet the target webhook's expectations.
+>
+> If you use the webhook action, your target webhook endpoint needs to be able to process the various JSON payloads that different alert sources emit. If the webhook endpoint expects a specific schema, for example, the Microsoft Teams schema, use the Logic Apps action to transform the alert schema to meet the target webhook's expectations.
+
+Webhook action groups use the following rules:
-Webhooks are processed using the following rules
-- A webhook call is attempted a maximum of three times.
-- The call will be retried if a response is not received within the timeout period or one of the following HTTP status codes is returned: 408, 429, 503 or 504.
-- The first call will wait 10 seconds for a response.
-- The second and third attempts will wait 30 seconds for a response.
-- After the three attempts to call the webhook have failed no Action Group will call the endpoint for 15 minutes.
+- A webhook call is attempted at most three times.
-Please see [Action Group IP Addresses](../app/ip-addresses.md) for source IP address ranges.
+- The first call waits 10 seconds for a response.
+- The second and third attempts wait 30 seconds for a response.
+
+- The call is retried if any of the following conditions are met:
+
+ - A response isn't received within the timeout period.
+ - One of the following HTTP status codes is returned: 408, 429, 503, or 504.
+
+- If three attempts to call the webhook fail, no action group calls the endpoint for 15 minutes.
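The retry rules above can be sketched as a small decision function. This Python is illustrative only — it mirrors the documented rules, not the service's actual implementation.

```python
from typing import Optional

RETRYABLE_STATUS = {408, 429, 503, 504}
ATTEMPT_TIMEOUTS = (10, 30, 30)  # seconds to wait: first, second, third call

def should_retry(attempt: int, status: Optional[int]) -> bool:
    """Decide whether another webhook call should be attempted.

    `attempt` is the 1-based number of the call that just finished;
    `status` is None when no response arrived within the timeout.
    """
    if attempt >= len(ATTEMPT_TIMEOUTS):
        return False  # at most three calls are made
    return status is None or status in RETRYABLE_STATUS

print(should_retry(1, 503))   # → True (retryable status code)
print(should_retry(1, 200))   # → False (success; no retry needed)
print(should_retry(3, None))  # → False (the third attempt was the last)
```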
+
+For source IP address ranges, see [Action group IP addresses](../app/ip-addresses.md).
## Next steps
-* Learn more about [SMS alert behavior](./alerts-sms-behavior.md).
-* Gain an [understanding of the activity log alert webhook schema](./activity-log-alerts-webhook.md).
-* Learn more about [ITSM Connector](./itsmc-overview.md).
-* Learn more about [rate limiting](./alerts-rate-limiting.md) on alerts.
-* Get an [overview of activity log alerts](./alerts-overview.md), and learn how to receive alerts.
-* Learn how to [configure alerts whenever a service health notification is posted](../../service-health/alerts-activity-log-service-notifications-portal.md).
+
+- Learn more about [SMS alert behavior](./alerts-sms-behavior.md).
+- Gain an [understanding of the activity log alert webhook schema](./activity-log-alerts-webhook.md).
+- Learn more about [ITSM Connector](./itsmc-overview.md).
+- Learn more about [rate limiting](./alerts-rate-limiting.md) on alerts.
+- Get an [overview of activity log alerts](./alerts-overview.md), and learn how to receive alerts.
+- Learn how to [configure alerts whenever a Service Health notification is posted](../../service-health/alerts-activity-log-service-notifications-portal.md).
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
Metric counts such as request rate and exception rate are adjusted to compensate
> [!NOTE]
> This section applies to ASP.NET applications, not to ASP.NET Core applications. [Learn about configuring adaptive sampling for ASP.NET Core applications later in this document.](#configuring-adaptive-sampling-for-aspnet-core-applications)
+> With ASP.NET Core and Microsoft.ApplicationInsights.AspNetCore version 2.15.0 and later, you can configure Application Insights options via appsettings.json.
+
In [`ApplicationInsights.config`](./configuration-with-applicationinsights-config.md), you can adjust several parameters in the `AdaptiveSamplingTelemetryProcessor` node. The figures shown are the default values:

* `<MaxTelemetryItemsPerSecond>5</MaxTelemetryItemsPerSecond>`
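For ASP.NET Core, a sketch of what such an appsettings.json section might look like (the connection string value is a placeholder; `EnableAdaptiveSampling` is one of the options the SDK binds from the `ApplicationInsights` section):

```json
{
  "ApplicationInsights": {
    "ConnectionString": "<your connection string>",
    "EnableAdaptiveSampling": true
  }
}
```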
azure-monitor Statsbeat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/statsbeat.md
N/A
|Metric Name|Unit|Supported dimensions|
|--|--|--|
|Request Success Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`|
-|Requests Failure Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`|
+|Requests Failure Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Status Code`|
|Request Duration|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`|
-|Retry Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`|
-|Throttle Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`|
-|Exception Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`|
+|Retry Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Status Code`|
+|Throttle Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Status Code`|
+|Exception Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Exception Type`|
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]

#### Attach Statsbeat
You can also disable this feature by setting the environment variable `APPLICATI
#### [Node](#tab/node)
-N/A
+Not supported yet.
#### [Python](#tab/python)
-N/A
+Not supported yet.
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
description: List of metrics available for each resource type with Azure Monitor
Previously updated : 04/12/2022
Last updated : 06/01/2022
The Azure Monitor agent replaces the Azure Diagnostics extension and Log Analyti
This latest update adds a new column and reorders the metrics to be alphabetical. The additional information means that the tables might have a horizontal scroll bar at the bottom, depending on the width of your browser window. If you seem to be missing information, use the scroll bar to see the entirety of the table.
+## Microsoft.AAD/DomainServices
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|\DirectoryServices(NTDS)\LDAP Searches/sec|Yes|NTDS - LDAP Searches/sec|CountPerSecond|Average|This metric indicates the average number of searches per second for the NTDS object. It is backed by performance counter data from the domain controller, and can be filtered or split by role instance.|No Dimensions|
+|\DirectoryServices(NTDS)\LDAP Successful Binds/sec|Yes|NTDS - LDAP Successful Binds/sec|CountPerSecond|Average|This metric indicates the number of LDAP successful binds per second for the NTDS object. It is backed by performance counter data from the domain controller, and can be filtered or split by role instance.|No Dimensions|
+|\DNS\Total Query Received/sec|Yes|DNS - Total Query Received/sec|CountPerSecond|Average|This metric indicates the average number of queries received by the DNS server each second. It is backed by performance counter data from the domain controller, and can be filtered or split by role instance.|No Dimensions|
+|\DNS\Total Response Sent/sec|Yes|Total Response Sent/sec|CountPerSecond|Average|This metric indicates the average number of responses sent by the DNS server each second. It is backed by performance counter data from the domain controller, and can be filtered or split by role instance.|No Dimensions|
+|\Memory\% Committed Bytes In Use|Yes|% Committed Bytes In Use|Percent|Average|This metric indicates the ratio of Memory\Committed Bytes to the Memory\Commit Limit. Committed memory is the physical memory in use for which space has been reserved in the paging file should it need to be written to disk. The commit limit is determined by the size of the paging file. If the paging file is enlarged, the commit limit increases, and the ratio is reduced. This counter displays the current percentage value only; it is not an average. It is backed by performance counter data from the domain controller, and can be filtered or split by role instance.|No Dimensions|
+|\Process(dns)\% Processor Time|Yes|% Processor Time (dns)|Percent|Average|This metric indicates the percentage of elapsed time that all threads of the dns process used the processor to execute instructions. An instruction is the basic unit of execution in a computer, a thread is the object that executes instructions, and a process is the object created when a program is run. Code executed to handle some hardware interrupts and trap conditions is included in this count. It is backed by performance counter data from the domain controller, and can be filtered or split by role instance.|No Dimensions|
+|\Process(lsass)\% Processor Time|Yes|% Processor Time (lsass)|Percent|Average|This metric indicates the percentage of elapsed time that all threads of the lsass process used the processor to execute instructions. An instruction is the basic unit of execution in a computer, a thread is the object that executes instructions, and a process is the object created when a program is run. Code executed to handle some hardware interrupts and trap conditions is included in this count. It is backed by performance counter data from the domain controller, and can be filtered or split by role instance.|No Dimensions|
+|\Processor(_Total)\% Processor Time|Yes|Total Processor Time|Percent|Average|This metric indicates the percentage of elapsed time that the processor spends executing a non-idle thread. It is calculated by measuring the percentage of time that the processor spends executing the idle thread and then subtracting that value from 100%. (Each processor has an idle thread that consumes cycles when no other threads are ready to run.) This counter is the primary indicator of processor activity, and displays the average percentage of busy time observed during the sample interval. Note that the accounting calculation of whether the processor is idle is performed at an internal sampling interval of the system clock (10 ms). On today's fast processors, % Processor Time can therefore underestimate processor utilization, because the processor may spend a lot of time servicing threads between system clock samples. Workload-based timer applications are one example of applications that are more likely to be measured inaccurately, because timers are signaled just after the sample is taken. It is backed by performance counter data from the domain controller, and can be filtered or split by role instance.|No Dimensions|
+|\Security System-Wide Statistics\Kerberos Authentications|Yes|Kerberos Authentications|CountPerSecond|Average|This metric indicates the number of times per second that clients use a ticket to authenticate to this computer. It is backed by performance counter data from the domain controller, and can be filtered or split by role instance.|No Dimensions|
+|\Security System-Wide Statistics\NTLM Authentications|Yes|NTLM Authentications|CountPerSecond|Average|This metric indicates the number of NTLM authentications processed per second for Active Directory on this domain controller or for local accounts on this member server. It is backed by performance counter data from the domain controller, and can be filtered or split by role instance.|No Dimensions|
+
## microsoft.aadiam/azureADMetrics

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|ThrottledRequests|No|ThrottledRequests|Count|Average|azureADMetrics type metric|No Dimensions|
+|CACompliantDeviceSuccessCount|Yes|CACompliantDeviceSuccessCount|Count|Count|CA compliant device success count for Azure AD|No Dimensions|
+|CAManagedDeviceSuccessCount|No|CAManagedDeviceSuccessCount|Count|Count|CA domain join device success count for Azure AD|No Dimensions|
+|MFAAttemptCount|No|MFAAttemptCount|Count|Count|MFA attempt count for Azure AD|No Dimensions|
+|MFAFailureCount|No|MFAFailureCount|Count|Count|MFA failure count for Azure AD|No Dimensions|
+|MFASuccessCount|No|MFASuccessCount|Count|Count|MFA success count for Azure AD|No Dimensions|
## Microsoft.AnalysisServices/servers
|WebSocketMessages|Yes|WebSocket Messages (Preview)|Count|Total|Count of WebSocket messages based on selected source and destination|Location, Source, Destination|
+## Microsoft.App/containerapps
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|Replicas|Yes|Replica Count|Count|Maximum|Number of replicas of the container app|revisionName|
+|Requests|Yes|Requests|Count|Total|Requests processed|revisionName, podName, statusCodeCategory, statusCode|
+|RestartCount|Yes|Replica Restart Count|Count|Maximum|Restart count of container app replicas|revisionName, podName|
+|RxBytes|Yes|Network In Bytes|Bytes|Total|Network received bytes|revisionName, podName|
+|TxBytes|Yes|Network Out Bytes|Bytes|Total|Network transmitted bytes|revisionName, podName|
+|UsageNanoCores|Yes|CPU Usage|NanoCores|Average|CPU consumed by the container app, in nano cores. 1,000,000,000 nano cores = 1 core|revisionName, podName|
+|WorkingSetBytes|Yes|Memory Working Set Bytes|Bytes|Average|Container App working set memory used in bytes.|revisionName, podName|
+
+
## Microsoft.AppConfiguration/configurationStores

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|APIRequestAuthentication|No|Authentication API Requests|Count|Count|Count of all requests against the Communication Services Authentication endpoint.|Operation, StatusCode, StatusCodeClass|
+|APIRequestCallRecording|Yes|Call Recording API Requests|Count|Count|Count of all requests against the Communication Services Call Recording endpoint.|Operation, StatusCode, StatusCodeClass|
|APIRequestChat|Yes|Chat API Requests|Count|Count|Count of all requests against the Communication Services Chat endpoint.|Operation, StatusCode, StatusCodeClass|
|APIRequestNetworkTraversal|No|Network Traversal API Requests|Count|Count|Count of all requests against the Communication Services Network Traversal endpoint.|Operation, StatusCode, StatusCodeClass|
|APIRequestSMS|Yes|SMS API Requests|Count|Count|Count of all requests against the Communication Services SMS endpoint.|Operation, StatusCode, StatusCodeClass, ErrorCode, NumberType|
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|Composite Disk Read Bytes/sec|No|Disk Read Bytes/sec(Preview)|Bytes|Average|Bytes/sec read from disk during monitoring period, please note, this metric is in preview and is subject to change before becoming generally available|No Dimensions|
-|Composite Disk Read Operations/sec|No|Disk Read Operations/sec(Preview)|Bytes|Average|Number of read IOs performed on a disk during monitoring period, please note, this metric is in preview and is subject to change before becoming generally available|No Dimensions|
-|Composite Disk Write Bytes/sec|No|Disk Write Bytes/sec(Preview)|Bytes|Average|Bytes/sec written to disk during monitoring period, please note, this metric is in preview and is subject to change before becoming generally available|No Dimensions|
-|Composite Disk Write Operations/sec|No|Disk Write Operations/sec(Preview)|Bytes|Average|Number of Write IOs performed on a disk during monitoring period, please note, this metric is in preview and is subject to change before becoming generally available|No Dimensions|
+|Composite Disk Read Bytes/sec|No|Disk Read Bytes/sec(Preview)|BytesPerSecond|Average|Bytes/sec read from disk during monitoring period, please note, this metric is in preview and is subject to change before becoming generally available|No Dimensions|
+|Composite Disk Read Operations/sec|No|Disk Read Operations/sec(Preview)|CountPerSecond|Average|Number of read IOs performed on a disk during monitoring period, please note, this metric is in preview and is subject to change before becoming generally available|No Dimensions|
+|Composite Disk Write Bytes/sec|No|Disk Write Bytes/sec(Preview)|BytesPerSecond|Average|Bytes/sec written to disk during monitoring period, please note, this metric is in preview and is subject to change before becoming generally available|No Dimensions|
+|Composite Disk Write Operations/sec|No|Disk Write Operations/sec(Preview)|CountPerSecond|Average|Number of Write IOs performed on a disk during monitoring period, please note, this metric is in preview and is subject to change before becoming generally available|No Dimensions|
+|DiskPaidBurstIOPS|No|Disk On-demand Burst Operations(Preview)|Count|Average|The accumulated operations of burst transactions used for disks with on-demand burst enabled. Emitted on an hour interval|No Dimensions|
## Microsoft.Compute/virtualMachines
|VM Cached IOPS Consumed Percentage|Yes|VM Cached IOPS Consumed Percentage|Percent|Average|Percentage of cached disk IOPS consumed by the VM|No Dimensions|
|VM Uncached Bandwidth Consumed Percentage|Yes|VM Uncached Bandwidth Consumed Percentage|Percent|Average|Percentage of uncached disk bandwidth consumed by the VM|No Dimensions|
|VM Uncached IOPS Consumed Percentage|Yes|VM Uncached IOPS Consumed Percentage|Percent|Average|Percentage of uncached disk IOPS consumed by the VM|No Dimensions|
+|VmAvailabilityMetric|Yes|VM Availability Metric (Preview)|Count|Average|Measure of Availability of Virtual machines over time. Note: This metric is previewed to only a small set of customers at the moment, as we prioritize improving data quality and consistency. As we improve our data standard, we will be rolling out this feature fleetwide in a phased manner.|No Dimensions|
-## Microsoft.Compute/virtualMachineScaleSets
+## Microsoft.Compute/virtualmachineScaleSets
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|VM Cached IOPS Consumed Percentage|Yes|VM Cached IOPS Consumed Percentage|Percent|Average|Percentage of cached disk IOPS consumed by the VM|VMName|
|VM Uncached Bandwidth Consumed Percentage|Yes|VM Uncached Bandwidth Consumed Percentage|Percent|Average|Percentage of uncached disk bandwidth consumed by the VM|VMName|
|VM Uncached IOPS Consumed Percentage|Yes|VM Uncached IOPS Consumed Percentage|Percent|Average|Percentage of uncached disk IOPS consumed by the VM|VMName|
+|VmAvailabilityMetric|Yes|VM Availability Metric (Preview)|Count|Average|Measure of Availability of Virtual machines over time. Note: This metric is previewed to only a small set of customers at the moment, as we prioritize improving data quality and consistency. As we improve our data standard, we will be rolling out this feature fleetwide in a phased manner.|VMName|
## Microsoft.Compute/virtualMachineScaleSets/virtualMachines
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|egressbps|Yes|Egress Mbps|BitsPerSecond|Average|Egress Throughput|cachenodeid|
-|hitRatio|Yes|Hit Ratio|Percent|Average|Hit Ratio|cachenodeid|
+|hitRatio|Yes|Cache Efficiency|Percent|Average|Cache Efficiency|cachenodeid|
|hits|Yes|Hits|Count|Count|Count of hits|cachenodeid|
|hitsbps|Yes|Hit Mbps|BitsPerSecond|Average|Hit Throughput|cachenodeid|
|misses|Yes|Misses|Count|Count|Count of misses|cachenodeid|
|WriteRequests|Yes|Write Requests|Count|Total|Count of data write requests to the account.|No Dimensions|
+## Microsoft.DataProtection/BackupVaults
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|BackupHealthEvent|Yes|Backup Health Events (preview)|Count|Count|The count of health events pertaining to backup job health|dataSourceURL, backupInstanceUrl, dataSourceType, healthStatus, backupInstanceName|
+|RestoreHealthEvent|Yes|Restore Health Events (preview)|Count|Count|The count of health events pertaining to restore job health|dataSourceURL, backupInstanceUrl, dataSourceType, healthStatus, backupInstanceName|
## Microsoft.DataShare/accounts

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|DataUsage|No|Data Usage|Bytes|Total|Total data usage reported at 5 minutes granularity|CollectionName, DatabaseName, Region|
|DedicatedGatewayAverageCPUUsage|No|DedicatedGatewayAverageCPUUsage|Percent|Average|Average CPU usage across dedicated gateway instances|Region, MetricType|
|DedicatedGatewayAverageMemoryUsage|No|DedicatedGatewayAverageMemoryUsage|Bytes|Average|Average memory usage across dedicated gateway instances, which is used for both routing requests and caching data|Region|
+|DedicatedGatewayCPUUsage|No|DedicatedGatewayCPUUsage|Percent|Average|CPU usage across dedicated gateway instances|Region, ApplicationType|
|DedicatedGatewayMaximumCPUUsage|No|DedicatedGatewayMaximumCPUUsage|Percent|Average|Average Maximum CPU usage across dedicated gateway instances|Region, MetricType|
+|DedicatedGatewayMemoryUsage|No|DedicatedGatewayMemoryUsage|Bytes|Average|Memory usage across dedicated gateway instances|Region, ApplicationType|
|DedicatedGatewayRequests|Yes|DedicatedGatewayRequests|Count|Count|Requests at the dedicated gateway|DatabaseName, CollectionName, CacheExercised, OperationName, Region, CacheHit|
|DeleteAccount|Yes|Account Deleted|Count|Count|Account Deleted|No Dimensions|
|DocumentCount|No|Document Count|Count|Total|Total document count reported at 5 minutes, 1 hour and 1 day granularity|CollectionName, DatabaseName, Region|
|NormalizedRUConsumption|No|Normalized RU Consumption|Percent|Maximum|Max RU consumption percentage per minute|CollectionName, DatabaseName, Region, PartitionKeyRangeId, CollectionRid|
|OfflineRegion|No|Region Offlined|Count|Count|Region Offlined|Region, StatusCode, Role, OperationName|
|OnlineRegion|No|Region Onlined|Count|Count|Region Onlined|Region, StatusCode, Role, OperationName|
+|PhysicalPartitionThroughputInfo|No|Physical Partition Throughput|Count|Maximum|Physical Partition Throughput|CollectionName, DatabaseName, PhysicalPartitionId, OfferOwnerRid, Region|
|ProvisionedThroughput|No|Provisioned Throughput|Count|Maximum|Provisioned Throughput|DatabaseName, CollectionName|
|RegionFailover|Yes|Region Failed Over|Count|Count|Region Failed Over|No Dimensions|
|RemoveRegion|Yes|Region Removed|Count|Count|Region Removed|Region|
|TableTableThroughputUpdate|No|AzureTable Table Throughput Updated|Count|Count|AzureTable Table Throughput Updated|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest|
|TableTableUpdate|No|AzureTable Table Updated|Count|Count|AzureTable Table Updated|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
|TotalRequests|Yes|Total Requests|Count|Count|Number of requests made|DatabaseName, CollectionName, Region, StatusCode, OperationType, Status|
+|TotalRequestsPreview|No|Total Requests (Preview)|Count|Count|Number of requests made with CapacityType|DatabaseName, CollectionName, Region, StatusCode, OperationType, Status, CapacityType|
|TotalRequestUnits|Yes|Total Request Units|Count|Total|Request Units consumed|DatabaseName, CollectionName, Region, StatusCode, OperationType, Status|
+|TotalRequestUnitsPreview|No|Total Request Units (Preview)|Count|Total|Request Units consumed with CapacityType|DatabaseName, CollectionName, Region, StatusCode, OperationType, Status, CapacityType|
|UpdateAccountKeys|Yes|Account Keys Updated|Count|Count|Account Keys Updated|KeyType|
|UpdateAccountNetworkSettings|Yes|Account Network Settings Updated|Count|Count|Account Network Settings Updated|No Dimensions|
|UpdateAccountReplicationSettings|Yes|Account Replication Settings Updated|Count|Count|Account Replication Settings Updated|No Dimensions|
|UpdateDiagnosticsSettings|No|Account Diagnostic Settings Updated|Count|Count|Account Diagnostic Settings Updated|DiagnosticSettingsName, ResourceGroupName|
+## microsoft.edgezones/edgezones
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|TotalVcoreCapacity|Yes|Total VCore Capacity|Count|Average|The total capacity of the General-Purpose Compute vCores in the Edge Zone Enterprise site.|No Dimensions|
+|VcoresUsage|Yes|Vcore Usage Percentage|Percent|Average|The utilization of the General-Purpose Compute vCores in the Edge Zone Enterprise site.|No Dimensions|
## Microsoft.EventGrid/domains

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|ActiveConnections|No|ActiveConnections|Count|Maximum|Total Active Connections for Microsoft.EventHub.|No Dimensions|
+|ActiveConnections|No|ActiveConnections|Count|Average|Total Active Connections for Microsoft.EventHub.|No Dimensions|
|AvailableMemory|No|Available Memory|Percent|Maximum|Available memory for the Event Hub Cluster as a percentage of total memory.|Role|
|CaptureBacklog|No|Capture Backlog.|Count|Total|Capture Backlog for Microsoft.EventHub.|No Dimensions|
|CapturedBytes|No|Captured Bytes.|Bytes|Total|Captured Bytes for Microsoft.EventHub.|No Dimensions|
|CapturedMessages|No|Captured Messages.|Count|Total|Captured Messages for Microsoft.EventHub.|No Dimensions|
-|ConnectionsClosed|No|Connections Closed.|Count|Maximum|Connections Closed for Microsoft.EventHub.|No Dimensions|
-|ConnectionsOpened|No|Connections Opened.|Count|Maximum|Connections Opened for Microsoft.EventHub.|No Dimensions|
+|ConnectionsClosed|No|Connections Closed.|Count|Average|Connections Closed for Microsoft.EventHub.|No Dimensions|
+|ConnectionsOpened|No|Connections Opened.|Count|Average|Connections Opened for Microsoft.EventHub.|No Dimensions|
|CPU|No|CPU|Percent|Maximum|CPU utilization for the Event Hub Cluster as a percentage|Role|
|IncomingBytes|Yes|Incoming Bytes.|Bytes|Total|Incoming Bytes for Microsoft.EventHub.|No Dimensions|
|IncomingMessages|Yes|Incoming Messages|Count|Total|Incoming Messages for Microsoft.EventHub.|No Dimensions|
|IncomingRequests|Yes|Incoming Requests|Count|Total|Incoming Requests for Microsoft.EventHub.|No Dimensions|
|OutgoingBytes|Yes|Outgoing Bytes.|Bytes|Total|Outgoing Bytes for Microsoft.EventHub.|No Dimensions|
|OutgoingMessages|Yes|Outgoing Messages|Count|Total|Outgoing Messages for Microsoft.EventHub.|No Dimensions|
-|QuotaExceededErrors|No|Quota Exceeded Errors.|Count|Total|Quota Exceeded Errors for Microsoft.EventHub.|No Dimensions|
-|ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.EventHub.|No Dimensions|
+|QuotaExceededErrors|No|Quota Exceeded Errors.|Count|Total|Quota Exceeded Errors for Microsoft.EventHub.|OperationResult|
+|ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.EventHub.|OperationResult|
|Size|No|Size|Bytes|Average|Size of an EventHub in Bytes.|Role|
-|SuccessfulRequests|No|Successful Requests|Count|Total|Successful Requests for Microsoft.EventHub.|No Dimensions|
-|ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.EventHub.|No Dimensions|
-|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.EventHub.|No Dimensions|
+|SuccessfulRequests|No|Successful Requests|Count|Total|Successful Requests for Microsoft.EventHub.|OperationResult|
+|ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.EventHub.|OperationResult|
+|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.EventHub.|OperationResult|
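Several of the request and error metrics above move from `No Dimensions` to an `OperationResult` dimension, so a metrics query can now split those counts by outcome. A hedged sketch of how such a query string could be encoded (the `$filter` syntax follows the Azure Monitor metrics API; the `api-version` value is an assumption):

```python
from urllib.parse import urlencode

# Query-string fragment for a metrics request that splits the Event Hubs
# error/request counts by their new OperationResult dimension.
# "eq '*'" asks Azure Monitor for one time series per dimension value;
# the metric names come from the table above.
params = {
    "metricnames": "UserErrors,ServerErrors,ThrottledRequests",
    "aggregation": "Total",
    "$filter": "OperationResult eq '*'",
    "api-version": "2018-01-01",
}
query = urlencode(params)
```

Replace `'*'` with a specific dimension value to filter to a single outcome instead of splitting.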
## Microsoft.EventHub/Namespaces
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|HyperVVirtualProcessorUtilization|Yes|Average CPU Utilization|Percent|Average|Total average percentage of virtual CPU utilization at one minute interval. The total number of virtual CPU is based on user configured value in SKU definition. Further filter can be applied based on RoleName defined in SKU.|InstanceName
+|HyperVVirtualProcessorUtilization|Yes|Average CPU Utilization|Percent|Average|Total average percentage of virtual CPU utilization at one minute interval. The total number of virtual CPU is based on user configured value in SKU definition. Further filter can be applied based on RoleName defined in SKU.|InstanceName|
## microsoft.insights/autoscalesettings
|capacity_cpu_cores|Yes|Total number of cpu cores in a connected cluster|Count|Total|Total number of cpu cores in a connected cluster|No Dimensions|
-## Microsoft.Kusto/Clusters
+## Microsoft.Kusto/clusters
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|MaterializedViewHealth|Yes|Materialized View Health|Count|Average|The health of the materialized view (1 for healthy, 0 for non-healthy)|Database, MaterializedViewName|
|MaterializedViewRecordsInDelta|Yes|Materialized View Records In Delta|Count|Average|The number of records in the non-materialized part of the view|Database, MaterializedViewName|
|MaterializedViewResult|Yes|Materialized View Result|Count|Average|The result of the materialization process|Database, MaterializedViewName, Result|
-|QueryDuration|Yes|Query duration|Milliseconds|Average|Queries' duration in seconds|QueryStatus|
+|QueryDuration|Yes|Query duration|MilliSeconds|Average|Queries' duration in seconds|QueryStatus|
|QueryResult|No|Query Result|Count|Count|Total number of queries.|QueryStatus|
|QueueLength|Yes|Queue Length|Count|Average|Number of pending messages in a component's queue.|ComponentType|
|QueueOldestMessage|Yes|Queue Oldest Message|Count|Average|Time in seconds from when the oldest message in queue was inserted.|ComponentType|
|ReceivedDataSizeBytes|Yes|Received Data Size Bytes|Bytes|Average|Size of data received by data connection. This is the size of the data stream, or of raw data size if provided.|ComponentType, ComponentName|
|StageLatency|Yes|Stage Latency|Seconds|Average|Cumulative time from when a message is discovered until it is received by the reporting component for processing (discovery time is set when message is enqueued for ingestion queue, or when discovered by data connection).|Database, ComponentType|
-|SteamingIngestRequestRate|Yes|Streaming Ingest Request Rate|Count|RateRequestsPerSecond|Streaming ingest request rate (requests per second)|No Dimensions|
|StreamingIngestDataRate|Yes|Streaming Ingest Data Rate|Count|Average|Streaming ingest data rate (MB per second)|No Dimensions|
-|StreamingIngestDuration|Yes|Streaming Ingest Duration|Milliseconds|Average|Streaming ingest duration in milliseconds|No Dimensions|
+|StreamingIngestDuration|Yes|Streaming Ingest Duration|MilliSeconds|Average|Streaming ingest duration in milliseconds|No Dimensions|
|StreamingIngestResults|Yes|Streaming Ingest Result|Count|Count|Streaming ingest result|Result|
|TotalNumberOfConcurrentQueries|Yes|Total number of concurrent queries|Count|Maximum|Total number of concurrent queries|No Dimensions|
|TotalNumberOfExtents|Yes|Total number of extents|Count|Average|Total number of data extents|No Dimensions|
|WeakConsistencyLatency|Yes|Weak consistency latency|Seconds|Average|The max latency between the previous metadata sync and the next one (in DB/node scope)|Database, RoleInstance|
-## Microsoft.Logic/integrationServiceEnvironments
+## Microsoft.Logic/IntegrationServiceEnvironments
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|ActionsStarted|Yes|Actions Started|Count|Total|Number of workflow actions started.|No Dimensions|
|ActionsSucceeded|Yes|Actions Succeeded|Count|Total|Number of workflow actions succeeded.|No Dimensions|
|ActionSuccessLatency|Yes|Action Success Latency|Seconds|Average|Latency of succeeded workflow actions.|No Dimensions|
-|ActionThrottledEvents|Yes|Action Throttled Events|Count|Total|Number of workflow action throttled events..|No Dimensions|
|IntegrationServiceEnvironmentConnectorMemoryUsage|Yes|Connector Memory Usage for Integration Service Environment|Percent|Average|Connector memory usage for integration service environment.|No Dimensions|
|IntegrationServiceEnvironmentConnectorProcessorUsage|Yes|Connector Processor Usage for Integration Service Environment|Percent|Average|Connector processor usage for integration service environment.|No Dimensions|
|IntegrationServiceEnvironmentWorkflowMemoryUsage|Yes|Workflow Memory Usage for Integration Service Environment|Percent|Average|Workflow memory usage for integration service environment.|No Dimensions|
|IntegrationServiceEnvironmentWorkflowProcessorUsage|Yes|Workflow Processor Usage for Integration Service Environment|Percent|Average|Workflow processor usage for integration service environment.|No Dimensions|
-|RunFailurePercentage|Yes|Run Failure Percentage|Percent|Total|Percentage of workflow runs failed.|No Dimensions|
|RunLatency|Yes|Run Latency|Seconds|Average|Latency of completed workflow runs.|No Dimensions|
|RunsCancelled|Yes|Runs Cancelled|Count|Total|Number of workflow runs cancelled.|No Dimensions|
|RunsCompleted|Yes|Runs Completed|Count|Total|Number of workflow runs completed.|No Dimensions|
|RunsFailed|Yes|Runs Failed|Count|Total|Number of workflow runs failed.|No Dimensions|
|RunsStarted|Yes|Runs Started|Count|Total|Number of workflow runs started.|No Dimensions|
|RunsSucceeded|Yes|Runs Succeeded|Count|Total|Number of workflow runs succeeded.|No Dimensions|
-|RunStartThrottledEvents|Yes|Run Start Throttled Events|Count|Total|Number of workflow run start throttled events.|No Dimensions|
|RunSuccessLatency|Yes|Run Success Latency|Seconds|Average|Latency of succeeded workflow runs.|No Dimensions|
-|RunThrottledEvents|Yes|Run Throttled Events|Count|Total|Number of workflow action or trigger throttled events.|No Dimensions|
|TriggerFireLatency|Yes|Trigger Fire Latency|Seconds|Average|Latency of fired workflow triggers.|No Dimensions|
|TriggerLatency|Yes|Trigger Latency|Seconds|Average|Latency of completed workflow triggers.|No Dimensions|
|TriggersCompleted|Yes|Triggers Completed|Count|Total|Number of workflow triggers completed.|No Dimensions|
|TriggersStarted|Yes|Triggers Started|Count|Total|Number of workflow triggers started.|No Dimensions|
|TriggersSucceeded|Yes|Triggers Succeeded|Count|Total|Number of workflow triggers succeeded.|No Dimensions|
|TriggerSuccessLatency|Yes|Trigger Success Latency|Seconds|Average|Latency of succeeded workflow triggers.|No Dimensions|
-|TriggerThrottledEvents|Yes|Trigger Throttled Events|Count|Total|Number of workflow trigger throttled events.|No Dimensions|
## Microsoft.Logic/Workflows
|CapacityUnits|No|Current Capacity Units|Count|Average|Capacity Units consumed|No Dimensions|
|ClientRtt|No|Client RTT|MilliSeconds|Average|Average round trip time between clients and Application Gateway. This metric indicates how long it takes to establish connections and return acknowledgements|Listener|
|ComputeUnits|No|Current Compute Units|Count|Average|Compute Units consumed|No Dimensions|
+|ConnectionLifetime|No|Connection Lifetime|MilliSeconds|Average|Average time duration from the start of a new connection to its termination|Listener|
|CpuUtilization|No|CPU Utilization|Percent|Average|Current CPU utilization of the Application Gateway|No Dimensions|
|CurrentConnections|Yes|Current Connections|Count|Total|Count of current connections established with Application Gateway|No Dimensions|
|EstimatedBilledCapacityUnits|No|Estimated Billed Capacity Units|Count|Average|Estimated capacity units that will be charged|No Dimensions|
|BitsOutPerSecond|Yes|BitsOutPerSecond|BitsPerSecond|Average|Bits egressing Azure per second|No Dimensions|
+## Microsoft.Network/dnsForwardingRulesets
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|ForwardingRuleCount|Yes|Forwarding Rule Count|Count|Maximum|This metric indicates the number of forwarding rules present in each DNS forwarding ruleset.|No Dimensions|
+|VirtualNetworkLinkCount|Yes|Virtual Network Link Count|Count|Maximum|This metric indicates the number of associated virtual network links to a DNS forwarding ruleset.|No Dimensions|
+## Microsoft.Network/dnsResolvers
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|InboundEndpointCount|Yes|Inbound Endpoint Count|Count|Maximum|This metric indicates the number of inbound endpoints created for a DNS Resolver.|No Dimensions|
+|OutboundEndpointCount|Yes|Outbound Endpoint Count|Count|Maximum|This metric indicates the number of outbound endpoints created for a DNS Resolver.|No Dimensions|
## Microsoft.Network/dnszones

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|BitsOutPerSecond|Yes|BitsOutPerSecond|BitsPerSecond|Average|Bits egressing Azure per second|No Dimensions|
-## Microsoft.Network/expressRouteGateways
+## microsoft.network/expressroutegateways
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|ErGatewayConnectionBitsInPerSecond|No|BitsInPerSecond|BitsPerSecond|Average|Bits ingressing Azure per second|ConnectionName|
-|ErGatewayConnectionBitsOutPerSecond|No|BitsOutPerSecond|BitsPerSecond|Average|Bits egressing Azure per second|ConnectionName|
-|ExpressRouteGatewayCountOfRoutesAdvertisedToPeer|Yes|Count Of Routes Advertised to Peer|Count|Maximum|Count Of Routes Advertised To Peer by ExpressRouteGateway|roleInstance|
-|ExpressRouteGatewayCountOfRoutesLearnedFromPeer|Yes|Count Of Routes Learned from Peer|Count|Maximum|Count Of Routes Learned From Peer by ExpressRouteGateway|roleInstance|
+|ErGatewayConnectionBitsInPerSecond|No|Bits In Per Second|BitsPerSecond|Average|Bits per second ingressing Azure via ExpressRoute Gateway which can be further split for specific connections|ConnectionName|
+|ErGatewayConnectionBitsOutPerSecond|No|Bits Out Per Second|BitsPerSecond|Average|Bits per second egressing Azure via ExpressRoute Gateway which can be further split for specific connections|ConnectionName|
+|ExpressRouteGatewayBitsPerSecond|No|Bits Received Per second|BitsPerSecond|Average|Total Bits received on ExpressRoute Gateway per second|roleInstance|
+|ExpressRouteGatewayCountOfRoutesAdvertisedToPeer|Yes|Count Of Routes Advertised to Peer|Count|Maximum|Count Of Routes Advertised To Peer by ExpressRoute Gateway|roleInstance|
+|ExpressRouteGatewayCountOfRoutesLearnedFromPeer|Yes|Count Of Routes Learned from Peer|Count|Maximum|Count Of Routes Learned From Peer by ExpressRoute Gateway|roleInstance|
|ExpressRouteGatewayCpuUtilization|Yes|CPU utilization|Percent|Average|CPU Utilization of the ExpressRoute Gateway|roleInstance|
|ExpressRouteGatewayFrequencyOfRoutesChanged|No|Frequency of Routes change|Count|Total|Frequency of Routes change in ExpressRoute Gateway|roleInstance|
|ExpressRouteGatewayNumberOfVmInVnet|No|Number of VMs in the Virtual Network|Count|Maximum|Number of VMs in the Virtual Network|No Dimensions|
-|ExpressRouteGatewayPacketsPerSecond|No|Packets per second|CountPerSecond|Average|Packet count of ExpressRoute Gateway|roleInstance|
+|ExpressRouteGatewayPacketsPerSecond|No|Packets received per second|CountPerSecond|Average|Total Packets received on ExpressRoute Gateway per second|roleInstance|
## Microsoft.Network/expressRoutePorts
|BgpPeerStatus|No|Bgp Peer Status|Count|Maximum|1 - Connected, 0 - Not connected|routeserviceinstance, bgppeerip, bgppeertype|
|CountOfRoutesAdvertisedToPeer|No|Count Of Routes Advertised To Peer|Count|Maximum|Total number of routes advertised to peer|routeserviceinstance, bgppeerip, bgppeertype|
|CountOfRoutesLearnedFromPeer|No|Count Of Routes Learned From Peer|Count|Maximum|Total number of routes learned from peer|routeserviceinstance, bgppeerip, bgppeertype|
+|VirtualHubDataProcessed|No|Data Processed by the Virtual Hub Router|Bytes|Total|Data Processed by the Virtual Hub Router|No Dimensions|
## microsoft.network/virtualnetworkgateways
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|Average_% Available Memory|Yes|% Available Memory|Count|Average|Average_% Available Memory. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Available Swap Space|Yes|% Available Swap Space|Count|Average|Average_% Available Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Committed Bytes In Use|Yes|% Committed Bytes In Use|Count|Average|Average_% Committed Bytes In Use. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% DPC Time|Yes|% DPC Time|Count|Average|Average_% DPC Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Free Inodes|Yes|% Free Inodes|Count|Average|Average_% Free Inodes. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Free Space|Yes|% Free Space|Count|Average|Average_% Free Space. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Idle Time|Yes|% Idle Time|Count|Average|Average_% Idle Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Interrupt Time|Yes|% Interrupt Time|Count|Average|Average_% Interrupt Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% IO Wait Time|Yes|% IO Wait Time|Count|Average|Average_% IO Wait Time. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Nice Time|Yes|% Nice Time|Count|Average|Average_% Nice Time. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Privileged Time|Yes|% Privileged Time|Count|Average|Average_% Privileged Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Processor Time|Yes|% Processor Time|Count|Average|Average_% Processor Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Used Inodes|Yes|% Used Inodes|Count|Average|Average_% Used Inodes. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Used Memory|Yes|% Used Memory|Count|Average|Average_% Used Memory. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Used Space|Yes|% Used Space|Count|Average|Average_% Used Space. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Used Swap Space|Yes|% Used Swap Space|Count|Average|Average_% Used Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% User Time|Yes|% User Time|Count|Average|Average_% User Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Available MBytes|Yes|Available MBytes|Count|Average|Average_Available MBytes. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Available MBytes Memory|Yes|Available MBytes Memory|Count|Average|Average_Available MBytes Memory. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Available MBytes Swap|Yes|Available MBytes Swap|Count|Average|Average_Available MBytes Swap. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Avg. Disk sec/Read|Yes|Avg. Disk sec/Read|Count|Average|Average_Avg. Disk sec/Read. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Avg. Disk sec/Transfer|Yes|Avg. Disk sec/Transfer|Count|Average|Average_Avg. Disk sec/Transfer. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Avg. Disk sec/Write|Yes|Avg. Disk sec/Write|Count|Average|Average_Avg. Disk sec/Write. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Bytes Received/sec|Yes|Bytes Received/sec|Count|Average|Average_Bytes Received/sec. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Bytes Sent/sec|Yes|Bytes Sent/sec|Count|Average|Average_Bytes Sent/sec. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Bytes Total/sec|Yes|Bytes Total/sec|Count|Average|Average_Bytes Total/sec. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Current Disk Queue Length|Yes|Current Disk Queue Length|Count|Average|Average_Current Disk Queue Length. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Read Bytes/sec|Yes|Disk Read Bytes/sec|Count|Average|Average_Disk Read Bytes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Reads/sec|Yes|Disk Reads/sec|Count|Average|Average_Disk Reads/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Transfers/sec|Yes|Disk Transfers/sec|Count|Average|Average_Disk Transfers/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Write Bytes/sec|Yes|Disk Write Bytes/sec|Count|Average|Average_Disk Write Bytes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Writes/sec|Yes|Disk Writes/sec|Count|Average|Average_Disk Writes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Free Megabytes|Yes|Free Megabytes|Count|Average|Average_Free Megabytes. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Free Physical Memory|Yes|Free Physical Memory|Count|Average|Average_Free Physical Memory. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Free Space in Paging Files|Yes|Free Space in Paging Files|Count|Average|Average_Free Space in Paging Files. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Free Virtual Memory|Yes|Free Virtual Memory|Count|Average|Average_Free Virtual Memory. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Logical Disk Bytes/sec|Yes|Logical Disk Bytes/sec|Count|Average|Average_Logical Disk Bytes/sec. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Page Reads/sec|Yes|Page Reads/sec|Count|Average|Average_Page Reads/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Page Writes/sec|Yes|Page Writes/sec|Count|Average|Average_Page Writes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Pages/sec|Yes|Pages/sec|Count|Average|Average_Pages/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Pct Privileged Time|Yes|Pct Privileged Time|Count|Average|Average_Pct Privileged Time. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Pct User Time|Yes|Pct User Time|Count|Average|Average_Pct User Time. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Physical Disk Bytes/sec|Yes|Physical Disk Bytes/sec|Count|Average|Average_Physical Disk Bytes/sec. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Processes|Yes|Processes|Count|Average|Average_Processes. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Processor Queue Length|Yes|Processor Queue Length|Count|Average|Average_Processor Queue Length. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Size Stored In Paging Files|Yes|Size Stored In Paging Files|Count|Average|Average_Size Stored In Paging Files. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Bytes|Yes|Total Bytes|Count|Average|Average_Total Bytes. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Bytes Received|Yes|Total Bytes Received|Count|Average|Average_Total Bytes Received. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Bytes Transmitted|Yes|Total Bytes Transmitted|Count|Average|Average_Total Bytes Transmitted. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Collisions|Yes|Total Collisions|Count|Average|Average_Total Collisions. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Packets Received|Yes|Total Packets Received|Count|Average|Average_Total Packets Received. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Packets Transmitted|Yes|Total Packets Transmitted|Count|Average|Average_Total Packets Transmitted. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Rx Errors|Yes|Total Rx Errors|Count|Average|Average_Total Rx Errors. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Tx Errors|Yes|Total Tx Errors|Count|Average|Average_Total Tx Errors. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Uptime|Yes|Uptime|Count|Average|Average_Uptime. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Used MBytes Swap Space|Yes|Used MBytes Swap Space|Count|Average|. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Used Memory kBytes|Yes|Used Memory kBytes|Count|Average|Average_Used Memory kBytes. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Used Memory MBytes|Yes|Used Memory MBytes|Count|Average|Average_Used Memory MBytes. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Users|Yes|Users|Count|Average|Average_Users. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Virtual Shared Memory|Yes|Virtual Shared Memory|Count|Average|Average_Virtual Shared Memory. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Event|Yes|Event|Count|Average|Event. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Source, EventLog, Computer, EventCategory, EventLevel, EventLevelName, EventID|
-|Heartbeat|Yes|Heartbeat|Count|Total|Heartbeat. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, OSType, Version, SourceComputerId|
-|Update|Yes|Update|Count|Average|Update. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, Product, Classification, UpdateState, Optional, Approved|
+|Average_% Available Memory|Yes|% Available Memory|Count|Average|Average_% Available Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Available Swap Space|Yes|% Available Swap Space|Count|Average|Average_% Available Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Committed Bytes In Use|Yes|% Committed Bytes In Use|Count|Average|Average_% Committed Bytes In Use. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% DPC Time|Yes|% DPC Time|Count|Average|Average_% DPC Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Free Inodes|Yes|% Free Inodes|Count|Average|Average_% Free Inodes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Free Space|Yes|% Free Space|Count|Average|Average_% Free Space. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Idle Time|Yes|% Idle Time|Count|Average|Average_% Idle Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Interrupt Time|Yes|% Interrupt Time|Count|Average|Average_% Interrupt Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% IO Wait Time|Yes|% IO Wait Time|Count|Average|Average_% IO Wait Time. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Nice Time|Yes|% Nice Time|Count|Average|Average_% Nice Time. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Privileged Time|Yes|% Privileged Time|Count|Average|Average_% Privileged Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Processor Time|Yes|% Processor Time|Count|Average|Average_% Processor Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Used Inodes|Yes|% Used Inodes|Count|Average|Average_% Used Inodes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Used Memory|Yes|% Used Memory|Count|Average|Average_% Used Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Used Space|Yes|% Used Space|Count|Average|Average_% Used Space. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Used Swap Space|Yes|% Used Swap Space|Count|Average|Average_% Used Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% User Time|Yes|% User Time|Count|Average|Average_% User Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Available MBytes|Yes|Available MBytes|Count|Average|Average_Available MBytes. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Available MBytes Memory|Yes|Available MBytes Memory|Count|Average|Average_Available MBytes Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Available MBytes Swap|Yes|Available MBytes Swap|Count|Average|Average_Available MBytes Swap. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Avg. Disk sec/Read|Yes|Avg. Disk sec/Read|Count|Average|Average_Avg. Disk sec/Read. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Avg. Disk sec/Transfer|Yes|Avg. Disk sec/Transfer|Count|Average|Average_Avg. Disk sec/Transfer. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Avg. Disk sec/Write|Yes|Avg. Disk sec/Write|Count|Average|Average_Avg. Disk sec/Write. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Bytes Received/sec|Yes|Bytes Received/sec|Count|Average|Average_Bytes Received/sec. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Bytes Sent/sec|Yes|Bytes Sent/sec|Count|Average|Average_Bytes Sent/sec. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Bytes Total/sec|Yes|Bytes Total/sec|Count|Average|Average_Bytes Total/sec. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Current Disk Queue Length|Yes|Current Disk Queue Length|Count|Average|Average_Current Disk Queue Length. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Read Bytes/sec|Yes|Disk Read Bytes/sec|Count|Average|Average_Disk Read Bytes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Reads/sec|Yes|Disk Reads/sec|Count|Average|Average_Disk Reads/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Transfers/sec|Yes|Disk Transfers/sec|Count|Average|Average_Disk Transfers/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Write Bytes/sec|Yes|Disk Write Bytes/sec|Count|Average|Average_Disk Write Bytes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Writes/sec|Yes|Disk Writes/sec|Count|Average|Average_Disk Writes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Free Megabytes|Yes|Free Megabytes|Count|Average|Average_Free Megabytes. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Free Physical Memory|Yes|Free Physical Memory|Count|Average|Average_Free Physical Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Free Space in Paging Files|Yes|Free Space in Paging Files|Count|Average|Average_Free Space in Paging Files. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Free Virtual Memory|Yes|Free Virtual Memory|Count|Average|Average_Free Virtual Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Logical Disk Bytes/sec|Yes|Logical Disk Bytes/sec|Count|Average|Average_Logical Disk Bytes/sec. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Page Reads/sec|Yes|Page Reads/sec|Count|Average|Average_Page Reads/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Page Writes/sec|Yes|Page Writes/sec|Count|Average|Average_Page Writes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Pages/sec|Yes|Pages/sec|Count|Average|Average_Pages/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Pct Privileged Time|Yes|Pct Privileged Time|Count|Average|Average_Pct Privileged Time. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Pct User Time|Yes|Pct User Time|Count|Average|Average_Pct User Time. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Physical Disk Bytes/sec|Yes|Physical Disk Bytes/sec|Count|Average|Average_Physical Disk Bytes/sec. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Processes|Yes|Processes|Count|Average|Average_Processes. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Processor Queue Length|Yes|Processor Queue Length|Count|Average|Average_Processor Queue Length. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Size Stored In Paging Files|Yes|Size Stored In Paging Files|Count|Average|Average_Size Stored In Paging Files. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Bytes|Yes|Total Bytes|Count|Average|Average_Total Bytes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Bytes Received|Yes|Total Bytes Received|Count|Average|Average_Total Bytes Received. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Bytes Transmitted|Yes|Total Bytes Transmitted|Count|Average|Average_Total Bytes Transmitted. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Collisions|Yes|Total Collisions|Count|Average|Average_Total Collisions. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Packets Received|Yes|Total Packets Received|Count|Average|Average_Total Packets Received. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Packets Transmitted|Yes|Total Packets Transmitted|Count|Average|Average_Total Packets Transmitted. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Rx Errors|Yes|Total Rx Errors|Count|Average|Average_Total Rx Errors. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Tx Errors|Yes|Total Tx Errors|Count|Average|Average_Total Tx Errors. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Uptime|Yes|Uptime|Count|Average|Average_Uptime. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Used MBytes Swap Space|Yes|Used MBytes Swap Space|Count|Average|Average_Used MBytes Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Used Memory kBytes|Yes|Used Memory kBytes|Count|Average|Average_Used Memory kBytes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Used Memory MBytes|Yes|Used Memory MBytes|Count|Average|Average_Used Memory MBytes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Users|Yes|Users|Count|Average|Average_Users. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Virtual Shared Memory|Yes|Virtual Shared Memory|Count|Average|Average_Virtual Shared Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Event|Yes|Event|Count|Average|Event. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Source, EventLog, Computer, EventCategory, EventLevel, EventLevelName, EventID|
+|Heartbeat|Yes|Heartbeat|Count|Total|Heartbeat. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, OSType, Version, SourceComputerId|
+|Update|Yes|Update|Count|Average|Update. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, Product, Classification, UpdateState, Optional, Approved|
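The Log Analytics rows above all follow the same seven-column layout, with the supported OS embedded in the description text ("Supported for: Linux, Windows."). As a minimal sketch (the sample row below is abridged to plain text, with the markdown link dropped), such a row can be parsed programmatically to filter metrics by OS:

```python
# Sketch: parse one seven-column metric row from this reference into a dict
# and pull the supported-OS list out of the description text.
import re

COLUMNS = ["metric", "exportable", "display_name", "unit",
           "aggregation", "description", "dimensions"]

def parse_metric_row(row: str) -> dict:
    # Drop the leading/trailing pipes, then split into the seven cells.
    cells = [c.strip() for c in row.strip().strip("|").split("|")]
    record = dict(zip(COLUMNS, cells))
    # The description embeds "Supported for: Linux, Windows." -- extract it.
    m = re.search(r"Supported for:\s*([^.]+)\.", record["description"])
    record["supported_os"] = [s.strip() for s in m.group(1).split(",")] if m else []
    return record

# Abridged example row (link text removed for simplicity):
row = ("|Average_% Processor Time|Yes|% Processor Time|Count|Average|"
       "Average_% Processor Time. Supported for: Linux, Windows. "
       "Part of metric alerts for logs feature.|"
       "Computer, ObjectName, InstanceName, CounterPath, SourceSystem|")
rec = parse_metric_row(row)
print(rec["supported_os"])  # ['Linux', 'Windows']
```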
## Microsoft.Peering/peerings
This latest update adds a new column and reorders the metrics to be alphabetical.
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
|---|---|---|---|---|---|---|
-|CleanerCurrentPrice|Yes|Memory: Cleaner Current Price|Count|Average|Current price of memory, $/byte/time, normalized to 1000.|No Dimensions|
-|CleanerMemoryNonshrinkable|Yes|Memory: Cleaner Memory nonshrinkable|Bytes|Average|Amount of memory, in bytes, not subject to purging by the background cleaner.|No Dimensions|
-|CleanerMemoryShrinkable|Yes|Memory: Cleaner Memory shrinkable|Bytes|Average|Amount of memory, in bytes, subject to purging by the background cleaner.|No Dimensions|
-|CommandPoolBusyThreads|Yes|Threads: Command pool busy threads|Count|Average|Number of busy threads in the command thread pool.|No Dimensions|
-|CommandPoolIdleThreads|Yes|Threads: Command pool idle threads|Count|Average|Number of idle threads in the command thread pool.|No Dimensions|
-|CommandPoolJobQueueLength|Yes|Command Pool Job Queue Length|Count|Average|Number of jobs in the queue of the command thread pool.|No Dimensions|
|cpu_metric|Yes|CPU (Gen2)|Percent|Average|CPU Utilization. Supported only for Power BI Embedded Generation 2 resources.|No Dimensions|
|cpu_workload_metric|Yes|CPU Per Workload (Gen2)|Percent|Average|CPU Utilization Per Workload. Supported only for Power BI Embedded Generation 2 resources.|Workload|
-|CurrentConnections|Yes|Connection: Current connections|Count|Average|Current number of client connections established.|No Dimensions|
-|CurrentUserSessions|Yes|Current User Sessions|Count|Average|Current number of user sessions established.|No Dimensions|
-|LongParsingBusyThreads|Yes|Threads: Long parsing busy threads|Count|Average|Number of busy threads in the long parsing thread pool.|No Dimensions|
-|LongParsingIdleThreads|Yes|Threads: Long parsing idle threads|Count|Average|Number of idle threads in the long parsing thread pool.|No Dimensions|
-|LongParsingJobQueueLength|Yes|Threads: Long parsing job queue length|Count|Average|Number of jobs in the queue of the long parsing thread pool.|No Dimensions|
-|memory_metric|Yes|Memory (Gen1)|Bytes|Average|Memory. Range 0-3 GB for A1, 0-5 GB for A2, 0-10 GB for A3, 0-25 GB for A4, 0-50 GB for A5 and 0-100 GB for A6. Supported only for Power BI Embedded Generation 1 resources.|No Dimensions|
-|memory_thrashing_metric|Yes|Memory Thrashing (Datasets) (Gen1)|Percent|Average|Average memory thrashing. Supported only for Power BI Embedded Generation 1 resources.|No Dimensions|
-|MemoryLimitHard|Yes|Memory: Memory Limit Hard|Bytes|Average|Hard memory limit, from configuration file.|No Dimensions|
-|MemoryLimitHigh|Yes|Memory: Memory Limit High|Bytes|Average|High memory limit, from configuration file.|No Dimensions|
-|MemoryLimitLow|Yes|Memory: Memory Limit Low|Bytes|Average|Low memory limit, from configuration file.|No Dimensions|
-|MemoryLimitVertiPaq|Yes|Memory: Memory Limit VertiPaq|Bytes|Average|In-memory limit, from configuration file.|No Dimensions|
-|MemoryUsage|Yes|Memory: Memory Usage|Bytes|Average|Memory usage of the server process as used in calculating cleaner memory price. Equal to counter Process\PrivateBytes plus the size of memory-mapped data, ignoring any memory which was mapped or allocated by the xVelocity in-memory analytics engine (VertiPaq) in excess of the xVelocity engine Memory Limit.|No Dimensions|
|overload_metric|Yes|Overload (Gen2)|Count|Average|Resource Overload, 1 if resource is overloaded, otherwise 0. Supported only for Power BI Embedded Generation 2 resources.|No Dimensions|
-|ProcessingPoolBusyIOJobThreads|Yes|Threads: Processing pool busy I/O job threads|Count|Average|Number of threads running I/O jobs in the processing thread pool.|No Dimensions|
-|ProcessingPoolBusyNonIOThreads|Yes|Threads: Processing pool busy non-I/O threads|Count|Average|Number of threads running non-I/O jobs in the processing thread pool.|No Dimensions|
-|ProcessingPoolIdleIOJobThreads|Yes|Threads: Processing pool idle I/O job threads|Count|Average|Number of idle threads for I/O jobs in the processing thread pool.|No Dimensions|
-|ProcessingPoolIdleNonIOThreads|Yes|Threads: Processing pool idle non-I/O threads|Count|Average|Number of idle threads in the processing thread pool dedicated to non-I/O jobs.|No Dimensions|
-|ProcessingPoolIOJobQueueLength|Yes|Threads: Processing pool I/O job queue length|Count|Average|Number of I/O jobs in the queue of the processing thread pool.|No Dimensions|
-|ProcessingPoolJobQueueLength|Yes|Processing Pool Job Queue Length|Count|Average|Number of non-I/O jobs in the queue of the processing thread pool.|No Dimensions|
-|qpu_high_utilization_metric|Yes|QPU High Utilization (Gen1)|Count|Total|QPU High Utilization In Last Minute, 1 For High QPU Utilization, Otherwise 0. Supported only for Power BI Embedded Generation 1 resources.|No Dimensions|
-|qpu_metric|Yes|QPU (Gen1)|Count|Average|QPU. Range for A1 is 0-20, A2 is 0-40, A3 is 0-40, A4 is 0-80, A5 is 0-160, A6 is 0-320. Supported only for Power BI Embedded Generation 1 resources.|No Dimensions|
-|QueryDuration|Yes|Query Duration (Datasets) (Gen1)|Milliseconds|Average|DAX Query duration in last interval. Supported only for Power BI Embedded Generation 1 resources.|No Dimensions|
-|QueryPoolBusyThreads|Yes|Query Pool Busy Threads|Count|Average|Number of busy threads in the query thread pool.|No Dimensions|
-|QueryPoolIdleThreads|Yes|Threads: Query pool idle threads|Count|Average|Number of idle threads for I/O jobs in the processing thread pool.|No Dimensions|
-|QueryPoolJobQueueLength|Yes|Query Pool Job Queue Length (Datasets) (Gen1)|Count|Average|Number of jobs in the queue of the query thread pool. Supported only for Power BI Embedded Generation 1 resources.|No Dimensions|
-|Quota|Yes|Memory: Quota|Bytes|Average|Current memory quota, in bytes. Memory quota is also known as a memory grant or memory reservation.|No Dimensions|
-|QuotaBlocked|Yes|Memory: Quota Blocked|Count|Average|Current number of quota requests that are blocked until other memory quotas are freed.|No Dimensions|
-|RowsConvertedPerSec|Yes|Processing: Rows converted per sec|CountPerSecond|Average|Rate of rows converted during processing.|No Dimensions|
-|RowsReadPerSec|Yes|Processing: Rows read per sec|CountPerSecond|Average|Rate of rows read from all relational databases.|No Dimensions|
-|RowsWrittenPerSec|Yes|Processing: Rows written per sec|CountPerSecond|Average|Rate of rows written during processing.|No Dimensions|
-|ShortParsingBusyThreads|Yes|Threads: Short parsing busy threads|Count|Average|Number of busy threads in the short parsing thread pool.|No Dimensions|
-|ShortParsingIdleThreads|Yes|Threads: Short parsing idle threads|Count|Average|Number of idle threads in the short parsing thread pool.|No Dimensions|
-|ShortParsingJobQueueLength|Yes|Threads: Short parsing job queue length|Count|Average|Number of jobs in the queue of the short parsing thread pool.|No Dimensions|
-|SuccessfullConnectionsPerSec|Yes|Successful Connections Per Sec|CountPerSecond|Average|Rate of successful connection completions.|No Dimensions|
-|TotalConnectionFailures|Yes|Total Connection Failures|Count|Average|Total failed connection attempts.|No Dimensions|
-|TotalConnectionRequests|Yes|Total Connection Requests|Count|Average|Total connection requests. These are arrivals.|No Dimensions|
-|VertiPaqNonpaged|Yes|Memory: VertiPaq Nonpaged|Bytes|Average|Bytes of memory locked in the working set for use by the in-memory engine.|No Dimensions|
-|VertiPaqPaged|Yes|Memory: VertiPaq Paged|Bytes|Average|Bytes of paged memory in use for in-memory data.|No Dimensions|
-|workload_memory_metric|Yes|Memory Per Workload (Gen1)|Bytes|Average|Memory Per Workload. Supported only for Power BI Embedded Generation 1 resources.|Workload|
-|workload_qpu_metric|Yes|QPU Per Workload (Gen1)|Count|Average|QPU Per Workload. Range for A1 is 0-20, A2 is 0-40, A3 is 0-40, A4 is 0-80, A5 is 0-160, A6 is 0-320. Supported only for Power BI Embedded Generation 1 resources.|Workload|
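Metrics such as `qpu_metric` above can be retrieved through the Azure Monitor metrics REST endpoint (`Microsoft.Insights/metrics`). A hedged sketch of composing such a query URL — the resource ID is a made-up example, and only the URL is built, nothing is sent:

```python
# Sketch: compose an Azure Monitor REST metrics query URL for one of the
# metrics listed above. The resource ID is hypothetical.
from urllib.parse import urlencode

def metrics_url(resource_id: str, metric: str, aggregation: str,
                api_version: str = "2018-01-01") -> str:
    query = urlencode({
        "api-version": api_version,
        "metricnames": metric,
        "aggregation": aggregation,
    })
    return (f"https://management.azure.com{resource_id}"
            f"/providers/Microsoft.Insights/metrics?{query}")

url = metrics_url(
    "/subscriptions/0000/resourceGroups/rg/providers"
    "/Microsoft.PowerBIDedicated/capacities/demo",   # hypothetical resource
    metric="qpu_metric",        # QPU (Gen1); Average aggregation per the table
    aggregation="Average",
)
print(url)
```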
## microsoft.purview/accounts
This latest update adds a new column and reorders the metrics to be alphabetical.
|ThrottledSearchQueriesPercentage|Yes|Throttled search queries percentage|Percent|Average|Percentage of search queries that were throttled for the search service|No Dimensions|
+## microsoft.securitydetonation/chambers
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+|---|---|---|---|---|---|---|
+|CapacityUtilization|No|Capacity Utilization|Percent|Maximum|The percentage of the allocated capacity the resource is actively using.|Region|
+|CpuUtilization|No|CPU Utilization|Percent|Average|The percentage of the CPU that is being utilized across the resource.|Region|
+|CreateSubmissionApiResult|No|CreateSubmission Api Results|Count|Count|The total number of CreateSubmission API requests, with return code.|OperationName, ServiceTypeName, Region, HttpReturnCode|
+|PercentFreeDiskSpace|No|Available Disk Space|Percent|Average|The percent amount of available disk space across the resource.|Region|
+|SubmissionDuration|No|Submission Duration|MilliSeconds|Maximum|The submission duration (processing time), from creation to completion.|Region|
+|SubmissionsCompleted|No|Completed Submissions / Hr|Count|Maximum|The number of completed submissions / Hr.|Region|
+|SubmissionsFailed|No|Failed Submissions / Hr|Count|Maximum|The number of failed submissions / Hr.|Region|
+|SubmissionsOutstanding|No|Outstanding Submissions|Count|Average|The average number of outstanding submissions that are queued for processing.|Region|
+|SubmissionsSucceeded|No|Successful Submissions / Hr|Count|Maximum|The number of successful submissions / Hr.|Region|
+## Microsoft.ServiceBus/Namespaces

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|InboundTraffic|Yes|Inbound Traffic|Bytes|Total|The inbound traffic of service|No Dimensions|
|MessageCount|Yes|Message Count|Count|Total|The total amount of messages.|No Dimensions|
|OutboundTraffic|Yes|Outbound Traffic|Bytes|Total|The outbound traffic of service|No Dimensions|
+|ServerLoad|No|Server Load|Percent|Maximum|SignalR server load.|No Dimensions|
|SystemErrors|Yes|System Errors|Percent|Maximum|The percentage of system errors|No Dimensions|
|UserErrors|Yes|User Errors|Percent|Maximum|The percentage of user errors|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|ConnectionQuotaUtilization|Yes|Connection Quota Utilization|Percent|Maximum|The percentage of connection connected relative to connection quota.|No Dimensions|
|InboundTraffic|Yes|Inbound Traffic|Bytes|Total|The traffic originating from outside to inside of the service. It is aggregated by adding all the bytes of the traffic.|No Dimensions|
|OutboundTraffic|Yes|Outbound Traffic|Bytes|Total|The traffic originating from inside to outside of the service. It is aggregated by adding all the bytes of the traffic.|No Dimensions|
+|ServerLoad|No|Server Load|Percent|Maximum|SignalR server load.|No Dimensions|
|TotalConnectionCount|Yes|Connection Count|Count|Maximum|The number of user connections established to the service. It is aggregated by adding all the online connections.|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|cpu_percent|Yes|CPU percentage|Percent|Average|CPU percentage|No Dimensions|
|cpu_used|Yes|CPU used|Count|Average|CPU used. Applies to vCore-based databases.|No Dimensions|
|deadlock|Yes|Deadlocks|Count|Total|Deadlocks. Not applicable to data warehouses.|No Dimensions|
+|delta_num_of_bytes_read|Yes|Remote data reads|Bytes|Total|Remote data reads in bytes|No Dimensions|
+|delta_num_of_bytes_total|Yes|Total remote bytes read and written|Bytes|Total|Total remote bytes read and written by compute|No Dimensions|
+|delta_num_of_bytes_written|Yes|Remote log writes|Bytes|Total|Remote log writes in bytes|No Dimensions|
|diff_backup_size_bytes|Yes|Differential backup storage size|Bytes|Maximum|Cumulative differential backup storage size. Applies to vCore-based databases. Not applicable to Hyperscale databases.|No Dimensions|
|dtu_consumption_percent|Yes|DTU percentage|Percent|Average|DTU Percentage. Applies to DTU-based databases.|No Dimensions|
|dtu_limit|Yes|DTU Limit|Count|Average|DTU Limit. Applies to DTU-based databases.|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|Availability|Yes|Availability|Percent|Average|The percentage of availability for the storage service or the specified API operation. Availability is calculated by taking the TotalBillableRequests value and dividing it by the number of applicable requests, including those that produced unexpected errors. All unexpected errors result in reduced availability for the storage service or the specified API operation.|GeoType, ApiName, Authentication|
|Egress|Yes|Egress|Bytes|Total|The amount of egress data. This number includes egress to external client from Azure Storage as well as egress within Azure. As a result, this number does not reflect billable egress.|GeoType, ApiName, Authentication|
|Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication|
-|SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The average end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication|
-|SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different type of response.|ResponseType, GeoType, ApiName, Authentication|
+|SuccessE2ELatency|Yes|Success E2E Latency|MilliSeconds|Average|The average end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication|
+|SuccessServerLatency|Yes|Success Server Latency|MilliSeconds|Average|The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different type of response.|ResponseType, GeoType, ApiName, Authentication, TransactionType|
|UsedCapacity|Yes|Used capacity|Bytes|Average|The amount of storage used by the storage account. For standard storage accounts, it's the sum of capacity used by blob, table, file, and queue. For premium storage accounts and Blob storage accounts, it is the same as BlobCapacity or FileCapacity.|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|ApiConnectionRequests|Yes|Requests|Count|Total|API Connection Requests|HttpStatusCode, ClientIPAddress|
+|Requests|No|Requests|Count|Total|API Connection Requests|HttpStatusCode, ClientIPAddress|
++
+## Microsoft.Web/containerapps
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|Replicas|Yes|Replica Count|Count|Maximum|Number of replicas count of container app|revisionName, deploymentName|
+|Requests|Yes|Requests|Count|Total|Requests processed|revisionName, podName, statusCodeCategory, statusCode|
+|RestartCount|Yes|Replica Restart Count|Count|Maximum|Restart count of container app replicas|revisionName, podName|
+|RxBytes|Yes|Network In Bytes|Bytes|Total|Network received bytes|revisionName, podName|
+|TxBytes|Yes|Network Out Bytes|Bytes|Total|Network transmitted bytes|revisionName, podName|
+|UsageNanoCores|Yes|CPU Usage Nanocores|NanoCores|Average|CPU consumed by the container app, in nano cores. 1,000,000,000 nano cores = 1 core|revisionName, podName|
+|WorkingSetBytes|Yes|Memory Working Set Bytes|Bytes|Average|Container App working set memory used in bytes.|revisionName, podName|
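As a quick aid for the `UsageNanoCores` row above, the stated conversion (1,000,000,000 nanocores = 1 core) can be expressed as a small helper. The function name is illustrative only, not part of any Azure SDK:

```python
# Convert a UsageNanoCores metric reading into whole CPU cores.
# Per the table above, 1,000,000,000 nanocores = 1 core.
NANOCORES_PER_CORE = 1_000_000_000

def nanocores_to_cores(nanocores: int) -> float:
    """Return the number of CPU cores represented by a nanocore reading."""
    return nanocores / NANOCORES_PER_CORE

print(nanocores_to_cores(250_000_000))  # 0.25 (a quarter of one core)
```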
## Microsoft.Web/hostingEnvironments
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-categories.md
Title: Supported categories for Azure Monitor resource logs description: Understand the supported services and event schemas for Azure Monitor resource logs. Previously updated : 04/12/2022 Last updated : 06/01/2022
If you think something is missing, you can open a GitHub comment at the bottom o
|PrivilegeUse|PrivilegeUse|No|
|SystemSecurity|SystemSecurity|No|

+## microsoft.aadiam/tenants

|Category|Category Display Name|Costs To Export|
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
|AuditEvent|AuditEvent message log category.|No|
-|AuditEvent|AuditEvent message log category.|No|
-|ERR|Error message log category.|No|
-|ERR|Error message log category.|No|
-|INF|Informational message log category.|No|
-|INF|Informational message log category.|No|
|NotProcessed|Requests which could not be processed.|Yes|
|Operational|Operational message log category.|Yes|
-|WRN|Warning message log category.|Yes|
-|WRN|Warning message log category.|No|
## Microsoft.Automation/automationAccounts
If you think something is missing, you can open a GitHub comment at the bottom o
|CallDiagnostics|Call Diagnostics Logs|Yes|
|CallSummary|Call Summary Logs|Yes|
|ChatOperational|Operational Chat Logs|No|
+|EmailSendMailOperational|Email Service Send Mail Logs|Yes|
+|EmailStatusUpdateOperational|Email Service Delivery Status Update Logs|Yes|
+|EmailUserEngagementOperational|Email Service User Engagement Logs|Yes|
+|NetworkTraversalDiagnostics|Network Traversal Relay Diagnostic Logs|Yes|
|NetworkTraversalOperational|Operational Network Traversal Logs|Yes|
|SMSOperational|Operational SMS Logs|No|
|Usage|Usage Records|No|
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
-|AgentHealthStatus|AgentHealthStatus|No|
|AgentHealthStatus|AgentHealthStatus|Yes|
|Checkpoint|Checkpoint|Yes|
-|Checkpoint|Checkpoint|No|
-|Connection|Connection|No|
|Connection|Connection|Yes|
|Error|Error|Yes|
-|Error|Error|No|
-|HostRegistration|HostRegistration|No|
|HostRegistration|HostRegistration|Yes|
|Management|Management|Yes|
-|Management|Management|No|
|NetworkData|Network Data Logs|Yes|
|SessionHostManagement|Session Host Management Activity Logs|Yes|
If you think something is missing, you can open a GitHub comment at the bottom o
|AzurePolicyEvaluationDetails|Azure Policy Evaluation Details|Yes|
-## Microsoft.Kusto/Clusters
+## Microsoft.Kusto/clusters
|Category|Category Display Name|Costs To Export|
||||
|Command|Command|No|
-|FailedIngestion|Failed ingest operations|No|
+|FailedIngestion|Failed ingestion|No|
|IngestionBatching|Ingestion batching|No|
|Journal|Journal|Yes|
|Query|Query|No|
-|SucceededIngestion|Successful ingest operations|No|
+|SucceededIngestion|Succeeded ingestion|No|
|TableDetails|Table details|No|
|TableUsageStatistics|Table usage statistics|No|
-## Microsoft.Logic/integrationAccounts
+## microsoft.loadtestservice/loadtests
|Category|Category Display Name|Costs To Export|
||||
-|IntegrationAccountTrackingEvents|Integration Account track events|No|
+|OperationLogs|Azure Load Testing Operations|Yes|
## Microsoft.Logic/IntegrationAccounts
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
|AmlComputeClusterEvent|AmlComputeClusterEvent|No|
-|AmlComputeClusterEvent|AmlComputeClusterEvent|No|
|AmlComputeClusterNodeEvent|AmlComputeClusterNodeEvent|No|
|AmlComputeCpuGpuUtilization|AmlComputeCpuGpuUtilization|No|
-|AmlComputeCpuGpuUtilization|AmlComputeCpuGpuUtilization|No|
|AmlComputeJobEvent|AmlComputeJobEvent|No|
-|AmlComputeJobEvent|AmlComputeJobEvent|No|
-|AmlRunStatusChangedEvent|AmlRunStatusChangedEvent|No|
|AmlRunStatusChangedEvent|AmlRunStatusChangedEvent|No|
|ComputeInstanceEvent|ComputeInstanceEvent|Yes|
|DataLabelChangeEvent|DataLabelChangeEvent|Yes|
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
-|NSPInboundAccessAllowed|NSP Inbound Access Allowed.|Yes|
-|NSPInboundAccessDenied|NSP Inbound Access Denied.|Yes|
-|NSPOutboundAccessAllowed|NSP Outbound Access Allowed.|Yes|
-|NSPOutboundAccessDenied|NSP Outbound Access Denied.|Yes|
-|NSPOutboundAttempt|NSP Outbound Attempted.|Yes|
-|PrivateEndPointTraffic|Private Endpoint Traffic|Yes|
-|ResourceInboundAccessAllowed|Resource Inbound Access Allowed.|Yes|
-|ResourceInboundAccessDenied|Resource Inbound Access Denied|Yes|
-|ResourceOutboundAccessAllowed|Resource Outbound Access Allowed|Yes|
-|ResourceOutboundAccessDenied|Resource Outbound Access Denied|Yes|
+|NspIntraPerimeterInboundAllowed|Inbound access allowed within same perimeter.|Yes|
+|NspIntraPerimeterOutboundAllowed|Outbound attempted to same perimeter.|Yes|
+|NspPrivateInboundAllowed|Private endpoint traffic allowed.|Yes|
+|NspPublicInboundPerimeterRulesAllowed|Public inbound access allowed by NSP access rules.|Yes|
+|NspPublicInboundPerimeterRulesDenied|Public inbound access denied by NSP access rules.|Yes|
+|NspPublicInboundResourceRulesAllowed|Public inbound access allowed by PaaS resource rules.|Yes|
+|NspPublicInboundResourceRulesDenied|Public inbound access denied by PaaS resource rules.|Yes|
+|NspPublicOutboundPerimeterRulesAllowed|Public outbound access allowed by NSP access rules.|Yes|
+|NspPublicOutboundPerimeterRulesDenied|Public outbound access denied by NSP access rules.|Yes|
+|NspPublicOutboundResourceRulesAllowed|Public outbound access allowed by PaaS resource rules.|Yes|
+|NspPublicOutboundResourceRulesDenied|Public outbound access denied by PaaS resource rules|Yes|
## microsoft.network/p2svpngateways
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
|AirFlowTaskLogs|Air Flow Task Logs|Yes|
+|ElasticOperatorLogs|Elastic Operator Logs|Yes|
+|ElasticsearchLogs|Elasticsearch Logs|Yes|
## Microsoft.OpenLogisticsPlatform/Workspaces
If you think something is missing, you can open a GitHub comment at the bottom o
|OperationLogs|Operation Logs|No|
+## Microsoft.Security/antiMalwareSettings
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ScanResults|AntimalwareScanResults|Yes|
+## microsoft.securityinsights/settings

|Category|Category Display Name|Costs To Export|
azure-monitor Snapshot Collector Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-collector-release-notes.md
+
+ Title: Release Notes for Microsoft.ApplicationInsights.SnapshotCollector NuGet package - Application Insights
+description: Release notes for the Microsoft.ApplicationInsights.SnapshotCollector NuGet package used by the Application Insights Snapshot Debugger.
+Last updated: 11/10/2020
+# Release notes for Microsoft.ApplicationInsights.SnapshotCollector
+
+This article contains the release notes for the Microsoft.ApplicationInsights.SnapshotCollector NuGet package for .NET applications, which is used by the Application Insights Snapshot Debugger.
+
+[Learn more](./snapshot-debugger.md) about the Application Insights Snapshot Debugger for .NET applications.
+
+For bug reports and feedback, open an issue on GitHub at https://github.com/microsoft/ApplicationInsights-SnapshotCollector
++
+## Release notes
+
+## [1.4.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.3)
+A point release to address user-reported bugs.
+### Bug fixes
+- Fix [Hide the IDMS dependency from dependency tracker.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/17)
+- Fix [ArgumentException: telemetryProcessorType does not implement ITelemetryProcessor.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/19)
+<br>Snapshot Collector used via the SDK is not supported when the Interop feature is enabled. [See more unsupported scenarios.](./snapshot-debugger-troubleshoot.md#not-supported-scenarios)
+
+## [1.4.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.2)
+A point release to address a user-reported bug.
+### Bug fixes
+- Fix [ArgumentException: Delegates must be of the same type.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/16)
+
+## [1.4.1](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.1)
+A point release to revert a breaking change introduced in 1.4.0.
+### Bug fixes
+- Fix [Method not found in WebJobs](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/15)
+
+## [1.4.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.0)
+Addresses multiple improvements and adds support for Azure Active Directory (AAD) authentication for Application Insights ingestion.
+### Changes
+- Snapshot Collector package size reduced by 60%, from 10.34 MB to 4.11 MB.
+- Target netstandard2.0 only in Snapshot Collector.
+- Bump Application Insights SDK dependency to 2.15.0.
+- Add back MinidumpWithThreadInfo when writing dumps.
+- Add CompatibilityVersion to improve synchronization between Snapshot Collector agent and uploader on breaking changes.
+- Change SnapshotUploader LogFile naming algorithm to avoid excessive file I/O in App Service.
+- Add pid, role name, and process start time to uploaded blob metadata.
+- Use System.Diagnostics.Process where possible in Snapshot Collector and Snapshot Uploader.
+### New features
+- Add Azure Active Directory authentication to SnapshotCollector. Learn more about Azure AD authentication in Application Insights [here](../app/azure-ad-authentication.md).
+
+## [1.3.7.5](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.7.5)
+A point release to backport a fix from 1.4.0-pre.
+### Bug fixes
+- Fix [ObjectDisposedException on shutdown](https://github.com/microsoft/ApplicationInsights-dotnet/issues/2097).
+
+## [1.3.7.4](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.7.4)
+A point release to address a problem discovered in testing Azure App Service's codeless attach scenario.
+### Changes
+- The netcoreapp3.0 target now depends on Microsoft.ApplicationInsights.AspNetCore >= 2.1.1 (previously >= 2.1.2).
+
+## [1.3.7.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.7.3)
+A point release to address a couple of high-impact issues.
+### Bug fixes
+- Fixed PDB discovery in the wwwroot/bin folder, which was broken when we changed the symbol search algorithm in 1.3.6.
+- Fixed noisy ExtractWasCalledMultipleTimesException in telemetry.
+
+## [1.3.7](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.7)
+### Changes
+- The netcoreapp2.0 target of SnapshotCollector depends on Microsoft.ApplicationInsights.AspNetCore >= 2.1.1 (again). This reverts behavior to how it was before 1.3.5. We tried to upgrade it in 1.3.6, but it broke some Azure App Service scenarios.
+### New features
+- Snapshot Collector reads and parses the ConnectionString from the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable or from the TelemetryConfiguration. Primarily, this is used to set the endpoint for connecting to the Snapshot service. For more information, see the [Connection strings documentation](../app/sdk-connection-string.md).
+### Bug fixes
+- Switched to using HttpClient for all targets except net45 because WebRequest was failing in some environments due to an incompatible SecurityProtocol (requires TLS 1.2).
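The ConnectionString mentioned in the 1.3.7 notes is a set of semicolon-separated key=value pairs (for example `InstrumentationKey=...;IngestionEndpoint=...`). A rough Python sketch of that parsing, for illustration only — the collector's actual parser is internal .NET code:

```python
import os

def parse_connection_string(value: str) -> dict:
    """Split an Application Insights connection string into key/value parts."""
    parts = {}
    for segment in value.split(";"):
        if not segment.strip():
            continue  # tolerate empty segments and trailing semicolons
        key, _, val = segment.partition("=")
        parts[key.strip()] = val.strip()
    return parts

# The collector prefers the environment variable when it is set.
conn = os.environ.get("APPLICATIONINSIGHTS_CONNECTION_STRING", "")
print(parse_connection_string(conn))
```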
+
+## [1.3.6](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.6)
+### Changes
+- SnapshotCollector now depends on Microsoft.ApplicationInsights >= 2.5.1 for all target frameworks. This may be a breaking change if your application depends on an older version of the Microsoft.ApplicationInsights SDK.
+- Remove support for TLS 1.0 and 1.1 in Snapshot Uploader.
+- Period of PDB scans now defaults to 24 hours instead of 15 minutes. Configurable via PdbRescanInterval on SnapshotCollectorConfiguration.
+- PDB scan searches top-level folders only, instead of recursively. This may be a breaking change if your symbols are in subfolders of the binary folder.
+### New features
+- Log rotation in SnapshotUploader to avoid filling the logs folder with old files.
+- Deoptimization support (via ReJIT on attach) for .NET Core 3.0 applications.
+- Add symbols to NuGet package.
+- Set additional metadata when uploading minidumps.
+- Added an Initialized property to SnapshotCollectorTelemetryProcessor. It's a CancellationToken, which will be canceled when the Snapshot Collector is completely initialized and connected to the service endpoint.
+- Snapshots can now be captured for exceptions in dynamically generated methods. For example, the compiled expression trees generated by Entity Framework queries.
+### Bug fixes
+- AmbiguousMatchException loading Snapshot Collector due to Status Monitor.
+- GetSnapshotCollector extension method now searches all TelemetrySinks.
+- Don't start the Snapshot Uploader on unsupported platforms.
+- Handle InvalidOperationException when deoptimizing dynamic methods (for example, Entity Framework)
+
+## [1.3.5](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.5)
+- Add support for sovereign clouds (older versions won't work in sovereign clouds).
+- Adding the Snapshot Collector is now easier with AddSnapshotCollector(). More information can be found [here](./snapshot-debugger-app-service.md).
+- Use FISMA MD5 setting for verifying blob blocks. This avoids the default .NET MD5 crypto algorithm, which is unavailable when the OS is set to FIPS-compliant mode.
+- Ignore .NET Framework frames when deoptimizing function calls. This behavior can be controlled by the DeoptimizeIgnoredModules configuration setting.
+- Add `DeoptimizeMethodCount` configuration setting that allows deoptimization of more than one function call. More information here
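The FIPS note above (avoiding the default .NET MD5 algorithm when the OS is in FIPS-compliant mode) has an analogue in other runtimes. For example, Python 3.9+ lets you mark an MD5 digest as a non-security integrity check; this is only an analogy, not the collector's implementation:

```python
import hashlib

def block_checksum(block: bytes) -> str:
    """Integrity checksum for a blob block.

    usedforsecurity=False (Python 3.9+) declares the digest is not used for
    security purposes, so it remains available on FIPS-restricted systems.
    """
    return hashlib.md5(block, usedforsecurity=False).hexdigest()

print(block_checksum(b"hello"))  # 5d41402abc4b2a76b9719d911017c592
```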
+
+## [1.3.4](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.4)
+- Allow structured Instrumentation Keys.
+- Increase SnapshotUploader robustness - continue startup even if old uploader logs can't be moved.
+- Re-enabled reporting additional telemetry when SnapshotUploader.exe exits immediately (was disabled in 1.3.3).
+- Simplify internal telemetry.
+- _Experimental feature_: Snappoint collection plans: Add "snapshotOnFirstOccurence". More information available [here](https://gist.github.com/alexaloni/5b4d069d17de0dabe384ea30e3f21dfe).
+
+## [1.3.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.3)
+- Fixed bug that was causing SnapshotUploader.exe to stop responding and not upload snapshots for .NET Core apps.
+
+## [1.3.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.2)
+- _Experimental feature_: Snappoint collection plans. More information available [here](https://gist.github.com/alexaloni/5b4d069d17de0dabe384ea30e3f21dfe).
+- SnapshotUploader.exe will exit when the runtime unloads the AppDomain from which SnapshotCollector is loaded, instead of waiting for the process to exit. This improves the collector reliability when hosted in IIS.
+- Add configuration to allow multiple SnapshotCollector instances that are using the same Instrumentation Key to share the same SnapshotUploader process: ShareUploaderProcess (defaults to `true`).
+- Report additional telemetry when SnapshotUploader.exe exits immediately.
+- Reduced the number of support files SnapshotUploader.exe needs to write to disk.
+
+## [1.3.1](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.1)
+- Remove support for collecting snapshots with the RtlCloneUserProcess API and only support PssCaptureSnapshots API.
+- Increase the default limit on how many snapshots can be captured in 10 minutes from 1 to 3.
+- Allow SnapshotUploader.exe to negotiate TLS 1.1 and 1.2.
+- Report additional telemetry when SnapshotUploader logs a warning or an error.
+- Stop taking snapshots when the backend service reports the daily quota was reached (50 snapshots per day).
+- Add an extra check in SnapshotUploader.exe to prevent two instances from running at the same time.
+
+## [1.3.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.0)
+### Changes
+- For applications targeting .NET Framework, Snapshot Collector now depends on Microsoft.ApplicationInsights version 2.3.0 or above.
+It used to be 2.2.0 or above.
+We believe this won't be an issue for most applications, but let us know if this change prevents you from using the latest Snapshot Collector.
+- Use exponential back-off delays in the Snapshot Uploader when retrying failed uploads.
+- Use ServerTelemetryChannel (if available) for more reliable reporting of telemetry.
+- Use 'SdkInternalOperationsMonitor' on the initial connection to the Snapshot Debugger service so that it's ignored by dependency tracking.
+- Improve telemetry around initial connection to the Snapshot Debugger service.
+- Report additional telemetry for:
+ - Azure App Service version.
+ - Azure compute instances.
+ - Containers.
+ - Azure Function app.
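The exponential back-off retry behavior described in the changes above can be sketched as follows; the base delay, cap, and jitter here are illustrative assumptions, not the uploader's actual constants:

```python
import random

def backoff_delays(max_retries: int = 5, base: float = 1.0, cap: float = 60.0):
    """Yield exponentially growing retry delays with random jitter."""
    for attempt in range(max_retries):
        delay = min(cap, base * (2 ** attempt))  # 1s, 2s, 4s, 8s, ... up to cap
        yield delay + random.uniform(0, delay / 2)  # jitter spreads out retries

for d in backoff_delays(4):
    print(f"retry after {d:.1f}s")
```

Jitter keeps many failed uploaders from retrying in lockstep against the same endpoint.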
+### Bug fixes
+- When the problem counter reset interval is set to 24 days, interpret that as 24 hours.
+- Fixed a bug where the Snapshot Uploader would stop processing new snapshots if there was an exception while disposing a snapshot.
+
+## [1.2.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.2.3)
+- Fix strong-name signing with Snapshot Uploader binaries.
+
+## [1.2.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.2.2)
+### Changes
+- The files needed for SnapshotUploader(64).exe are now embedded as resources in the main DLL. That means the SnapshotCollectorFiles folder is no longer created, simplifying build and deployment and reducing clutter in Solution Explorer. Take care when upgrading to review the changes in your `.csproj` file. The `Microsoft.ApplicationInsights.SnapshotCollector.targets` file is no longer needed.
+- Telemetry is logged to your Application Insights resource even if ProvideAnonymousTelemetry is set to false. This is so we can implement a health check feature in the Azure portal. ProvideAnonymousTelemetry affects only the telemetry sent to Microsoft for product support and improvement.
+- When the TempFolder or ShadowCopyFolder are redirected to environment variables, keep the collector idle until those environment variables are set.
+- For applications that connect to the Internet via a proxy server, Snapshot Collector will now autodetect any proxy settings and pass them on to SnapshotUploader.exe.
+- Lower the priority of the SnapshotUploader process (where possible). This priority can be overridden via the IsLowPrioritySnapshotUploader option.
+- Added a GetSnapshotCollector extension method on TelemetryConfiguration for scenarios where you want to configure the Snapshot Collector programmatically.
+- Set the Application Insights SDK version (instead of the application version) in customer-facing telemetry.
+- Send the first heartbeat event after two minutes.
+### Bug fixes
+- Fix NullReferenceException when exceptions have null or immutable Data dictionaries.
+- In the uploader, retry PDB matching a few times if we get a sharing violation.
+- Fix duplicate telemetry when more than one thread calls into the telemetry pipeline at startup.
+
+## [1.2.1](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.2.1)
+### Changes
+- XML Doc comment files are now included in the NuGet package.
+- Added an ExcludeFromSnapshotting extension method on `System.Exception` for scenarios where you know you have a noisy exception and want to avoid creating snapshots for it.
+- Added an IsEnabledWhenProfiling configuration property, defaults to true. This is a change from previous versions where snapshot creation was temporarily disabled if the Application Insights Profiler was performing a detailed collection. The old behavior can be recovered by setting this property to false.
+### Bug fixes
+- Sign SnapshotUploader64.exe properly.
+- Protect against double-initialization of the telemetry processor.
+- Prevent double logging of telemetry in apps with multiple pipelines.
+- Fix a bug with the expiration time of a collection plan, which could prevent snapshots after 24 hours.
+
+## [1.2.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.2.0)
+The biggest change in this version (hence the move to a new minor version number) is a rewrite of the snapshot creation and handling pipeline. In previous versions, this functionality was implemented in native code (ProductionBreakpoints*.dll and SnapshotHolder*.exe). The new implementation is all managed code with P/Invokes. For this first version using the new pipeline, we haven't strayed far from the original behavior. The new implementation allows for better error reporting and sets us up for future improvements.
+
+### Other changes in this version
+- MinidumpUploader.exe has been renamed to SnapshotUploader.exe (or SnapshotUploader64.exe).
+- Added timing telemetry to DeOptimize/ReOptimize requests.
+- Added gzip compression for minidump uploads.
+- Fixed a problem where PDBs were locked preventing site upgrade.
+- Log the original folder name (SnapshotCollectorFiles) when shadow-copying.
+- Adjust memory limits for 64-bit processes to prevent site restarts due to OOM.
+- Fix an issue where snapshots were still collected even after disabling.
+- Log heartbeat events to customer's AI resource.
+- Improve snapshot speed by removing "Source" from Problem ID.
+
+## [1.1.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.1.2)
+### Changes
+Augmented usage telemetry
+- Detect and report .NET version and OS
+- Detect and report additional Azure Environments (Cloud Service, Service Fabric)
+- Record and report exception metrics (number of 1st chance exceptions and number of TrackException calls) in Heartbeat telemetry.
+### Bug fixes
+- Correct handling of SqlException where the inner exception (Win32Exception) isn't thrown.
+- Trim trailing spaces on symbol folders, which caused an incorrect parse of command-line arguments to the MinidumpUploader.
+- Prevent infinite retry of failed connections to the Snapshot Debugger agent's endpoint.
+
+## [1.1.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.1.0)
+### Changes
+- Added host memory protection. This feature reduces the impact on the host machine's memory.
+- Improve the Azure portal snapshot viewing experience.
azure-monitor Snapshot Debugger App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-app-service.md
+
+ Title: Enable Snapshot Debugger for .NET apps in Azure App Service | Microsoft Docs
+description: Enable Snapshot Debugger for .NET apps in Azure App Service
+Last updated: 03/26/2019
+# Enable Snapshot Debugger for .NET apps in Azure App Service
+
+Snapshot Debugger currently supports ASP.NET and ASP.NET Core apps that are running on Azure App Service on Windows service plans.
+
+We recommend you run your application on the Basic service tier, or higher, when using Snapshot Debugger.
+
+For most applications, the Free and Shared service tiers don't have enough memory or disk space to save snapshots.
+
+## <a id="installation"></a> Enable Snapshot Debugger
+To enable Snapshot Debugger for an app, follow the instructions below.
+
+If you're running a different type of Azure service, here are instructions for enabling Snapshot Debugger on other supported platforms:
+* [Azure Function](snapshot-debugger-function-app.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Cloud Services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Service Fabric services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Virtual Machines and virtual machine scale sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [On-premises virtual or physical machines](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+
+> [!NOTE]
+> If you're using a preview version of .NET Core, or your application references Application Insights SDK, directly or indirectly via a dependent assembly, follow the instructions for [Enable Snapshot Debugger for other environments](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) to include the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package with the application, and then complete the rest of the instructions below.
+>
+> Codeless installation of Application Insights Snapshot Debugger follows the .NET Core support policy.
+> For more information about supported runtimes, see [.NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
+
+Snapshot Debugger is pre-installed as part of the App Services runtime, but you need to turn it on to get snapshots for your App Service app.
+
+Once you've deployed an app, follow the steps below to enable Snapshot Debugger:
+
+1. Navigate to your App Service in the Azure portal.
+2. Go to the **Settings > Application Insights** page.
+
+ ![Enable App Insights on App Services portal](./media/snapshot-debugger/application-insights-app-services.png)
+
+3. Either follow the instructions on the page to create a new resource or select an existing App Insights resource to monitor your app. Also make sure both switches for Snapshot Debugger are **On**.
+
+ ![Add App Insights site extension][Enablement UI]
+
+4. Snapshot Debugger is now enabled using an App Services App Setting.
+
+ ![App Setting for Snapshot Debugger][snapshot-debugger-app-setting]
+
+## Enable Snapshot Debugger for other clouds
+
+Currently, the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide). The modification is made through the Application Insights connection string:
+
+|Connection String Property | US Government Cloud | China Cloud |
+|||-|
+|SnapshotEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
+
+For more information about other connection overrides, see [Application Insights documentation](../app/sdk-connection-string.md?tabs=net#connection-string-with-explicit-endpoint-overrides).
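+
+For example, a connection string with the Snapshot Debugger endpoint override for the US Government cloud might look like the following sketch (the instrumentation key is a placeholder, and a real connection string typically carries additional endpoint overrides for your cloud):
+
+```
+InstrumentationKey=00000000-0000-0000-0000-000000000000;SnapshotEndpoint=https://snapshot.monitor.azure.us
+```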
+
+## Enable Azure Active Directory authentication for snapshot ingestion
+
+Application Insights Snapshot Debugger supports Azure AD authentication for snapshot ingestion. When it's enabled, all snapshots of your application must be ingested over authenticated requests, so your application must provide the required application settings to the Snapshot Debugger agent.
+
+Currently, Snapshot Debugger supports Azure AD authentication only when you reference and configure Azure AD using the Application Insights SDK in your application.
+
+The following steps are required to enable Azure AD for snapshot ingestion:
+1. Create and add the managed identity you want to use to authenticate against your Application Insights resource to your App Service.
+
+ a. For System-Assigned Managed identity, see the following [documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-system-assigned-identity)
+
+ b. For User-Assigned Managed identity, see the following [documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-user-assigned-identity)
+
+2. Configure and enable Azure AD in your Application Insights resource. For more information, see the following [documentation](../app/azure-ad-authentication.md?tabs=net#configuring-and-enabling-azure-ad-based-authentication)
+3. Add the following application setting to tell the Snapshot Debugger agent which managed identity to use:
+
+For System-Assigned Identity:
+
+|App Setting | Value |
+||-|
+|APPLICATIONINSIGHTS_AUTHENTICATION_STRING | Authorization=AAD |
+
+For User-Assigned Identity:
+
+|App Setting | Value |
+||-|
+|APPLICATIONINSIGHTS_AUTHENTICATION_STRING | Authorization=AAD;ClientId={Client id of the User-Assigned Identity} |
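+
+For example, in the App Service **Configuration** blade's **Advanced edit** JSON view, the system-assigned case could be sketched as follows (the `slotSetting` value here is an illustrative assumption):
+
+```json
+[
+  {
+    "name": "APPLICATIONINSIGHTS_AUTHENTICATION_STRING",
+    "value": "Authorization=AAD",
+    "slotSetting": false
+  }
+]
+```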
+
+## Disable Snapshot Debugger
+
+Follow the same steps as for **Enable Snapshot Debugger**, but switch both switches for Snapshot Debugger to **Off**.
+
+We recommend you have Snapshot Debugger enabled on all your apps to ease diagnostics of application exceptions.
+
+## Azure Resource Manager template
+
+For an Azure App Service, you can set app settings within the Azure Resource Manager template to enable Snapshot Debugger and Profiler, as shown in the following template snippet:
+
+```json
+{
+ "apiVersion": "2015-08-01",
+ "name": "[parameters('webSiteName')]",
+ "type": "Microsoft.Web/sites",
+ "location": "[resourceGroup().location]",
+ "dependsOn": [
+ "[variables('hostingPlanName')]"
+ ],
+ "tags": {
+ "[concat('hidden-related:', resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName')))]": "empty",
+ "displayName": "Website"
+ },
+ "properties": {
+ "name": "[parameters('webSiteName')]",
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]"
+ },
+ "resources": [
+ {
+ "apiVersion": "2015-08-01",
+ "name": "appsettings",
+ "type": "config",
+ "dependsOn": [
+ "[parameters('webSiteName')]",
+ "[concat('AppInsights', parameters('webSiteName'))]"
+ ],
+ "properties": {
+ "APPINSIGHTS_INSTRUMENTATIONKEY": "[reference(resourceId('Microsoft.Insights/components', concat('AppInsights', parameters('webSiteName'))), '2014-04-01').InstrumentationKey]",
+ "APPINSIGHTS_PROFILERFEATURE_VERSION": "1.0.0",
+ "APPINSIGHTS_SNAPSHOTFEATURE_VERSION": "1.0.0",
+ "DiagnosticServices_EXTENSION_VERSION": "~3",
+ "ApplicationInsightsAgent_EXTENSION_VERSION": "~2"
+ }
+ }
+ ]
+},
+```
+
+## Unsupported scenarios
+Snapshot Collector isn't supported in the following scenarios:
+
+|Scenario | Side Effects | Recommendation |
+||--|-|
+|When using the Snapshot Collector SDK in your application directly (.csproj) and you have enabled the advanced option "Interop".| The local Application Insights SDK (including Snapshot Collector telemetry) will be lost, so no snapshots will be available.<br /><br />Your application could crash at startup with `System.ArgumentException: telemetryProcessorType does not implement ITelemetryProcessor.`<br /><br />For more information about the Application Insights feature "Interop", see the [documentation](../app/azure-web-apps-net-core.md#troubleshooting). | If you're using the advanced option "Interop", use the codeless Snapshot Collector injection (enabled through the Azure portal) |
+
+## Next steps
+
+- Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.
+- See [snapshots](snapshot-debugger.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.
+- For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md?toc=/azure/azure-monitor/toc.json).
+
+[Enablement UI]: ./media/snapshot-debugger/enablement-ui.png
+[snapshot-debugger-app-setting]:./media/snapshot-debugger/snapshot-debugger-app-setting.png
azure-monitor Snapshot Debugger Function App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-function-app.md
+
+ Title: Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions | Microsoft Docs
+description: Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions
+ Last updated: 12/18/2020
+# Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions
+
+Snapshot Debugger currently works for ASP.NET and ASP.NET Core apps that are running on Azure Functions on Windows Service Plans.
+
+We recommend you run your application on the Basic service tier or higher when using Snapshot Debugger.
+
+For most applications, the Free and Shared service tiers don't have enough memory or disk space to save snapshots.
+
+## Prerequisites
+
+* [Enable Application Insights monitoring in your Function App](../../azure-functions/configure-monitoring.md#add-to-an-existing-function-app)
+
+## Enable Snapshot Debugger
+
+If you're running a different type of Azure service, here are instructions for enabling Snapshot Debugger on other supported platforms:
+* [Azure App Service](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Cloud Services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Service Fabric services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Virtual Machines and virtual machine scale sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [On-premises virtual or physical machines](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+
+To enable Snapshot Debugger in your Function app, add the `snapshotConfiguration` property to your `host.json` file as shown below, and then redeploy your function app.
+
+```json
+{
+ "version": "2.0",
+ "logging": {
+ "applicationInsights": {
+ "snapshotConfiguration": {
+ "isEnabled": true
+ }
+ }
+ }
+}
+```
+
+Snapshot Debugger is pre-installed as part of the Azure Functions runtime, but it's disabled by default.
+
+Because Snapshot Debugger is included in the Azure Functions runtime, you don't need to add extra NuGet packages or application settings.
+
+For reference, here's how the `.csproj`, function class, and `host.json` files look for a simple .NET Core Function app after Snapshot Debugger is enabled.
+
+Project csproj
+
+```xml
+<Project Sdk="Microsoft.NET.Sdk">
+<PropertyGroup>
+ <TargetFramework>netcoreapp2.1</TargetFramework>
+ <AzureFunctionsVersion>v2</AzureFunctionsVersion>
+</PropertyGroup>
+<ItemGroup>
+ <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="1.0.31" />
+</ItemGroup>
+<ItemGroup>
+ <None Update="host.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ <None Update="local.settings.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ <CopyToPublishDirectory>Never</CopyToPublishDirectory>
+ </None>
+</ItemGroup>
+</Project>
+```
+
+Function class
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+using Microsoft.AspNetCore.Http;
+using Microsoft.Extensions.Logging;
+
+namespace SnapshotCollectorAzureFunction
+{
+ public static class ExceptionFunction
+ {
+ [FunctionName("ExceptionFunction")]
+ public static Task<IActionResult> Run(
+ [HttpTrigger(AuthorizationLevel.Function, "get", Route = null)] HttpRequest req,
+ ILogger log)
+ {
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ throw new NotImplementedException("Dummy");
+ }
+ }
+}
+```
+
+Host file
+
+```json
+{
+ "version": "2.0",
+ "logging": {
+ "applicationInsights": {
+ "samplingExcludedTypes": "Request",
+ "samplingSettings": {
+ "isEnabled": true
+ },
+ "snapshotConfiguration": {
+ "isEnabled": true
+ }
+ }
+ }
+}
+```
+
+## Enable Snapshot Debugger for other clouds
+
+Currently the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide).
+
+Below is an example of the `host.json` updated with the US Government Cloud agent endpoint:
+```json
+{
+ "version": "2.0",
+ "logging": {
+ "applicationInsights": {
+ "samplingExcludedTypes": "Request",
+ "samplingSettings": {
+ "isEnabled": true
+ },
+ "snapshotConfiguration": {
+ "isEnabled": true,
+ "agentEndpoint": "https://snapshot.monitor.azure.us"
+ }
+ }
+ }
+}
+```
+
+Below are the supported overrides of the Snapshot Debugger agent endpoint:
+
+|Property | US Government Cloud | China Cloud |
+|||-|
+|AgentEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
+
+## Disable Snapshot Debugger
+
+To disable Snapshot Debugger in your Function app, update your `host.json` file and set the `snapshotConfiguration.isEnabled` property to `false`.
+
+```json
+{
+ "version": "2.0",
+ "logging": {
+ "applicationInsights": {
+ "snapshotConfiguration": {
+ "isEnabled": false
+ }
+ }
+ }
+}
+```
+
+We recommend you have Snapshot Debugger enabled on all your apps to ease diagnostics of application exceptions.
+
+## Next steps
+
+- Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.
+- [View snapshots](snapshot-debugger.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.
+- Customize Snapshot Debugger configuration based on your use-case on your Function app. For more info, see [snapshot configuration in host.json](../../azure-functions/functions-host-json.md#applicationinsightssnapshotconfiguration).
+- For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md?toc=/azure/azure-monitor/toc.json).
azure-monitor Snapshot Debugger Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-troubleshoot.md
+
+ Title: Troubleshoot Azure Application Insights Snapshot Debugger
+description: This article presents troubleshooting steps and information to help developers enable and use Application Insights Snapshot Debugger.
+ Last updated: 03/07/2019
+# <a id="troubleshooting"></a> Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots
+If you enabled Application Insights Snapshot Debugger for your application, but aren't seeing snapshots for exceptions, you can use these instructions to troubleshoot.
+
+There can be many different reasons why snapshots aren't generated. You can start by running the snapshot health check to identify some of the possible common causes.
+
+## Unsupported scenarios
+Snapshot Collector isn't supported in the following scenarios:
+
+|Scenario | Side Effects | Recommendation |
+||--|-|
+|When using the Snapshot Collector SDK in your application directly (.csproj) and you have enabled the advanced option "Interop".| The local Application Insights SDK (including Snapshot Collector telemetry) will be lost, so no snapshots will be available.<br /><br />Your application could crash at startup with `System.ArgumentException: telemetryProcessorType does not implement ITelemetryProcessor.`<br /><br />For more information about the Application Insights feature "Interop", see the [documentation](../app/azure-web-apps-net-core.md#troubleshooting). | If you're using the advanced option "Interop", use the codeless Snapshot Collector injection (enabled through the Azure portal) |
+
+## Make sure you're using the appropriate Snapshot Debugger Endpoint
+
+Currently the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide).
+
+For App Service and applications using the Application Insights SDK, you have to update the connection string using the supported overrides for Snapshot Debugger as defined below:
+
+|Connection String Property | US Government Cloud | China Cloud |
+|||-|
+|SnapshotEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
+
+For more information about other connection overrides, see [Application Insights documentation](../app/sdk-connection-string.md?tabs=net#connection-string-with-explicit-endpoint-overrides).
+
+For Function Apps, update the `host.json` file using the supported overrides below:
+
+|Property | US Government Cloud | China Cloud |
+|||-|
+|AgentEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
+
+Below is an example of the `host.json` updated with the US Government Cloud agent endpoint:
+```json
+{
+ "version": "2.0",
+ "logging": {
+ "applicationInsights": {
+ "samplingExcludedTypes": "Request",
+ "samplingSettings": {
+ "isEnabled": true
+ },
+ "snapshotConfiguration": {
+ "isEnabled": true,
+ "agentEndpoint": "https://snapshot.monitor.azure.us"
+ }
+ }
+ }
+}
+```
+
+## Use the snapshot health check
+Several common problems can prevent the Open Debug Snapshot button from showing up: using an outdated Snapshot Collector, reaching the daily upload limit, or a snapshot that's simply taking a long time to upload. Use the Snapshot Health Check to troubleshoot these common problems.
+
+There's a link in the exception pane of the end-to-end trace view that takes you to the Snapshot Health Check.
+
+![Enter snapshot health check](./media/snapshot-debugger/enter-snapshot-health-check.png)
+
+The interactive, chat-like interface looks for common problems and guides you to fix them.
+
+![Health Check](./media/snapshot-debugger/health-check.png)
+
+If that doesn't solve the problem, then refer to the following manual troubleshooting steps.
+
+## Verify the instrumentation key
+
+Make sure you're using the correct instrumentation key in your published application. Usually, the instrumentation key is read from the ApplicationInsights.config file. Verify the value is the same as the instrumentation key for the Application Insights resource that you see in the portal.
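+
+For example, in a classic ASP.NET application the key typically appears in ApplicationInsights.config similar to the following sketch (the key shown is a placeholder):
+
+```xml
+<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
+  <!-- Verify this value matches the instrumentation key of the Application Insights resource in the portal. -->
+  <InstrumentationKey>00000000-0000-0000-0000-000000000000</InstrumentationKey>
+</ApplicationInsights>
+```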
+
+## <a id="SSL"></a>Check TLS/SSL client settings (ASP.NET)
+
+If you have an ASP.NET application that is hosted in Azure App Service or in IIS on a virtual machine, it could fail to connect to the Snapshot Debugger service because of a missing SSL security protocol.
+
+[The Snapshot Debugger endpoint requires TLS version 1.2](snapshot-debugger-upgrade.md?toc=/azure/azure-monitor/toc.json). The set of SSL security protocols is one of the quirks enabled by the httpRuntime targetFramework value in the system.web section of web.config.
+If the httpRuntime targetFramework is 4.5.2 or lower, then TLS 1.2 isn't included by default.
+
+> [!NOTE]
+> The httpRuntime targetFramework value is independent of the target framework used when building your application.
+
+To check the setting, open your web.config file and find the system.web section. Ensure that the `targetFramework` for `httpRuntime` is set to 4.6 or above.
+
+ ```xml
+ <system.web>
+ ...
+ <httpRuntime targetFramework="4.7.2" />
+ ...
+ </system.web>
+ ```
+
+> [!NOTE]
+> Modifying the httpRuntime targetFramework value changes the runtime quirks applied to your application and can cause other, subtle behavior changes. Be sure to test your application thoroughly after making this change. For a full list of compatibility changes, see [Retargeting changes](/dotnet/framework/migration-guide/application-compatibility#retargeting-changes).
+
+> [!NOTE]
+> If the targetFramework is 4.7 or above then Windows determines the available protocols. In Azure App Service, TLS 1.2 is available. However, if you are using your own virtual machine, you may need to enable TLS 1.2 in the OS.
+
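+If you do need to enable TLS 1.2 on your own virtual machine, one common approach is to enable it for SCHANNEL and for the .NET Framework through the registry. The following keys are a sketch only; confirm them against current Windows TLS guidance before applying them:
+
+```
+HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client
+    Enabled           = 1 (DWORD)
+    DisabledByDefault = 0 (DWORD)
+
+HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319
+    SchUseStrongCrypto = 1 (DWORD)
+```
+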
+## Preview Versions of .NET Core
+If you're using a preview version of .NET Core or your application references Application Insights SDK, directly or indirectly via a dependent assembly, follow the instructions for [Enable Snapshot Debugger for other environments](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json).
+
+## Check the Diagnostic Services site extension's Status Page
+If Snapshot Debugger was enabled through the [Application Insights pane](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json) in the portal, it was enabled by the Diagnostic Services site extension.
+
+> [!NOTE]
+> Codeless installation of Application Insights Snapshot Debugger follows the .NET Core support policy.
+> For more information about supported runtimes, see [.NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
+
+You can check the Status Page of this extension by going to the following url:
+`https://{site-name}.scm.azurewebsites.net/DiagnosticServices`
+
+> [!NOTE]
+> The domain of the Status Page link will vary depending on the cloud.
+> This domain will be the same as the Kudu management site for App Service.
+
+This Status Page shows the installation state of the Profiler and Snapshot Collector agents. If there was an unexpected error, it's displayed along with steps to fix it.
+
+You can use the Kudu management site for App Service to get the base url of this Status Page:
+1. Open your App Service application in the Azure portal.
+2. Select **Advanced Tools**, or search for **Kudu**.
+3. Select **Go**.
+4. Once you're on the Kudu management site, **append `/DiagnosticServices` to the URL and press Enter**.
+ It will end like this: `https://<kudu-url>/DiagnosticServices`
+
+## Upgrade to the latest version of the NuGet package
+Based on how Snapshot Debugger was enabled, see the following options:
+
+* If Snapshot Debugger was enabled through the [Application Insights pane in the portal](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json), then your application should already be running the latest NuGet package.
+
+* If Snapshot Debugger was enabled by including the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package, use Visual Studio's NuGet Package Manager to make sure you're using the latest version of Microsoft.ApplicationInsights.SnapshotCollector.
+
+For the latest updates and bug fixes [consult the release notes](./snapshot-collector-release-notes.md).
+
+## Check the uploader logs
+
+After a snapshot is created, a minidump file (.dmp) is created on disk. A separate uploader process creates that minidump file and uploads it, along with any associated PDBs, to Application Insights Snapshot Debugger storage. After the minidump has uploaded successfully, it's deleted from disk. The log files for the uploader process are kept on disk. In an App Service environment, you can find these logs in `D:\Home\LogFiles`. Use the Kudu management site for App Service to find these log files.
+
+1. Open your App Service application in the Azure portal.
+2. Select **Advanced Tools**, or search for **Kudu**.
+3. Select **Go**.
+4. In the **Debug console** drop-down list box, select **CMD**.
+5. Select **LogFiles**.
+
+You should see at least one file with a name that begins with `Uploader_` or `SnapshotUploader_` and a `.log` extension. Select the appropriate icon to download any log files or open them in a browser.
+The file name includes a unique suffix that identifies the App Service instance. If your App Service instance is hosted on more than one machine, there are separate log files for each machine. When the uploader detects a new minidump file, it's recorded in the log file. Here's an example of a successful snapshot and upload:
+
+```
+SnapshotUploader.exe Information: 0 : Received Fork request ID 139e411a23934dc0b9ea08a626db16c5 from process 6368 (Low pri)
+ DateTime=2018-03-09T01:42:41.8571711Z
+SnapshotUploader.exe Information: 0 : Creating minidump from Fork request ID 139e411a23934dc0b9ea08a626db16c5 from process 6368 (Low pri)
+ DateTime=2018-03-09T01:42:41.8571711Z
+SnapshotUploader.exe Information: 0 : Dump placeholder file created: 139e411a23934dc0b9ea08a626db16c5.dm_
+ DateTime=2018-03-09T01:42:41.8728496Z
+SnapshotUploader.exe Information: 0 : Dump available 139e411a23934dc0b9ea08a626db16c5.dmp
+ DateTime=2018-03-09T01:42:45.7525022Z
+SnapshotUploader.exe Information: 0 : Successfully wrote minidump to D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\139e411a23934dc0b9ea08a626db16c5.dmp
+ DateTime=2018-03-09T01:42:45.7681360Z
+SnapshotUploader.exe Information: 0 : Uploading D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\139e411a23934dc0b9ea08a626db16c5.dmp, 214.42 MB (uncompressed)
+ DateTime=2018-03-09T01:42:45.7681360Z
+SnapshotUploader.exe Information: 0 : Upload successful. Compressed size 86.56 MB
+ DateTime=2018-03-09T01:42:59.6184651Z
+SnapshotUploader.exe Information: 0 : Extracting PDB info from D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\139e411a23934dc0b9ea08a626db16c5.dmp.
+ DateTime=2018-03-09T01:42:59.6184651Z
+SnapshotUploader.exe Information: 0 : Matched 2 PDB(s) with local files.
+ DateTime=2018-03-09T01:42:59.6809606Z
+SnapshotUploader.exe Information: 0 : Stamp does not want any of our matched PDBs.
+ DateTime=2018-03-09T01:42:59.8059929Z
+SnapshotUploader.exe Information: 0 : Deleted D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\139e411a23934dc0b9ea08a626db16c5.dmp
+ DateTime=2018-03-09T01:42:59.8530649Z
+```
+
+> [!NOTE]
+> The example above is from version 1.2.0 of the Microsoft.ApplicationInsights.SnapshotCollector NuGet package. In earlier versions, the uploader process is called `MinidumpUploader.exe` and the log is less detailed.
+
+In the previous example, the instrumentation key is `c12a605e73c44346a984e00000000000`. This value should match the instrumentation key for your application.
+The minidump is associated with a snapshot with the ID `139e411a23934dc0b9ea08a626db16c5`. You can use this ID later to locate the associated exception record in Application Insights Analytics.
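+
+For example, a query along these lines in Application Insights Analytics finds the exception record tagged with a given snapshot ID via the `ai.snapshot.id` custom property (the ID below is the one from the sample log above):
+
+```kusto
+exceptions
+| where customDimensions["ai.snapshot.id"] == "139e411a23934dc0b9ea08a626db16c5"
+```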
+
+The uploader scans for new PDBs about once every 15 minutes. Here's an example:
+
+```
+SnapshotUploader.exe Information: 0 : PDB rescan requested.
+ DateTime=2018-03-09T01:47:19.4457768Z
+SnapshotUploader.exe Information: 0 : Scanning D:\home\site\wwwroot for local PDBs.
+ DateTime=2018-03-09T01:47:19.4457768Z
+SnapshotUploader.exe Information: 0 : Local PDB scan complete. Found 2 PDB(s).
+ DateTime=2018-03-09T01:47:19.4614027Z
+SnapshotUploader.exe Information: 0 : Deleted PDB scan marker : D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\6368.pdbscan
+ DateTime=2018-03-09T01:47:19.4614027Z
+```
+
+For applications that _aren't_ hosted in App Service, the uploader logs are in the same folder as the minidumps: `%TEMP%\Dumps\<ikey>` (where `<ikey>` is your instrumentation key).
+
+## Troubleshooting Cloud Services
+In Cloud Services, the default temporary folder could be too small to hold the minidump files, leading to lost snapshots.
+
+The space needed depends on the total working set of your application and the number of concurrent snapshots.
+
+The working set of a 32-bit ASP.NET web role is typically between 200 MB and 500 MB. Allow for at least two concurrent snapshots.
+
+For example, if your application uses 1 GB of total working set, you should make sure there is at least 2 GB of disk space to store snapshots.
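+
+As a back-of-the-envelope check, the space needed is roughly the working set multiplied by the number of concurrent snapshots. A minimal sketch (the function name and default value are illustrative, not part of any SDK):
+
+```python
+def required_snapshot_storage_mb(working_set_mb, concurrent_snapshots=2):
+    """Estimate the disk space (in MB) needed to hold minidumps for concurrent snapshots."""
+    return working_set_mb * concurrent_snapshots
+
+# A 1 GB (1024 MB) working set with two concurrent snapshots needs about 2 GB of disk.
+print(required_snapshot_storage_mb(1024))
+```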
+
+Follow these steps to configure your Cloud Service role with a dedicated local resource for snapshots.
+
+1. Add a new local resource to your Cloud Service by editing the Cloud Service definition (.csdef) file. The following example defines a resource called `SnapshotStore` with a size of 5 GB.
+ ```xml
+ <LocalResources>
+ <LocalStorage name="SnapshotStore" cleanOnRoleRecycle="false" sizeInMB="5120" />
+ </LocalResources>
+ ```
+
+2. Modify your role's startup code to add an environment variable that points to the `SnapshotStore` local resource. For Worker Roles, the code should be added to your role's `OnStart` method:
+ ```csharp
+ public override bool OnStart()
+ {
+ Environment.SetEnvironmentVariable("SNAPSHOTSTORE", RoleEnvironment.GetLocalResource("SnapshotStore").RootPath);
+ return base.OnStart();
+ }
+ ```
+ For Web Roles (ASP.NET), the code should be added to your web application's `Application_Start` method:
+ ```csharp
+ using Microsoft.WindowsAzure.ServiceRuntime;
+ using System;
+
+ namespace MyWebRoleApp
+ {
+ public class MyMvcApplication : System.Web.HttpApplication
+ {
+ protected void Application_Start()
+ {
+ Environment.SetEnvironmentVariable("SNAPSHOTSTORE", RoleEnvironment.GetLocalResource("SnapshotStore").RootPath);
+ // TODO: The rest of your application startup code
+ }
+ }
+ }
+ ```
+
+3. Update your role's ApplicationInsights.config file to override the temporary folder location used by `SnapshotCollector`:
+ ```xml
+ <TelemetryProcessors>
+ <Add Type="Microsoft.ApplicationInsights.SnapshotCollector.SnapshotCollectorTelemetryProcessor, Microsoft.ApplicationInsights.SnapshotCollector">
+ <!-- Use the SnapshotStore local resource for snapshots -->
+ <TempFolder>%SNAPSHOTSTORE%</TempFolder>
+ <!-- Other SnapshotCollector configuration options -->
+ </Add>
+ </TelemetryProcessors>
+ ```
+
+## Overriding the Shadow Copy folder
+
+When the Snapshot Collector starts up, it tries to find a folder on disk that is suitable for running the Snapshot Uploader process. The chosen folder is known as the Shadow Copy folder.
+
+The Snapshot Collector checks a few well-known locations, making sure it has permissions to copy the Snapshot Uploader binaries. The following environment variables are used:
+- Fabric_Folder_App_Temp
+- LOCALAPPDATA
+- APPDATA
+- TEMP
+
+If a suitable folder can't be found, Snapshot Collector reports an error saying _"Couldn't find a suitable shadow copy folder."_
+
+If the copy fails, Snapshot Collector reports a `ShadowCopyFailed` error.
+
+If the uploader can't be launched, Snapshot Collector reports an `UploaderCannotStartFromShadowCopy` error. The body of the message often contains `System.UnauthorizedAccessException`. This error usually occurs because the application is running under an account with reduced permissions. The account has permission to write to the shadow copy folder, but it doesn't have permission to execute code.
+
+Since these errors usually happen during startup, they'll usually be followed by an `ExceptionDuringConnect` error saying _"Uploader failed to start."_
+
+To work around these errors, you can specify the shadow copy folder manually via the `ShadowCopyFolder` configuration option. For example, using ApplicationInsights.config:
+
+ ```xml
+ <TelemetryProcessors>
+ <Add Type="Microsoft.ApplicationInsights.SnapshotCollector.SnapshotCollectorTelemetryProcessor, Microsoft.ApplicationInsights.SnapshotCollector">
+ <!-- Override the default shadow copy folder. -->
+ <ShadowCopyFolder>D:\SnapshotUploader</ShadowCopyFolder>
+ <!-- Other SnapshotCollector configuration options -->
+ </Add>
+ </TelemetryProcessors>
+ ```
+
+Or, if you're using appsettings.json with a .NET Core application:
+
+ ```json
+ {
+ "ApplicationInsights": {
+ "InstrumentationKey": "<your instrumentation key>"
+ },
+ "SnapshotCollectorConfiguration": {
+ "ShadowCopyFolder": "D:\\SnapshotUploader"
+ }
+ }
+ ```
+
+## Use Application Insights search to find exceptions with snapshots
+
+When a snapshot is created, the throwing exception is tagged with a snapshot ID. That snapshot ID is included as a custom property when the exception is reported to Application Insights. Using **Search** in Application Insights, you can find all records with the `ai.snapshot.id` custom property.
+
+1. Browse to your Application Insights resource in the Azure portal.
+2. Select **Search**.
+3. Type `ai.snapshot.id` in the Search text box and press Enter.
+
+![Search for telemetry with a snapshot ID in the portal](./media/snapshot-debugger/search-snapshot-portal.png)
+
+If this search returns no results, no snapshots were reported to Application Insights in the selected time range.
+
+To search for a specific snapshot ID from the Uploader logs, type that ID in the Search box. If you can't find records for a snapshot that you know was uploaded, follow these steps:
+
+1. Double-check that you're looking at the right Application Insights resource by verifying the instrumentation key.
+
+2. Using the timestamp from the Uploader log, adjust the Time Range filter of the search to cover that time range.
+
+If you still don't see an exception with that snapshot ID, then the exception record wasn't reported to Application Insights. This situation can happen if your application crashed after it took the snapshot but before it reported the exception record. In this case, check the App Service logs under `Diagnose and solve problems` to see if there were unexpected restarts or unhandled exceptions.
+
+## Edit network proxy or firewall rules
+
+If your application connects to the Internet via a proxy or a firewall, you may need to update the rules to communicate with the Snapshot Debugger service.
+
+The IPs used by Application Insights Snapshot Debugger are included in the Azure Monitor service tag. For more information, see [Service Tags documentation](../../virtual-network/service-tags-overview.md).
azure-monitor Snapshot Debugger Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-upgrade.md
+
+ Title: Upgrading Azure Application Insights Snapshot Debugger
+description: How to upgrade Snapshot Debugger for .NET apps to the latest version on Azure App Service, or via NuGet packages
+Last updated: 03/28/2019
+# Upgrading the Snapshot Debugger
+
+To provide the best possible security for your data, Microsoft is moving away from TLS 1.0 and TLS 1.1, which have been shown to be vulnerable to determined attackers. If you're using an older version of the site extension, it requires an upgrade to continue working. This document outlines the steps needed to upgrade your Snapshot Debugger to the latest version.
+
+There are two primary upgrade paths, depending on whether you enabled the Snapshot Debugger by using a site extension or by adding an SDK/NuGet package to your application. Both upgrade paths are discussed below.
+
+## Upgrading the site extension
+
+> [!IMPORTANT]
+> Older versions of Application Insights used a private site extension called _Application Insights extension for Azure App Service_. The current Application Insights experience is enabled by setting App Settings to light up a pre-installed site extension.
+> To avoid conflicts, which may cause your site to stop working, it is important to delete the private site extension first. See step 4 below.
+
+If you enabled the Snapshot Debugger using the site extension, you can upgrade using the following procedure:
+
+1. Sign in to the Azure portal.
+2. Navigate to your resource that has Application Insights and Snapshot debugger enabled. For example, for a Web App, navigate to the App Service resource:
+
+ ![Screenshot of individual App Service resource named DiagService01](./media/snapshot-debugger-upgrade/app-service-resource.png)
+
+3. Once you've navigated to your resource, click on the Extensions blade and wait for the list of extensions to populate:
+
+ ![Screenshot of App Service Extensions showing Application Insights extension for Azure App Service installed](./media/snapshot-debugger-upgrade/application-insights-site-extension-to-be-deleted.png)
+
+4. If any version of _Application Insights extension for Azure App Service_ is installed, then select it and click Delete. Confirm **Yes** to delete the extension and wait for the delete to complete before moving to the next step.
+
+ ![Screenshot of App Service Extensions showing Application Insights extension for Azure App Service with the Delete button highlighted](./media/snapshot-debugger-upgrade/application-insights-site-extension-delete.png)
+
+5. Go to the Overview blade of your resource and click on Application Insights:
+
+ ![Screenshot of three buttons. Center button with name Application Insights is selected](./media/snapshot-debugger-upgrade/application-insights-button.png)
+
+6. If this is the first time you've viewed the Application Insights blade for this App Service, you'll be prompted to turn on Application Insights. Select **Turn on Application Insights**.
+
+ ![Screenshot of the first-time experience for the Application Insights blade with the Turn on Application Insights button highlighted](./media/snapshot-debugger-upgrade/turn-on-application-insights.png)
+
+7. The current Application Insights settings are displayed. Unless you want to take the opportunity to change your settings, you can leave them as is. The **Apply** button at the bottom of the blade isn't enabled by default, and you'll have to toggle one of the settings to activate it. You don't have to change any actual settings; you can change a setting and then immediately change it back. We recommend toggling the Profiler setting and then selecting **Apply**.
+
+ ![Screenshot of Application Insights App Service Configuration page with Apply button highlighted in red](./media/snapshot-debugger-upgrade/view-application-insights-data.png)
+
+8. Once you click **Apply**, you'll be asked to confirm the changes.
+
+ > [!NOTE]
+ > The site will be restarted as part of the upgrade process.
+
+ ![Screenshot of App Service's apply monitoring prompt. Text box displays message: "We will now apply changes to your app settings and install our tools to link your Application Insights resource to the web app. This will restart the site. Do you want to continue?"](./media/snapshot-debugger-upgrade/apply-monitoring-settings.png)
+
+9. Click **Yes** to apply the changes and wait for the process to complete.
+
+The site has now been upgraded and is ready to use.
+
+## Upgrading Snapshot Debugger using SDK/NuGet
+
+If the application is using a version of `Microsoft.ApplicationInsights.SnapshotCollector` below version 1.3.1, it will need to be upgraded to a [newer version](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) to continue working.
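As a rough illustration of the version check involved, the following Python sketch (the helper `needs_upgrade` is hypothetical, not part of any Microsoft tooling) shows why a numeric, part-by-part comparison is needed rather than a plain string comparison:

```python
def needs_upgrade(installed, minimum="1.3.1"):
    """True when the installed dotted version string is older than the
    minimum required one, comparing each part numerically (sketch only)."""
    parse = lambda version: tuple(int(part) for part in version.split("."))
    return parse(installed) < parse(minimum)
```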
azure-monitor Snapshot Debugger Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-vm.md
+
+ Title: Enable Snapshot Debugger for .NET apps in Azure Service Fabric, Cloud Service, and Virtual Machines | Microsoft Docs
+description: Enable Snapshot Debugger for .NET apps in Azure Service Fabric, Cloud Service, and Virtual Machines
+Last updated: 03/07/2019
+# Enable Snapshot Debugger for .NET apps in Azure Service Fabric, Cloud Service, and Virtual Machines
+
+If your ASP.NET or ASP.NET Core application runs in Azure App Service, we highly recommend [enabling Snapshot Debugger through the Application Insights portal page](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json). However, if your application requires a customized Snapshot Debugger configuration, or a preview version of .NET Core, then follow these instructions ***in addition*** to the instructions for [enabling through the Application Insights portal page](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json).
+
+If your application runs in Azure Service Fabric, Cloud Services, Virtual Machines, or on-premises machines, use the following instructions.
+
+## Configure snapshot collection for ASP.NET applications
+
+1. [Enable Application Insights in your web app](../app/asp-net.md), if you haven't done it yet.
+
+2. Include the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package in your app.
+
+3. If needed, customize the Snapshot Debugger configuration added to [ApplicationInsights.config](../app/configuration-with-applicationinsights-config.md). The default Snapshot Debugger configuration is mostly empty and all settings are optional. Here's an example showing a configuration equivalent to the default configuration:
+
+ ```xml
+ <TelemetryProcessors>
+ <Add Type="Microsoft.ApplicationInsights.SnapshotCollector.SnapshotCollectorTelemetryProcessor, Microsoft.ApplicationInsights.SnapshotCollector">
+ <!-- The default is true, but you can disable Snapshot Debugging by setting it to false -->
+ <IsEnabled>true</IsEnabled>
+ <!-- Snapshot Debugging is usually disabled in developer mode, but you can enable it by setting this to true. -->
+ <!-- DeveloperMode is a property on the active TelemetryChannel. -->
+ <IsEnabledInDeveloperMode>false</IsEnabledInDeveloperMode>
+ <!-- How many times we need to see an exception before we ask for snapshots. -->
+ <ThresholdForSnapshotting>1</ThresholdForSnapshotting>
+ <!-- The maximum number of examples we create for a single problem. -->
+ <MaximumSnapshotsRequired>3</MaximumSnapshotsRequired>
+ <!-- The maximum number of problems that we can be tracking at any time. -->
+ <MaximumCollectionPlanSize>50</MaximumCollectionPlanSize>
+ <!-- How often we reconnect to the stamp. The default value is 15 minutes.-->
+ <ReconnectInterval>00:15:00</ReconnectInterval>
+ <!-- How often to reset problem counters. -->
+ <ProblemCounterResetInterval>1.00:00:00</ProblemCounterResetInterval>
+        <!-- The maximum number of snapshots allowed in ten minutes. The default value is 1. -->
+ <SnapshotsPerTenMinutesLimit>3</SnapshotsPerTenMinutesLimit>
+ <!-- The maximum number of snapshots allowed per day. -->
+ <SnapshotsPerDayLimit>30</SnapshotsPerDayLimit>
+ <!-- Whether or not to collect snapshot in low IO priority thread. The default value is true. -->
+ <SnapshotInLowPriorityThread>true</SnapshotInLowPriorityThread>
+ <!-- Agree to send anonymous data to Microsoft to make this product better. -->
+ <ProvideAnonymousTelemetry>true</ProvideAnonymousTelemetry>
+ <!-- The limit on the number of failed requests to request snapshots before the telemetry processor is disabled. -->
+ <FailedRequestLimit>3</FailedRequestLimit>
+ </Add>
+ </TelemetryProcessors>
+ ```
+
+4. Snapshots are collected only on exceptions that are reported to Application Insights. In some cases (for example, older versions of the .NET platform), you might need to [configure exception collection](../app/asp-net-exceptions.md#exceptions) to see exceptions with snapshots in the portal.
+
+## Configure snapshot collection for applications using ASP.NET Core LTS or above
+
+1. [Enable Application Insights in your ASP.NET Core web app](../app/asp-net-core.md), if you haven't done it yet.
+
+ > [!NOTE]
+ > Be sure that your application references version 2.1.1, or newer, of the Microsoft.ApplicationInsights.AspNetCore package.
+
+2. Include the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package in your app.
+
+3. Modify your application's `Startup` class to add and configure the Snapshot Collector's telemetry processor.
+ 1. If [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package version 1.3.5 or above is used, then add the following using statements to `Startup.cs`.
+
+ ```csharp
+ using Microsoft.ApplicationInsights.SnapshotCollector;
+ ```
+
+ Add the following at the end of the ConfigureServices method in the `Startup` class in `Startup.cs`.
+
+ ```csharp
+ services.AddSnapshotCollector((configuration) => Configuration.Bind(nameof(SnapshotCollectorConfiguration), configuration));
+ ```
+ 2. If [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package version 1.3.4 or below is used, then add the following using statements to `Startup.cs`.
+
+ ```csharp
+ using Microsoft.ApplicationInsights.SnapshotCollector;
+ using Microsoft.Extensions.Options;
+ using Microsoft.ApplicationInsights.AspNetCore;
+ using Microsoft.ApplicationInsights.Extensibility;
+ ```
+
+ Add the following `SnapshotCollectorTelemetryProcessorFactory` class to `Startup` class.
+
+ ```csharp
+ class Startup
+ {
+ private class SnapshotCollectorTelemetryProcessorFactory : ITelemetryProcessorFactory
+ {
+ private readonly IServiceProvider _serviceProvider;
+
+ public SnapshotCollectorTelemetryProcessorFactory(IServiceProvider serviceProvider) =>
+ _serviceProvider = serviceProvider;
+
+ public ITelemetryProcessor Create(ITelemetryProcessor next)
+ {
+ var snapshotConfigurationOptions = _serviceProvider.GetService<IOptions<SnapshotCollectorConfiguration>>();
+ return new SnapshotCollectorTelemetryProcessor(next, configuration: snapshotConfigurationOptions.Value);
+ }
+ }
+ ...
+ ```
+ Add the `SnapshotCollectorConfiguration` and `SnapshotCollectorTelemetryProcessorFactory` services to the startup pipeline:
+
+ ```csharp
+ // This method gets called by the runtime. Use this method to add services to the container.
+ public void ConfigureServices(IServiceCollection services)
+ {
+ // Configure SnapshotCollector from application settings
+ services.Configure<SnapshotCollectorConfiguration>(Configuration.GetSection(nameof(SnapshotCollectorConfiguration)));
+
+ // Add SnapshotCollector telemetry processor.
+ services.AddSingleton<ITelemetryProcessorFactory>(sp => new SnapshotCollectorTelemetryProcessorFactory(sp));
+
+ // TODO: Add other services your application needs here.
+ }
+ }
+ ```
+
+4. If needed, customize the Snapshot Debugger configuration by adding a `SnapshotCollectorConfiguration` section to appsettings.json. All settings in the Snapshot Debugger configuration are optional. Here's an example showing a configuration equivalent to the default configuration:
+
+ ```json
+ {
+ "SnapshotCollectorConfiguration": {
+ "IsEnabledInDeveloperMode": false,
+ "ThresholdForSnapshotting": 1,
+ "MaximumSnapshotsRequired": 3,
+ "MaximumCollectionPlanSize": 50,
+ "ReconnectInterval": "00:15:00",
+ "ProblemCounterResetInterval":"1.00:00:00",
+ "SnapshotsPerTenMinutesLimit": 1,
+ "SnapshotsPerDayLimit": 30,
+ "SnapshotInLowPriorityThread": true,
+ "ProvideAnonymousTelemetry": true,
+ "FailedRequestLimit": 3
+ }
+ }
+ ```
+
+## Configure snapshot collection for other .NET applications
+
+1. If your application isn't already instrumented with Application Insights, get started by [enabling Application Insights and setting the instrumentation key](../app/windows-desktop.md).
+
+2. Add the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package in your app.
+
+3. Snapshots are collected only on exceptions that are reported to Application Insights. You may need to modify your code to report them. The exception handling code depends on the structure of your application, but an example is below:
+
+ ```csharp
+ TelemetryClient _telemetryClient = new TelemetryClient();
+
+ void ExampleRequest()
+ {
+ try
+ {
+ // TODO: Handle the request.
+ }
+ catch (Exception ex)
+ {
+ // Report the exception to Application Insights.
+ _telemetryClient.TrackException(ex);
+
+ // TODO: Rethrow the exception if desired.
+ }
+ }
+ ```
+
+## Next steps
+
+- Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.
+- See [snapshots](snapshot-debugger.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.
+- For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md?toc=/azure/azure-monitor/toc.json).
+
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger.md
+
+ Title: Azure Application Insights Snapshot Debugger for .NET apps
+description: Debug snapshots are automatically collected when exceptions are thrown in production .NET apps
+Last updated: 10/12/2021
+# Debug snapshots on exceptions in .NET apps
+When an exception occurs, you can automatically collect a debug snapshot from your live web application. The snapshot shows the state of source code and variables at the moment the exception was thrown. The Snapshot Debugger in [Azure Application Insights](../app/app-insights-overview.md) monitors exception telemetry from your web app. It collects snapshots on your top-throwing exceptions so that you have the information you need to diagnose issues in production. Include the [Snapshot collector NuGet package](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) in your application, and optionally configure collection parameters in [ApplicationInsights.config](../app/configuration-with-applicationinsights-config.md). Snapshots appear on [exceptions](../app/asp-net-exceptions.md) in the Application Insights portal.
+
+You can view debug snapshots in the portal to see the call stack and inspect variables at each call stack frame. To get a more powerful debugging experience with source code, open snapshots with Visual Studio 2019 Enterprise. In Visual Studio, you can also [set snappoints to interactively take snapshots](/visualstudio/debugger/debug-live-azure-applications) without waiting for an exception.
+
+Debug snapshots are stored for 15 days. This retention policy is set on a per-application basis. If you need to increase this value, you can request an increase by opening a support case in the Azure portal.
+
+## Enable Application Insights Snapshot Debugger for your application
+Snapshot collection is available for:
+* .NET Framework and ASP.NET applications running .NET Framework [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or later.
+* .NET Core and ASP.NET Core applications running .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) on Windows.
+* .NET [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) applications on Windows.
+
+We don't recommend using .NET Core versions prior to LTS since they're out of support.
+
+The following environments are supported:
+
+* [Azure App Service](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Function](snapshot-debugger-function-app.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Cloud Services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running OS family 4 or later
+* [Azure Service Fabric services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running on Windows Server 2012 R2 or later
+* [Azure Virtual Machines and virtual machine scale sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running Windows Server 2012 R2 or later
+* [On-premises virtual or physical machines](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running Windows Server 2012 R2 or later or Windows 8.1 or later
+
+> [!NOTE]
+> Client applications (for example, WPF, Windows Forms or UWP) are not supported.
+
+If you've enabled Snapshot Debugger but aren't seeing snapshots, check our [Troubleshooting guide](snapshot-debugger-troubleshoot.md?toc=/azure/azure-monitor/toc.json).
+
+## Grant permissions
+
+Access to snapshots is protected by Azure role-based access control (Azure RBAC). To inspect a snapshot, you must first be added to the necessary role by a subscription owner.
+
+> [!NOTE]
+> Owners and contributors do not automatically have this role. If they want to view snapshots, they must add themselves to the role.
+
+Subscription owners should assign the `Application Insights Snapshot Debugger` role to users who will inspect snapshots. This role can be assigned to individual users or groups by subscription owners for the target Application Insights resource or its resource group or subscription.
+
+1. Assign the **Application Insights Snapshot Debugger** role to the user.
+
+ For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+
+> [!IMPORTANT]
+> Snapshots may contain personal data or other sensitive information in variable and parameter values. Snapshot data is stored in the same region as your Application Insights resource.
+
+## View Snapshots in the Portal
+
+After an exception occurs in your application and a snapshot is created, you should have snapshots to view. It can take 5 to 10 minutes from the exception occurring for a snapshot to be ready and viewable in the portal. To view snapshots, on the **Failures** page, select the **Operations** button when viewing the **Operations** tab, or select the **Exceptions** button when viewing the **Exceptions** tab:
+
+![Failures Page](./media/snapshot-debugger/failures-page.png)
+
+Select an operation or exception in the right pane to open the **End-to-End Transaction Details** pane, then select the exception event. If a snapshot is available for the given exception, an **Open Debug Snapshot** button appears on the right pane with details for the [exception](../app/asp-net-exceptions.md).
+
+![Open Debug Snapshot button on exception](./media/snapshot-debugger/e2e-transaction-page.png)
+
+In the Debug Snapshot view, you see a call stack and a variables pane. When you select frames of the call stack in the call stack pane, you can view local variables and parameters for that function call in the variables pane.
+
+![View Debug Snapshot in the portal](./media/snapshot-debugger/open-snapshot-portal.png)
+
+Snapshots might include sensitive information, and by default they aren't viewable. To view snapshots, you must have the `Application Insights Snapshot Debugger` role assigned to you.
+
+## View Snapshots in Visual Studio 2017 Enterprise or above
+1. Click the **Download Snapshot** button to download a `.diagsession` file, which can be opened by Visual Studio Enterprise.
+
+2. To open the `.diagsession` file, you need to have the Snapshot Debugger Visual Studio component installed. The Snapshot Debugger component is a required component of the ASP.NET workload in Visual Studio and can be selected from the Individual Component list in the Visual Studio installer. If you're using a version of Visual Studio before Visual Studio 2017 version 15.5, you'll need to install the extension from the [Visual Studio Marketplace](https://aka.ms/snapshotdebugger).
+
+3. After you open the snapshot file, the Minidump Debugging page in Visual Studio appears. Click **Debug Managed Code** to start debugging the snapshot. The snapshot opens to the line of code where the exception was thrown so that you can debug the current state of the process.
+
+ ![View debug snapshot in Visual Studio](./media/snapshot-debugger/open-snapshot-visual-studio.png)
+
+The downloaded snapshot includes any symbol files that were found on your web application server. These symbol files are required to associate snapshot data with source code. For App Service apps, make sure to enable symbol deployment when you publish your web apps.
+
+## How snapshots work
+
+The Snapshot Collector is implemented as an [Application Insights Telemetry Processor](../app/configuration-with-applicationinsights-config.md#telemetry-processors-aspnet). When your application runs, the Snapshot Collector Telemetry Processor is added to your application's telemetry pipeline.
+Each time your application calls [TrackException](../app/asp-net-exceptions.md#exceptions), the Snapshot Collector computes a Problem ID from the type of exception being thrown and the throwing method.
+Each time your application calls TrackException, a counter is incremented for the appropriate Problem ID. When the counter reaches the `ThresholdForSnapshotting` value, the Problem ID is added to a Collection Plan.
+
+The Snapshot Collector also monitors exceptions as they're thrown by subscribing to the [AppDomain.CurrentDomain.FirstChanceException](/dotnet/api/system.appdomain.firstchanceexception) event. When that event fires, the Problem ID of the exception is computed and compared against the Problem IDs in the Collection Plan.
+If there's a match, then a snapshot of the running process is created. The snapshot is assigned a unique identifier and the exception is stamped with that identifier. After the FirstChanceException handler returns, the thrown exception is processed as normal. Eventually, the exception reaches the TrackException method again where it, along with the snapshot identifier, is reported to Application Insights.
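The counting behavior described above can be modeled roughly as follows. This is an illustrative Python sketch with hypothetical names (`CollectionPlanModel`, `track_exception`); the real Snapshot Collector is a .NET telemetry processor, not this code:

```python
from collections import Counter

class CollectionPlanModel:
    """Toy model of Problem ID tracking: once a Problem ID has been
    reported ThresholdForSnapshotting times via TrackException, it joins
    the collection plan, and later throws of that Problem ID match."""

    def __init__(self, threshold_for_snapshotting=1):
        self.threshold = threshold_for_snapshotting
        self.counters = Counter()
        self.collection_plan = set()

    def track_exception(self, exception_type, throwing_method):
        # The Problem ID is computed from the exception type and the
        # throwing method.
        problem_id = (exception_type, throwing_method)
        self.counters[problem_id] += 1
        if self.counters[problem_id] >= self.threshold:
            self.collection_plan.add(problem_id)

    def should_snapshot(self, exception_type, throwing_method):
        # Mirrors the FirstChanceException check: snapshot only when the
        # Problem ID is already in the collection plan.
        return (exception_type, throwing_method) in self.collection_plan
```

In this model, with the default threshold of 1, the first `track_exception` call adds the Problem ID to the plan, so the second occurrence of the same exception is the first one that matches, consistent with the tip below.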
+
+The main process continues to run and serve traffic to users with little interruption. Meanwhile, the snapshot is handed off to the Snapshot Uploader process. The Snapshot Uploader creates a minidump and uploads it to Application Insights along with any relevant symbol (.pdb) files.
+
+> [!TIP]
+> - A process snapshot is a suspended clone of the running process.
+> - Creating the snapshot takes about 10 to 20 milliseconds.
+> - The default value for `ThresholdForSnapshotting` is 1. This is also the minimum value. Therefore, your app has to trigger the same exception **twice** before a snapshot is created.
+> - Set `IsEnabledInDeveloperMode` to true if you want to generate snapshots while debugging in Visual Studio.
+> - The snapshot creation rate is limited by the `SnapshotsPerTenMinutesLimit` setting. By default, the limit is one snapshot every ten minutes.
+> - No more than 50 snapshots per day may be uploaded.
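The rate limits in the tip above can be modeled as a small sketch (illustrative Python with hypothetical names; not the collector's actual implementation):

```python
from collections import deque

class SnapshotRateLimiterModel:
    """Toy model of the per-ten-minutes and per-day snapshot limits."""

    def __init__(self, per_ten_minutes=1, per_day=50):
        self.per_ten_minutes = per_ten_minutes
        self.per_day = per_day
        self.timestamps = deque()  # times of taken snapshots, in seconds

    def try_take_snapshot(self, now):
        # Forget snapshots older than one day.
        while self.timestamps and now - self.timestamps[0] >= 86400:
            self.timestamps.popleft()
        recent = sum(1 for t in self.timestamps if now - t < 600)
        if recent >= self.per_ten_minutes or len(self.timestamps) >= self.per_day:
            return False  # over a limit; skip this snapshot
        self.timestamps.append(now)
        return True
```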
+
+## Limitations
+
+The default data retention period is 15 days. For each Application Insights instance, a maximum of 50 snapshots is allowed per day.
+
+### Publish symbols
+The Snapshot Debugger requires symbol files on the production server to decode variables and to provide a debugging experience in Visual Studio.
+Version 15.2 (or above) of Visual Studio 2017 publishes symbols for release builds by default when it publishes to App Service. In prior versions, you need to add the following line to your publish profile `.pubxml` file so that symbols are published in release mode:
+
+```xml
+ <ExcludeGeneratedDebugSymbol>False</ExcludeGeneratedDebugSymbol>
+```
+
+For Azure Compute and other types, make sure that the symbol files are in the same folder as the main application .dll (typically, `wwwroot/bin`) or are available on the current path.
+
+> [!NOTE]
+> For more information on the different symbol options that are available, see the [Visual Studio documentation](/visualstudio/ide/reference/advanced-build-settings-dialog-box-csharp?view=vs-2019&preserve-view=true#output). For best results, we recommend using "Full", "Portable", or "Embedded".
+
+### Optimized builds
+In some cases, local variables can't be viewed in release builds because of optimizations that are applied by the JIT compiler.
+However, in Azure App Services, the Snapshot Collector can deoptimize throwing methods that are part of its Collection Plan.
+
+> [!TIP]
+> Install the Application Insights Site Extension in your App Service to get deoptimization support.
+
+## Next steps
+Enable Application Insights Snapshot Debugger for your application:
+
+* [Azure App Service](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Function](snapshot-debugger-function-app.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Cloud Services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Service Fabric services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Virtual Machines and virtual machine scale sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [On-premises virtual or physical machines](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+
+Beyond Application Insights Snapshot Debugger:
+
+* [Set snappoints in your code](/visualstudio/debugger/debug-live-azure-applications) to get snapshots without waiting for an exception.
+* [Diagnose exceptions in your web apps](../app/asp-net-exceptions.md) explains how to make more exceptions visible to Application Insights.
+* [Smart Detection](../app/proactive-diagnostics.md) automatically discovers performance anomalies.
azure-monitor Workbooks Honey Comb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-honey-comb.md
The image below shows the CPU utilization of virtual machines across two subscr
| where CounterName == 'Available MBytes' | summarize CounterValue = avg(CounterValue) by Computer, _ResourceId | extend ResourceGroup = extract(@'/subscriptions/.+/resourcegroups/(.+)/providers/microsoft.compute/virtualmachines/.+', 1, _ResourceId)
-| extend ResourceGroup = iff(ResourceGroup == '', 'On-premise computers', ResourceGroup), Id = strcat(_ResourceId, '::', Computer)
+| extend ResourceGroup = iff(ResourceGroup == '', 'On-premises computers', ResourceGroup), Id = strcat(_ResourceId, '::', Computer)
``` 5. Run query.
The image below shows the CPU utilization of virtual machines across two subscr
| `Heatmap` | In this type, the cells are colored based on a metric column and a color palette. This provides a simple way to highlight metrics spreads across cells. |
| `Thresholds` | In this type, cell colors are set by threshold rules (for example, _CPU > 90% => Red, 60% > CPU > 90% => Yellow, CPU < 60% => Green_). |
| `Field Based` | In this type, a column provides specific RGB values to use for the node. Provides the most flexibility but usually requires more work to enable. |
-
+ ## Node format settings
+
+ Honey comb authors can specify what content goes to the different parts of a node: top, left, center, right, and bottom. Authors are free to use any of the renderers that workbooks support (text, big number, spark lines, icon, etc.).
azure-netapp-files Azacsnap Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-troubleshoot.md
Title: Troubleshoot Azure Application Consistent Snapshot tool for Azure NetApp Files | Microsoft Docs
-description: Provides troubleshooting content for using the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files.
+ Title: Troubleshoot Azure Application Consistent Snapshot tool - Azure NetApp Files
+description: Troubleshoot communication issues, test failures, and other SAP HANA issues when using the Azure Application Consistent Snapshot (AzAcSnap) tool.
documentationcenter: ''
Previously updated: 05/17/2021. Last updated: 06/13/2022.
-# Troubleshoot Azure Application Consistent Snapshot tool
+# Troubleshoot the Azure Application Consistent Snapshot (AzAcSnap) tool
-This article provides troubleshooting content for using the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files and Azure Large Instance.
+This article describes how to troubleshoot issues when using the Azure Application Consistent Snapshot (AzAcSnap) tool for Azure NetApp Files and Azure Large Instance.
-The following are common issues that you may encounter while running the commands. Follow the resolution instructions mentioned to fix the issue. If you still encounter an issue, open a Service Request from Azure portal and assign the request into the SAP HANA Large Instance queue for Microsoft Support to respond.
+You might encounter several common issues when running AzAcSnap commands. Follow the instructions to troubleshoot the issues. If you still have issues, open a Service Request for Microsoft Support from the Azure portal and assign the request to the SAP HANA Large Instance queue.
-## Log files
+## Check log files, result files, and syslog
-One of the best sources of information for debugging any errors with AzAcSnap are the log files.
+Some of the best sources of information for investigating AzAcSnap issues are the log files, result files, and the system log.
-### Log file location
+### Log files
-The log files are stored in the directory configured per the `logPath` parameter in the AzAcSnap configuration file. The default configuration filename is `azacsnap.json` and the default value for `logPath` is `"./logs"` which means the log files are written into the `./logs` directory relative to where the `azacsnap` command is run. Making the `logPath` an absolute location (e.g. `/home/azacsnap/logs`) will ensure `azacsnap` outputs the logs into `/home/azacsnap/logs` irrespective of where the `azacsnap` command was run.
+The AzAcSnap log files are stored in the directory configured by the `logPath` parameter in the AzAcSnap configuration file. The default configuration filename is *azacsnap.json*, and the default value for `logPath` is *./logs*, which means the log files are written into the *./logs* directory relative to where the `azacsnap` command runs. If you make the `logPath` an absolute location, such as */home/azacsnap/logs*, `azacsnap` always outputs the logs into */home/azacsnap/logs*, regardless of where you run the `azacsnap` command.
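To illustrate the setting, here's a minimal sketch of the `logPath` entry as it might appear in *azacsnap.json* (the real file contains other required sections, which are omitted here):

```json
{
  "logPath": "/home/azacsnap/logs"
}
```

With an absolute path like this, logs land in the same directory no matter where `azacsnap` runs.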
-### Log file naming
+The log filename is based on the application name, `azacsnap`, the command run with `-c`, such as `backup`, `test`, or `details`, and the default configuration filename, such as *azacsnap.json*. With the `-c backup` command, a default log filename would be *azacsnap-backup-azacsnap.log*, written into the directory configured by `logPath`.
-The log filename is based on the application name (e.g. `azacsnap`), the command option (`-c`) used (e.g. `backup`, `test`, `details`, etc.) and the configuration filename (e.g. default = `azacsnap.json`). So if using the `-c backup` command, the log filename by default would be `azacsnap-backup-azacsnap.log` and is written into the directory configured by `logPath`.
+This naming convention allows for multiple configuration files, one per database, to help locate the associated log files. If the configuration filename is *SID.json*, then the log filename when using the `azacsnap -c backup --configfile SID.json` option is *azacsnap-backup-SID.log*.
-This naming convention was established to allow for multiple configuration files, one per database, and ensure ease of locating the associated logfiles. Therefore, if the configuration filename is `SID.json`, then the result filename when using the `azacsnap -c backup --configfile SID.json` options will be `azacsnap-backup-SID.log`.
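The convention above can be sketched in shell (a hypothetical illustration of the naming rule; AzAcSnap builds the name internally):

```shell
# Log filename = azacsnap-<command>-<config file base name>.log
cmd="backup"            # value passed to -c
cfg="SID.json"          # value passed to --configfile
logfile="azacsnap-${cmd}-${cfg%.json}.log"
echo "$logfile"         # azacsnap-backup-SID.log
```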
+### Result files and syslog
-### Result file and syslog
+For the `-c backup` command, AzAcSnap writes to a *\*.result* file and to the system log, `/var/log/messages`, by using the `logger` command. The *\*.result* filename has the same base name as the log file and goes into the same location. The *\*.result* file is a simple one-line output file, such as the following example:
-For the `-c backup` command option AzAcSnap writes out to a `*.result` file and the system log (`/var/log/messages`) using the `logger` command. The `*.result` filename has the same base name as the [log file](#log-file-naming) and goes into the same [location as the log file](#log-file-location). It is a simple one line output file per the following examples.
-
-Example output from `*.result` file.
```output
Database # 1 (PR1) : completed ok
```
-Example output from `/var/log/messages` file.
+Here's example output from `/var/log/messages`:
+ ```output
+ Dec 17 09:01:13 azacsnap-rhel azacsnap: Database # 1 (PR1) : completed ok
+ ```
-## Failed communication with Azure NetApp Files
+## Troubleshoot failed 'test storage' command
-When validating communication with Azure NetApp Files, communication might fail or time-out. Check to ensure firewall rules are not blocking outbound traffic from the system running AzAcSnap to the following addresses and TCP/IP ports:-
+The command `azacsnap -c test --test storage` might not complete successfully.
-- (https://)management.azure.com:443-- (https://)login.microsoftonline.com:443
+### Check network firewalls
-### Testing communication using Cloud Shell
+Communication with Azure NetApp Files might fail or time out. To troubleshoot, make sure firewall rules aren't blocking outbound traffic from the system running AzAcSnap to the following addresses and TCP/IP ports:
-You can test the Service Principal is configured correctly by using Cloud Shell through your Azure Portal. This will test that the configuration is correct bypassing network controls within a VNet or virtual machine.
+- `https://management.azure.com:443`
+- `https://login.microsoftonline.com:443`
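As a quick reachability check, the following sketch derives host and port from each endpoint and prints an `nc` probe command (assumes `nc` is available; run the printed commands on the AzAcSnap host):

```shell
# Print a TCP probe command for each required endpoint
for url in https://management.azure.com:443 https://login.microsoftonline.com:443; do
  hostport="${url#https://}"                  # strip the scheme
  echo "nc -zvw5 ${hostport%:*} ${hostport##*:}"
done
```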
-**Solution:**
+### Use Cloud Shell to validate configuration files
-1. Open a [Cloud Shell](../cloud-shell/overview.md) session in your Azure Portal.
-1. Make a test directory (e.g. `mkdir azacsnap`)
-1. cd to the azacsnap directory and download the latest version of azacsnap tool.
-
- ```bash
- wget https://aka.ms/azacsnapinstaller
- ```
+You can test whether the service principal is configured correctly by using Cloud Shell through the Azure portal. Using Cloud Shell tests for correct configuration, bypassing network controls within a virtual network or virtual machine (VM).
+
+1. In the Azure portal, open a [Cloud Shell](../cloud-shell/overview.md) session.
+1. Make a test directory, for example `mkdir azacsnap`.
+1. Switch to the *azacsnap* directory, and download the latest version of AzAcSnap.
- ```output
- -<snip>-
- HTTP request sent, awaiting response... 200 OK
- Length: 24402411 (23M) [application/octet-stream]
- Saving to: 'azacsnapinstaller'
-
- azacsnapinstaller 100%[=================================================================================>] 23.27M 5.94MB/s in 5.3s
-
- 2021-09-02 23:46:18 (4.40 MB/s) - 'azacsnapinstaller' saved [24402411/24402411]
- ```
-
-1. Make the installer executable. (e.g. `chmod +x azacsnapinstaller`)
+ ```bash
+ wget https://aka.ms/azacsnapinstaller
+ ```
+1. Make the installer executable, for example `chmod +x azacsnapinstaller`.
1. Extract the binary for testing.
- ```bash
- ./azacsnapinstaller -X -d .
- ```
-
- ```output
- +--+
- | Azure Application Consistent Snapshot Tool Installer |
- +--+
- |-> Installer version '5.0.2_Build_20210827.19086'
- |-> Extracting commands into ..
- |-> Cleaning up .NET extract dir
- ```
+ ```bash
+ ./azacsnapinstaller -X -d .
+ ```
+ The results look like the following output:
-1. Using the Cloud Shell Upload/Download icon, upload the Service Principal file (e.g. `azureauth.json`) and the AzAcSnap configuration file for testing (e.g. `azacsnap.json`)
-1. Run the Storage test from the Azure Cloud Shell console.
+ ```output
+ +--+
+ | Azure Application Consistent Snapshot Tool Installer |
+ +--+
+ |-> Installer version '5.0.2_Build_20210827.19086'
+ |-> Extracting commands into ..
+ |-> Cleaning up .NET extract dir
+ ```
- > [!NOTE]
- > The test command can take about 90 seconds to complete.
+1. Use the Cloud Shell Upload/Download icon to upload the service principal file, *azureauth.json*, and the AzAcSnap configuration file, such as *azacsnap.json*, for testing.
+1. Run the `storage` test.
- ```bash
- ./azacsnap -c test --test storage
- ```
+ ```bash
+ ./azacsnap -c test --test storage
+ ```
- ```output
- BEGIN : Test process started for 'storage'
- BEGIN : Storage test snapshots on 'data' volumes
- BEGIN : 1 task(s) to Test Snapshots for Storage Volume Type 'data'
- PASSED: Task#1/1 Storage test successful for Volume
- END : Storage tests complete
- END : Test process complete for 'storage'
- ```
+ > [!NOTE]
+ > The test command can take about 90 seconds to complete.
-## Problems with SAP HANA
+### Failed test on Azure Large Instance
-### Running the test command fails
+The following error example is from running `azacsnap` on Azure Large Instance:
-When validating communication with SAP HANA by running a test with `azacsnap -c test --test hana` and it provides the following error:
+```bash
+azacsnap -c test --test storage
+```
```output
-> azacsnap -c test --test hana
-BEGIN : Test process started for 'hana'
-BEGIN : SAP HANA tests
-CRITICAL: Command 'test' failed with error:
-Cannot get SAP HANA version, exiting with error: 127
+The authenticity of host '172.18.18.11 (172.18.18.11)' can't be established.
+ECDSA key fingerprint is SHA256:QxamHRn3ZKbJAKnEimQpVVCknDSO9uB4c9Qd8komDec.
+Are you sure you want to continue connecting (yes/no)?
```
-**Solution:**
+To troubleshoot this error, don't respond `yes`. Make sure that your storage IP address is correct. You can confirm the storage IP address with the Microsoft operations team.
-1. Check the configuration file (for example, `azacsnap.json`) for each HANA instance to ensure the SAP HANA database values are correct.
-1. Try to run the command below to verify if the `hdbsql` command is in the path and it can connect to the SAP HANA Server. The following example shows the correct running of the command and its output.
+The error usually appears when the Azure Large Instance storage user doesn't have access to the underlying storage. To determine whether the storage user has access to storage, run the `ssh` command to validate communication with the storage platform.
- ```bash
- hdbsql -n 172.18.18.50 - i 00 -d SYSTEMDB -U AZACSNAP "\s"
- ```
+```bash
+ssh <StorageBackupname>@<Storage IP address> "volume show -fields volume"
+```
- ```output
- host : 172.18.18.50
- sid : H80
- dbname : SYSTEMDB
- user : AZACSNAP
- kernel version: 2.00.040.00.1553674765
- SQLDBC version: libSQLDBCHDB 2.04.126.1551801496
- autocommit : ON
- locale : en_US.UTF-8
- input encoding: UTF8
- sql port : saphana1:30013
- ```
+The following example shows the expected output:
- In this example, the `hdbsql` command isn't in the users `$PATH`.
+```bash
+ssh clt1h80backup@10.8.0.16 "volume show -fields volume"
+```
- ```bash
- hdbsql -n 172.18.18.50 - i 00 -U AZACSNAP "select version from sys.m_database"
- ```
+```output
+vserver volume
+
+osa33-hana-c01v250-client25-nprod hana_data_h80_mnt00001_t020_vol
+osa33-hana-c01v250-client25-nprod hana_data_h80_mnt00002_t020_vol
+```
- ```output
- If 'hdbsql' is not a typo you can use command-not-found to lookup the package that contains it, like this:
- cnf hdbsql
- ```
+### Failed test with Azure NetApp Files
- In this example, the `hdbsql` command is temporarily added to the user's `$PATH`, but when run shows the connection key hasn't been set up correctly with the `hdbuserstore Set` command (refer to Getting Started guide for details):
+The following error example is from running `azacsnap` with Azure NetApp Files:
- ```bash
- export PATH=$PATH:/hana/shared/H80/exe/linuxx86_64/hdb/
- ```
+```bash
+azacsnap --configfile azacsnap.json.NOT-WORKING -c test --test storage
+```
- ```bash
- hdbsql -n 172.18.18.50 -i 00 -U AZACSNAP "select version from sys.m_database"
- ```
+```output
+BEGIN : Test process started for 'storage'
+BEGIN : Storage test snapshots on 'data' volumes
+BEGIN : 1 task(s) to Test Snapshots for Storage Volume Type 'data'
+ERROR: Could not create StorageANF object [authFile = 'azureauth.json']
+```
- ```output
- * -10104: Invalid value for KEY (AZACSNAP)
- ```
+To troubleshoot this error:
- > [!NOTE]
- > To permanently add to the user's `$PATH`, update the user's `$HOME/.profile` file
+1. Check for the existence of the service principal file, *azureauth.json*, as set in the *azacsnap.json* configuration file.
+1. Check the log file, for example, *logs/azacsnap-test-azacsnap.log*, to see if the service principal file has the correct content. The following log file output shows that the client secret key is invalid.
-### Insufficient privilege
+ ```output
+ [19/Nov/2020:18:39:49 +13:00] DEBUG: [PID:0020080:StorageANF:659] [1] Innerexception: Microsoft.IdentityModel.Clients.ActiveDirectory.AdalServiceException AADSTS7000215: Invalid client secret is provided.
+ ```
-If running `azacsnap` presents an error such as `* 258: insufficient privilege`, check to ensure the appropriate privilege has been asssigned to the "AZACSNAP" database user (assuming this is the user created per the [installation guide](azacsnap-installation.md#enable-communication-with-database)). Verify the user's current privilege with the following command:
+1. Check the log file to see if the service principal has expired. The following log file example shows that the client secret keys are expired.
-```bash
-hdbsql -U AZACSNAP "select GRANTEE,GRANTEE_TYPE,PRIVILEGE,IS_VALID,IS_GRANTABLE from sys.granted_privileges " | grep -i -e GRANTEE -e azacsnap
-```
+ ```output
+ [19/Nov/2020:18:41:10 +13:00] DEBUG: [PID:0020257:StorageANF:659] [1] Innerexception: Microsoft.IdentityModel.Clients.ActiveDirectory.AdalServiceException AADSTS7000222: The provided client secret keys are expired. Visit the Azure Portal to create new keys for your app, or consider using certificate credentials for added security: https://docs.microsoft.com/azure/active-directory/develop/active-directory-certificate-credentials
+ ```
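Before checking the logs, a quick sanity check can confirm that the service principal file exists and parses as JSON (a sketch; assumes `python3` is available and the file is named *azureauth.json*):

```shell
# Confirm the service principal file exists and contains valid JSON
AUTHFILE="azureauth.json"
if [ -f "$AUTHFILE" ] && python3 -m json.tool "$AUTHFILE" > /dev/null 2>&1; then
  echo "service principal file OK"
else
  echo "service principal file missing or not valid JSON"
fi
```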
-```output
-GRANTEE,GRANTEE_TYPE,PRIVILEGE,IS_VALID,IS_GRANTABLE
-"AZACSNAP","USER","BACKUP ADMIN","TRUE","FALSE"
-"AZACSNAP","USER","CATALOG READ","TRUE","FALSE"
-"AZACSNAP","USER","CREATE ANY","TRUE","TRUE"
-```
+## Troubleshoot failed 'test hana' command
-The error might also provide further information to help determine the required SAP HANA privileges, such as the output of `Detailed info for this error can be found with guid '99X9999X99X9999X99X99XX999XXX999' SQLSTATE: HY000`. In this case follow SAP's instructions at [SAP Help Portal - GET_INSUFFICIENT_PRIVILEGE_ERROR_DETAILS](https://help.sap.com/viewer/b3ee5778bc2e4a089d3299b82ec762a7/2.0.05/en-US/9a73c4c017744288b8d6f3b9bc0db043.html) which recommends using the following SQL query to determine the detail on the required privilege.
+The command `azacsnap -c test --test hana` might not complete successfully.
-```sql
-CALL SYS.GET_INSUFFICIENT_PRIVILEGE_ERROR_DETAILS ('99X9999X99X9999X99X99XX999XXX999', ?)
-```
+### Command not found
-```output
-GUID,CREATE_TIME,CONNECTION_ID,SESSION_USER_NAME,CHECKED_USER_NAME,PRIVILEGE,IS_MISSING_ANALYTIC_PRIVILEGE,IS_MISSING_GRANT_OPTION,DATABASE_NAME,SCHEMA_NAME,OBJECT_NAME,OBJECT_TYPE
-"99X9999X99X9999X99X99XX999XXX999","2021-01-01 01:00:00.180000000",120212,"AZACSNAP","AZACSNAP","DATABASE ADMIN or DATABASE BACKUP ADMIN","FALSE","FALSE","","","",""
-```
+When setting up communication with SAP HANA, the `hdbuserstore` program is used to create the secure communication settings. AzAcSnap also requires the `hdbsql` program for all communications with SAP HANA. These programs are usually under */usr/sap/\<SID>/SYS/exe/hdb/* or */usr/sap/hdbclient* and must be in the user's `$PATH`.
-In the example above, adding the 'DATABASE BACKUP ADMIN' privilege to the SYSTEMDB's AZACSNAP user, should resolve the insufficient privilege error.
+- In the following example, the `hdbsql` command isn't in the user's `$PATH`.
-### The `hdbuserstore` location
+ ```bash
+ hdbsql -n 172.18.18.50 -i 00 -U AZACSNAP "select version from sys.m_database"
+ ```
-When setting up communication with SAP HANA, the `hdbuserstore` program is used to create the secure communication settings. The `hdbuserstore` program is usually found under `/usr/sap/<SID>/SYS/exe/hdb/` or `/usr/sap/hdbclient`. Normally the installer adds the correct location to the `azacsnap` user's `$PATH`.
+ ```output
+ If 'hdbsql' is not a typo you can use command-not-found to lookup the package that contains it, like this:
+ cnf hdbsql
+ ```
-## Failed test with storage
+- The following example temporarily adds the `hdbsql` command to the user's `$PATH`, allowing `azacsnap` to run correctly.
-The command `azacsnap -c test --test storage` does not complete successfully.
+ ```bash
+ export PATH=$PATH:/hana/shared/H80/exe/linuxx86_64/hdb/
+ ```
-### Azure Large Instance
+Make sure the installer added the location of these files to the AzAcSnap user's `$PATH`.
-The following example is from running `azacsnap` on SAP HANA on Azure Large Instance:
+> [!NOTE]
+> To permanently add to the user's `$PATH`, update the user's *$HOME/.profile* file.
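Using the example directory from above, one way to make the change permanent is to append it to the profile (a sketch; the directory is the earlier example value and may differ on your system):

```shell
# Idempotently add the SAP HANA client directory to PATH via ~/.profile
HDBDIR="/hana/shared/H80/exe/linuxx86_64/hdb"
PROFILE="$HOME/.profile"
grep -qs "$HDBDIR" "$PROFILE" || echo "export PATH=\"\$PATH:$HDBDIR\"" >> "$PROFILE"
```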
-```bash
-azacsnap -c test --test storage
-```
+### Invalid value for key
-```output
-The authenticity of host '172.18.18.11 (172.18.18.11)' can't be established.
-ECDSA key fingerprint is SHA256:QxamHRn3ZKbJAKnEimQpVVCknDSO9uB4c9Qd8komDec.
-Are you sure you want to continue connecting (yes/no)?
-```
+This command output shows that the connection key hasn't been set up correctly with the `hdbuserstore Set` command.
-**Solution:** The above error normally shows up when Azure Large Instance storage user has no access to the underlying storage. To validate access to storage with the provided storage user, run the `ssh`
-command to validate communication with the storage platform.
+ ```bash
+ hdbsql -n 172.18.18.50 -i 00 -U AZACSNAP "select version from sys.m_database"
+ ```
-```bash
-ssh <StorageBackupname>@<Storage IP address> "volume show -fields volume"
-```
+ ```output
+ * -10104: Invalid value for KEY (AZACSNAP)
+ ```
-An example with expected output:
+For more information on setup of the `hdbuserstore`, see [Get started with AzAcSnap](azacsnap-get-started.md).
-```bash
-ssh clt1h80backup@10.8.0.16 "volume show -fields volume"
-```
+### Failed test
+
+When validating communication with SAP HANA by running a test with `azacsnap -c test --test hana`, you might get the following error:
```output
-vserver volume
-
-osa33-hana-c01v250-client25-nprod hana_data_h80_mnt00001_t020_vol
-osa33-hana-c01v250-client25-nprod hana_data_h80_mnt00002_t020_vol
+> azacsnap -c test --test hana
+BEGIN : Test process started for 'hana'
+BEGIN : SAP HANA tests
+CRITICAL: Command 'test' failed with error:
+Cannot get SAP HANA version, exiting with error: 127
```
-#### The authenticity of host '172.18.18.11 (172.18.18.11)' can't be established
+To troubleshoot this error:
-```bash
-azacsnap -c test --test storage
-```
+1. Check the configuration file, for example *azacsnap.json*, for each HANA instance, to ensure that the SAP HANA database values are correct.
+1. Run the following command to verify that the `hdbsql` command is in the path and that it can connect to the SAP HANA server.
-```output
-BEGIN : Test process started for 'storage'
-BEGIN : Storage test snapshots on 'data' volumes
-BEGIN : 1 task(s) to Test Snapshots for Storage Volume Type 'data'
-The authenticity of host '10.3.0.18 (10.3.0.18)' can't be established.
-ECDSA key fingerprint is SHA256:cONAr0lpafb7gY4l31AdWTzM3s9LnKDtpMdPA+cxT7Y.
-Are you sure you want to continue connecting (yes/no)?
-```
+ ```bash
+ hdbsql -n 172.18.18.50 -i 00 -d SYSTEMDB -U AZACSNAP "\s"
+ ```
+
+ The following example shows the output when the command runs correctly:
-**Solution:** Do not select Yes. Ensure that your storage IP address is correct. If there is still an
-issue, confirm the storage IP address with Microsoft operations team.
+ ```output
+ host : 172.18.18.50
+ sid : H80
+ dbname : SYSTEMDB
+ user : AZACSNAP
+ kernel version: 2.00.040.00.1553674765
+ SQLDBC version: libSQLDBCHDB 2.04.126.1551801496
+ autocommit : ON
+ locale : en_US.UTF-8
+ input encoding: UTF8
+ sql port : saphana1:30013
+ ```
-### Azure NetApp Files
+### Insufficient privilege error
-The following example is from running `azacsnap` on a VM using Azure NetApp Files:
+If running `azacsnap` presents an error such as `* 258: insufficient privilege`, check that the user has the appropriate AZACSNAP database user privileges set up per the [installation guide](azacsnap-installation.md#enable-communication-with-database). Verify the user's privileges with the following command:
```bash
-azacsnap --configfile azacsnap.json.NOT-WORKING -c test --test storage
+hdbsql -U AZACSNAP "select GRANTEE,GRANTEE_TYPE,PRIVILEGE,IS_VALID,IS_GRANTABLE from sys.granted_privileges " | grep -i -e GRANTEE -e azacsnap
```
+The command should return the following output:
+ ```output
-BEGIN : Test process started for 'storage'
-BEGIN : Storage test snapshots on 'data' volumes
-BEGIN : 1 task(s) to Test Snapshots for Storage Volume Type 'data'
-ERROR: Could not create StorageANF object [authFile = 'azureauth.json']
+GRANTEE,GRANTEE_TYPE,PRIVILEGE,IS_VALID,IS_GRANTABLE
+"AZACSNAP","USER","BACKUP ADMIN","TRUE","FALSE"
+"AZACSNAP","USER","CATALOG READ","TRUE","FALSE"
+"AZACSNAP","USER","CREATE ANY","TRUE","TRUE"
```
-**Solution:**
-
-1. Check for the existence of the Service Principal file, `azureauth.json`, as set in the `azacsnap.json` configuration file.
-1. Check the log file (for example, `logs/azacsnap-test-azacsnap.log`) to see if the Service Principal (`azureauth.json`) has the correct content. Example from log as follows:
+The error might provide further information to help determine the required SAP HANA privileges, such as `Detailed info for this error can be found with guid '99X9999X99X9999X99X99XX999XXX999' SQLSTATE: HY000`. In this case, follow the instructions at [SAP Help Portal - GET_INSUFFICIENT_PRIVILEGE_ERROR_DETAILS](https://help.sap.com/viewer/b3ee5778bc2e4a089d3299b82ec762a7/2.0.05/en-US/9a73c4c017744288b8d6f3b9bc0db043.html), which recommend using the following SQL query to determine the details of the required privilege:
- ```output
- [19/Nov/2020:18:39:49 +13:00] DEBUG: [PID:0020080:StorageANF:659] [1] Innerexception: Microsoft.IdentityModel.Clients.ActiveDirectory.AdalServiceException AADSTS7000215: Invalid client secret is provided.
- ```
+```sql
+CALL SYS.GET_INSUFFICIENT_PRIVILEGE_ERROR_DETAILS ('99X9999X99X9999X99X99XX999XXX999', ?)
+```
-1. Check the log file (for example, `logs/azacsnap-test-azacsnap.log`) to see if the Service Principal (`azureauth.json`) has expired. Example from log as follows:
+```output
+GUID,CREATE_TIME,CONNECTION_ID,SESSION_USER_NAME,CHECKED_USER_NAME,PRIVILEGE,IS_MISSING_ANALYTIC_PRIVILEGE,IS_MISSING_GRANT_OPTION,DATABASE_NAME,SCHEMA_NAME,OBJECT_NAME,OBJECT_TYPE
+"99X9999X99X9999X99X99XX999XXX999","2021-01-01 01:00:00.180000000",120212,"AZACSNAP","AZACSNAP","DATABASE ADMIN or DATABASE BACKUP ADMIN","FALSE","FALSE","","","",""
+```
- ```output
- [19/Nov/2020:18:41:10 +13:00] DEBUG: [PID:0020257:StorageANF:659] [1] Innerexception: Microsoft.IdentityModel.Clients.ActiveDirectory.AdalServiceException AADSTS7000222: The provided client secret keys are expired. Visit the Azure Portal to create new keys for your app, or consider using certificate credentials for added security: https://docs.microsoft.com/azure/active-directory/develop/active-directory-certificate-credentials
- ```
+In the preceding example, adding the `DATABASE BACKUP ADMIN` privilege to the SYSTEMDB's AZACSNAP user should resolve the insufficient privilege error.
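For example, the missing privilege could then be granted with a statement like the following (run as a suitably privileged admin user; confirm the exact privilege name from the error details first):

```sql
-- Grant the privilege reported by GET_INSUFFICIENT_PRIVILEGE_ERROR_DETAILS
GRANT DATABASE BACKUP ADMIN TO AZACSNAP;
```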
## Next steps
-- [Tips](azacsnap-tips.md)
+- [Tips and tricks for using AzAcSnap](azacsnap-tips.md)
+- [AzAcSnap command reference](azacsnap-cmd-ref-configure.md)
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
This article provides references to best practices that can help you understand
The following diagram summarizes the categories of solution architectures that Azure NetApp Files offers:
-![Solution architecture categories](../media/azure-netapp-files/solution-architecture-categories.png)
## Linux OSS Apps and Database solutions
azure-percept Overview Ai Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-ai-models.md
With pre-trained models, no coding or training data collection is required. Simp
## Reference solutions
-A people counting reference solution is also available. This reference solution is an open-source AI application providing edge-based people counting with user-defined zone entry/exit events. Video and AI output from the on-premise edge device is egressed to [Azure Data Lake](https://azure.microsoft.com/solutions/data-lake/), with the user interface running as an Azure website. AI inferencing is provided by an open-source AI model for people detection.
+A people counting reference solution is also available. This reference solution is an open-source AI application providing edge-based people counting with user-defined zone entry/exit events. Video and AI output from the on-premises edge device is egressed to [Azure Data Lake](https://azure.microsoft.com/solutions/data-lake/), with the user interface running as an Azure website. AI inferencing is provided by an open-source AI model for people detection.
:::image type="content" source="./media/overview-ai-models/people-detector.gif" alt-text="Spatial analytics pre-built solution gif.":::
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | communicationservices | Yes | Yes | No |
+> | communicationservices | Yes | Yes <br/><br/> Note that resources with attached phone numbers cannot be moved to subscriptions in different data locations, nor subscriptions that do not support having phone numbers. | No |
## Microsoft.Compute
azure-signalr Concept Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-metrics.md
description: Metrics in Azure SignalR Service.
Previously updated : 04/08/2022 Last updated : 06/03/2022
# Metrics in Azure SignalR Service
-Azure SignalR Service has some built-in metrics and you and sets up [alerts](../azure-monitor/alerts/alerts-overview.md) and [autoscale](./signalr-howto-scale-autoscale.md) base on metrics.
+Metrics in SignalR Service is an implementation of [Azure Monitor Metrics](../azure-monitor/essentials/data-platform-metrics.md). Understanding how Azure Monitor collects and displays metrics is helpful for using metrics in SignalR Service. Azure SignalR Service defines a collection of metrics that can be used to set up [alerts](../azure-monitor/alerts/alerts-overview.md) and [autoscale conditions](./signalr-howto-scale-autoscale.md).
-## Understand metrics
+## SignalR Service metrics
-Metrics provide the running info of the service. The available metrics are:
-
-> [!CAUTION]
-> The aggregation type "Count" is meaningless for all the metrics. Please DO NOT use it.
+Metrics provide insights into the operational state of the service. The available metrics are:
|Metric|Unit|Recommended Aggregation Type|Description|Dimensions|
||||||
-|Connection Close Count|Count|Sum|The count of connections closed by various reasons.|Endpoint, ConnectionCloseCategory|
-|Connection Count|Count|Max / Avg|The amount of connection.|Endpoint|
-|Connection Open Count|Count|Sum|The count of new connections opened.|Endpoint|
-|Connection Quota Utilization|Percent|Max / Avg|The percentage of connection connected relative to connection quota.|No Dimensions|
-|Inbound Traffic|Bytes|Sum|The inbound traffic of service|No Dimensions|
-|Message Count|Count|Sum|The total amount of messages.|No Dimensions|
-|Outbound Traffic|Bytes|Sum|The outbound traffic of service|No Dimensions|
-|System Errors|Percent|Avg|The percentage of system errors|No Dimensions|
-|User Errors|Percent|Avg|The percentage of user errors|No Dimensions|
-|Server Load|Percent|Max / Avg|The percentage of server load|No Dimensions|
+|**Connection Close Count**|Count|Sum|The count of connections closed for various reasons; see ConnectionCloseCategory for details.|Endpoint, ConnectionCloseCategory|
+|**Connection Count**|Count|Max or Avg|The number of connections.|Endpoint|
+|**Connection Open Count**|Count|Sum|The count of new connections opened.|Endpoint|
+|**Connection Quota Utilization**|Percent|Max or Avg|The percentage of connections to the server relative to the available quota.|No Dimensions|
+|**Inbound Traffic**|Bytes|Sum|The volume of inbound traffic to the service.|No Dimensions|
+|**Message Count**|Count|Sum|The total number of messages.|No Dimensions|
+|**Outbound Traffic**|Bytes|Sum|The volume of outbound traffic from the service.|No Dimensions|
+|**System Errors**|Percent|Avg|The percentage of system errors.|No Dimensions|
+|**User Errors**|Percent|Avg|The percentage of user errors.|No Dimensions|
+|**Server Load**|Percent|Max or Avg|The percentage of server load.|No Dimensions|
+
+> [!NOTE]
+> The aggregation type **Count** is the count of sampling data received. Count is defined as a general metrics aggregation type and can't be excluded from the list of available aggregation types. It's not generally useful for SignalR Service but it can sometimes be used to check if the sampling data has been sent to metrics.
+
+### Metrics dimensions
+
+A *dimension* is a name-value pair with extra data to describe the metric value. Some metrics don't have dimensions; others have multiple dimensions.
-### Understand Dimensions
+The following two sections describe the dimensions available in SignalR Service metrics.
-Dimensions of a metric are name/value pairs that carry extra data to describe the metric value.
+#### Endpoint
-The dimensions available in some metrics:
+Describes the type of connection. Includes dimension values: **Client**, **Server**, and **LiveTrace**.
-* Endpoint: Describe the type of connection. Including dimension values: Client, Server, LiveTrace
-* ConnectionCloseCategory: Describe the categories of why connection getting closed. Including dimension values:
- - Normal: Normal closure.
- - Throttled: With (Message count/rate or connection) throttling, check Connection Count and Message Count current usage and your resource limits.
- - PingTimeout: Connection ping timeout.
- - NoAvailableServerConnection: Client connection cannot be established (won't even pass handshake) as no available server connection.
- - InvokeUpstreamFailed: Upstream invoke failed.
- - SlowClient: Too many messages queued up at service side, which needed to be sent.
- - HandshakeError: Terminate connection in handshake phase, could be caused by the remote party closed the WebSocket connection without completing the close handshake. Mostly, it's caused by network issue. Otherwise, please check if the client is able to create websocket connection due to some browser settings.
- - ServerConnectionNotFound: Target hub server not available. Nothing need to be done for improvement, this is by-design and reconnection should be done after this drop.
- - ServerConnectionClosed: Client connection aborted because the corresponding server connection is dropped. When app server uses Azure SignalR Service SDK, in the background, it initiates server connections to the remote Azure SignalR service. Each client connection to the service is associated with one of the server connections to route traffic between client and app server. Once a server connection is closed, all the client connections it serves will be closed with ServerConnectionDropped message.
- - ServiceTransientError: Internal server error
- - BadRequest: This caused by invalid hub name, wrong payload, etc.
- - ClosedByAppServer: App server asks the service to close the client.
- - ServiceReload: This is triggered when a connection is dropped due to an internal service component reload. This event does not indicate a malfunction and is part of normal service operation.
- - ServiceModeSwitched: Connection closed after service mode switched like from serverless mode to default mode
- - Unauthorized: The connection is unauthorized
+#### ConnectionCloseCategory
-Learn more about [multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics)
+Gives the reason for closing the connection. Includes the following dimension values.
-### Understand the minimum grain of message count
+| Value | Description |
+||--|
+| **Normal** | Connection closed normally.|
+|**Throttled**|The connection was throttled, either on message count/rate or on connection count. Check your current **Connection Count** and **Message Count** usage against your resource limits.|
+|**PingTimeout**|Connection ping timeout.|
+|**NoAvailableServerConnection**|Client connection can't be established (won't even pass handshake) because there's no available server connection.|
+|**InvokeUpstreamFailed**|Upstream invoke failed.|
+|**SlowClient**|Too many unsent messages queued up at service side.|
+|**HandshakeError**|The connection terminated in the handshake phase. This can happen when the remote party closes the WebSocket connection without completing the close handshake, most often because of a network issue. Otherwise, check whether browser settings prevent the client from creating a WebSocket connection.|
+|**ServerConnectionNotFound**|Target hub server not available. No action is needed; this drop is by design, and the client should reconnect afterward.|
+|**ServerConnectionClosed**|Client connection closed because the corresponding server connection was dropped. When app server uses Azure SignalR Service SDK, in the background, it initiates server connections to the remote Azure SignalR Service. Each client connection to the service is associated with one of the server connections to route traffic between the client and app server. Once a server connection is closed, all the client connections it serves will be closed with the **ServerConnectionDropped** message.|
+|**ServiceTransientError**|Internal server error.|
+|**BadRequest**|A bad request is caused by an invalid hub name, wrong payload, or a malformed request.|
+|**ClosedByAppServer**|App server asked the service to close the client.|
+|**ServiceReload**|Service reload is triggered when a connection is dropped due to an internal service component reload. This event doesn't indicate a malfunction and is part of normal service operation.|
+|**ServiceModeSwitched**|Connection closed after service mode switched, such as from Serverless mode to Default mode.|
+|**Unauthorized**|The connection is unauthorized.|
-The minimum grain of message count showed in metric are 1, which means 2 KB outbound data traffic. If user sending very small amount of messages such as several bytes in a sampling time period, the message count will be 0.
+For more information, see [multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics) in Azure Monitor.
-The way to check out small amount of messages is using metrics *Outbound Traffic*, which is count by bytes.
+### Message Count granularity
-### Understand System Errors and User Errors
+The minimum Message Count granularity is 2 KB of outbound data traffic; every 2 KB counts as one Message Count unit. If a client sends small or infrequent messages totaling less than one unit in a sampling period, the message count is zero (0), even though messages were sent. To track small message volumes, use the **Outbound Traffic** metric instead, which is counted in bytes.
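To make the rounding concrete, here's a small Python sketch (purely illustrative; the function name is hypothetical and not part of any SignalR SDK) of how outbound bytes map to Message Count units:

```python
# Illustrative sketch (not SignalR Service code): every 2 KB of outbound
# traffic counts as one Message Count unit, so totals under 2 KB in a
# sampling period report zero.

UNIT_BYTES = 2 * 1024  # one Message Count unit = 2 KB of outbound traffic

def message_count(outbound_bytes: int) -> int:
    """Whole Message Count units for the given outbound traffic."""
    return outbound_bytes // UNIT_BYTES

print(message_count(500))    # under 2 KB: reports 0 even though data was sent
print(message_count(4096))   # 4 KB of traffic: reports 2 units
```

This is why **Outbound Traffic** (counted in bytes) is the better metric for very small message volumes.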
-The Errors are the percentage of failure operations. Operations are consist of connecting, sending message and so on. The difference between System Error and User Error is that the former is the failure caused by our internal service error and the latter is caused by users. In normal case, the System Errors should be very low and near to zero.
+### System errors and user errors
+
+The **User Errors** and **System Errors** metrics are the percentage of attempted operations (connecting, sending a message, and so on) that failed. A system error is a failure in the internal system logic. A user error is generally an application error, often related to networking. Normally, the percentage of system errors should be low, near zero.
> [!IMPORTANT]
-> In some cases, the User Error will be always very high, especially in serverless case. In some browser, when user close the web page, the SignalR client doesn't close gracefully. The service will finally close it because of timeout. The timeout closure will be counted into User Error.
+> In some situations, the user error rate will be very high, especially in Serverless mode. In some browsers, when a user closes the web page, the SignalR client doesn't shut down gracefully. A connection may remain open but unresponsive until SignalR Service finally closes it because of a timeout. The timeout closure is counted in the **User Errors** metric.
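As a rough illustration of how these percentages are derived (the function is hypothetical, not an Azure Monitor API), an error metric is simply failed operations divided by attempted operations:

```python
# Illustrative sketch: User Errors / System Errors are the share of
# attempted operations (connect, send a message, and so on) that failed
# in a sampling period.

def error_percentage(failed_ops: int, total_ops: int) -> float:
    """Percentage of operations that failed in the sampling period."""
    if total_ops == 0:
        return 0.0
    return 100.0 * failed_ops / total_ops

print(error_percentage(3, 1000))  # a healthy service: well under 1%
```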
### Metrics suitable for autoscaling
-Connection Quota Utilization and Server load are percentage metrics which show the usage **under current unit** configuration. So they could be used to set autoscaling rules. For example, you could set a rule to scale up if the server load is greater than 70%.
+>[!NOTE]
+> Autoscaling is a Premium Tier feature only.
+
+**Connection Quota Utilization** and **Server Load** show the percentage of utilization or load compared to the currently allocated unit count. These metrics are commonly used in autoscaling rules.
+
+For example, if the current allocation is one unit (a quota of 1,000 connections) and there are 750 connections to the service, the Connection Quota Utilization is 750/1,000 = 75%. Server Load is calculated similarly, using values for compute capacity.
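The calculation above can be sketched in Python (illustrative only; it assumes the per-unit connection quota of 1,000 connections):

```python
# Illustrative sketch of the Connection Quota Utilization calculation:
# current connections divided by the quota of the allocated units.
# Assumes each SignalR unit allows 1,000 concurrent connections.

CONNECTIONS_PER_UNIT = 1000

def connection_quota_utilization(connections: int, units: int) -> float:
    """Fraction of the connection quota currently in use."""
    return connections / (units * CONNECTIONS_PER_UNIT)

print(connection_quota_utilization(750, 1))  # 750 connections on 1 unit -> 0.75
```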
-Learn more about [autoscale](./signalr-howto-scale-autoscale.md)
+To learn more about autoscaling, see [Automatically scale units of an Azure SignalR Service](./signalr-howto-scale-autoscale.md).
## Related resources

-- [Aggregation types in Azure Monitor](../azure-monitor/essentials/metrics-supported.md#microsoftsignalrservicesignalr)
+- [Automatically scale units of an Azure SignalR Service](signalr-howto-scale-autoscale.md)
+- [Azure Monitor Metrics](../azure-monitor/essentials/data-platform-metrics.md)
+- [Understanding metrics aggregation](../azure-monitor/essentials/metrics-aggregation-explained.md)
+- [Use diagnostic logs to monitor SignalR Service](signalr-howto-diagnostic-logs.md)
azure-signalr Signalr Howto Scale Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-scale-autoscale.md
description: Learn how to autoscale Azure SignalR Service.
- Previously updated : 02/11/2022
 Last updated : 06/06/2022

# Automatically scale units of an Azure SignalR Service
-Autoscale allows you to have the right unit count to handle the load on your application. It allows you to add resources to handle increases in load and also save money by removing resources that are sitting idle. See [Overview of autoscale in Microsoft Azure](../azure-monitor/autoscale/autoscale-overview.md) to learn more about the Autoscale feature of Azure Monitor.
> [!IMPORTANT]
-> This article applies to only the **Premium** tier of Azure SignalR Service.
+> Autoscaling is only available in the Azure SignalR Service Premium tier.
-By using the Autoscale feature for Azure SignalR Service, you can specify a minimum and maximum number of units and add or remove units automatically based on a set of rules.
+Azure SignalR Service Premium tier supports an *autoscale* feature, which is an implementation of [Azure Monitor autoscale](../azure-monitor/autoscale/autoscale-overview.md). Autoscale allows you to automatically scale the unit count for your SignalR Service to match the actual load on the service. Autoscale can help you optimize performance and cost for your application.
-For example, you can implement the following scaling scenarios using the Autoscale feature.
+Azure SignalR adds its own [service metrics](concept-metrics.md). However, most of the user interface is shared and common to other [Azure services that support autoscaling](../azure-monitor/autoscale/autoscale-overview.md#supported-services-for-autoscale). If you're new to the subject of Azure Monitor Metrics, review [Azure Monitor Metrics aggregation and display explained](../azure-monitor/essentials/metrics-aggregation-explained.md) before digging into SignalR Service Metrics.
-- Increase units when the Connection Quota Utilization above 70%. -- Decrease units when the Connection Quota Utilization below 20%. -- Use more units during business hours and fewer during off hours.
+## Understanding autoscale in SignalR Service
-This article shows you how you can automatically scale units in the Azure portal.
+Autoscale allows you to set conditions that will dynamically change the units allocated to SignalR Service while the service is running. Autoscale conditions are based on metrics, such as **Server Load**. Autoscale can also be configured to run on a schedule, such as every day between certain hours.
+For example, you can implement the following scaling scenarios using autoscale.
-## Autoscale setting page
-First, follow these steps to navigate to the **Scale out** page for your Azure SignalR Service.
+- Increase units when the **Connection Quota Utilization** is above 70%.
+- Decrease units when the **Server Load** is below 20%.
+- Create a schedule to add more units during peak hours and reduce units during off hours.
-1. In your browser, open the [Azure portal](https://portal.azure.com).
+Multiple factors affect the performance of SignalR Service. No single metric provides a complete view of system performance. For example, if you're sending a large number of messages, you might need to scale out even though the connection quota utilization is relatively low. The combination of **Connection Quota Utilization** and **Server Load** gives an indication of overall system load. The following guidelines apply.
-2. In your SignalR Service page, from the left menu, select **Scale out**.
+- Scale out if the connection count is over 80-90% of the quota. Scaling out before your connection quota is exhausted ensures that you'll have sufficient buffer to accept new connections before the scale-out takes effect.
+- Scale out if the **Server Load** is over 80-90%. Scaling early ensures that the service has enough capacity to maintain performance during the scale-out operation.
-3. Make sure the resource is in Premium Tier and you will see a **Custom autoscale** setting.
+The autoscale operation usually takes effect 3-5 minutes after it's triggered. It's important not to change the units too often. A good rule of thumb is to allow 30 minutes from the previous autoscale before performing another autoscale operation. In some cases, you might need to experiment to find the optimal autoscale interval.
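The guidance above (scale out around 80-90% utilization or load, and allow roughly 30 minutes between operations) can be sketched as a simple decision function. The names and thresholds here are hypothetical; the real evaluation is performed by Azure Monitor autoscale against the rules you configure:

```python
# Illustrative sketch of the guidance above (not the Azure Monitor engine):
# scale out when either metric exceeds ~85%, and wait at least 30 minutes
# after the previous scale operation before acting again.

SCALE_OUT_THRESHOLD = 0.85   # upper end of the 80-90% guidance
COOLDOWN_MINUTES = 30        # rule of thumb between autoscale operations

def should_scale_out(quota_utilization: float,
                     server_load: float,
                     minutes_since_last_scale: float) -> bool:
    """True when either metric crosses the threshold and the cooldown
    since the previous scale operation has elapsed."""
    if minutes_since_last_scale < COOLDOWN_MINUTES:
        return False
    return (quota_utilization > SCALE_OUT_THRESHOLD
            or server_load > SCALE_OUT_THRESHOLD)

print(should_scale_out(0.90, 0.40, 45))  # quota high, cooldown elapsed
print(should_scale_out(0.90, 0.40, 10))  # still in cooldown
```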
+## Custom autoscale settings
-## Custom autoscale - Default condition
-You can configure automatic scaling of units by using conditions. This scale condition is executed when none of the other scale conditions match. You can set the default condition in one of the following ways:
+Open the autoscale settings page:
-- Scale based on a metric-- Scale to specific units
+1. Go to the [Azure portal](https://portal.azure.com).
+1. Open the **SignalR** service page.
+1. From the menu on the left, under **Settings** choose **Scale out**.
+1. Select the **Configure** tab. If you have a Premium tier SignalR instance, you'll see two options for **Choose how to scale your resource**:
+ - **Manual scale**, which lets you manually change the number of units.
+ - **Custom autoscale**, which lets you create autoscale conditions based on metrics and/or a time schedule.
-You can't set a schedule to autoscale on a specific days or date range for a default condition. This scale condition is executed when none of the other scale conditions with schedules match.
+1. Choose **Custom autoscale**. Use this page to manage the autoscale conditions for your Azure SignalR service.
+
+### Default scale condition
+
+When you open custom autoscale settings for the first time, you'll see the **Default** scale condition already created for you. This scale condition is executed when none of the other scale conditions match the criteria set for them. You can't delete the **Default** condition, but you can rename it, change the rules, and change the action taken by autoscale.
+
+You can't set the default condition to autoscale on specific days or a date range. The default condition only supports scaling to a unit range. To scale according to a schedule, you'll need to add a new scale condition.
+
+Autoscale doesn't take effect until you save the default condition for the first time after selecting **Custom autoscale**.
+
+## Add or change a scale condition
+
+There are two options for how to scale your Azure SignalR resource:
+
+- **Scale based on a metric** - Scale within unit limits based on a dynamic metric. One or more scale rules are defined to set the criteria used to evaluate the metric.
+- **Scale to specific units** - Scale to a specific number of units based on a date range or recurring schedule.
### Scale based on a metric
-The following procedure shows you how to add a condition to automatically increase units (scale out) when the Connection Quota Utilization is greater than 70% and decrease units (scale in) when the Connection Quota Utilization is less than 20%. Increments or decrements are done between available units.
-1. On the **Scale out** page, select **Custom autoscale** for the **Choose how to scale your resource** option.
-1. Select **Scale based on a metric** for **Scale mode**.
-1. Select **+ Add a rule**.
+The following procedure shows you how to add a condition to increase units (scale out) when the Connection Quota Utilization is greater than 70% and decrease units (scale in) when the Connection Quota Utilization is less than 20%. Increments or decrements are done between available units.
- :::image type="content" source="./media/signalr-howto-scale-autoscale/default-autoscale.png" alt-text="Default - scale based on a metric":::
+1. On the **Scale out** page, select **Custom autoscale** for the **Choose how to scale your resource** option.
+1. Select **Scale based on a metric** for **Scale mode**.
+1. Select **+ Add a rule**.
+ :::image type="content" source="./media/signalr-howto-scale-autoscale/default-autoscale.png" alt-text="Screenshot of custom rule based on a metric.":::
1. On the **Scale rule** page, follow these steps:
- 1. Select a metric from the **Metric name** drop-down list. In this example, it's **Connection Quota Utilization**.
- 1. Select an operator and threshold values. In this example, they're **Greater than** and **70** for **Metric threshold to trigger scale action**.
- 1. Select an **operation** in the **Action** section. In this example, it's set to **Increase**.
+ 1. Select a metric from the **Metric name** drop-down list. In this example, it's **Connection Quota Utilization**.
+ 1. Select an operator and threshold values. In this example, they're **Greater than** and **70** for **Metric threshold to trigger scale action**.
+ 1. Select an **operation** in the **Action** section. In this example, it's set to **Increase**.
1. Then, select **Add**.
-
- :::image type="content" source="./media/signalr-howto-scale-autoscale/default-scale-out.png" alt-text="Default - scale out if Connection Quota Utilization is greater than 70%":::
+ :::image type="content" source="./media/signalr-howto-scale-autoscale/default-scale-out.png" alt-text="Screenshot of default autoscale rule screen.":::
1. Select **+ Add a rule** again, and follow these steps on the **Scale rule** page:
- 1. Select a metric from the **Metric name** drop-down list. In this example, it's **Connection Quota Utilization**.
- 1. Select an operator and threshold values. In this example, they're **Less than** and **20** for **Metric threshold to trigger scale action**.
- 1. Select an **operation** in the **Action** section. In this example, it's set to **Decrease**.
- 1. Then, select **Add**
+ 1. Select a metric from the **Metric name** drop-down list. In this example, it's **Connection Quota Utilization**.
+ 1. Select an operator and threshold values. In this example, they're **Less than** and **20** for **Metric threshold to trigger scale action**.
+ 1. Select an **operation** in the **Action** section. In this example, it's set to **Decrease**.
+ 1. Then, select **Add**
+ :::image type="content" source="./media/signalr-howto-scale-autoscale/default-scale-in.png" alt-text="Screenshot Connection Quota Utilization scale rule.":::
- :::image type="content" source="./media/signalr-howto-scale-autoscale/default-scale-in.png" alt-text="Default - scale in if Connection Quota Utilization is less than 20%":::
+1. Set the **minimum**, **maximum**, and **default** number of units.
+1. Select **Save** on the toolbar to save the autoscale setting.
-1. Set the **minimum** and **maximum** and **default** number of units.
+### Scale to specific units
-1. Select **Save** on the toolbar to save the autoscale setting.
-
-### Scale to specific number of units
-Follow these steps to configure the rule to scale to a specific units. Again, the default condition is applied when none of the other scale conditions match.
+Follow these steps to configure the rule to scale to a specific unit range.
-1. On the **Scale out** page, select **Custom autoscale** for the **Choose how to scale your resource** option.
-1. Select **Scale to a specific units** for **Scale mode**.
-1. For **Units**, select the number of default units.
+1. On the **Scale out** page, select **Custom autoscale** for the **Choose how to scale your resource** option.
+1. Select **Scale to a specific units** for **Scale mode**.
+1. For **Units**, select the number of default units.
+ :::image type="content" source="./media/signalr-howto-scale-autoscale/default-specific-units.png" alt-text="Screenshot of scale rule criteria.":::
- :::image type="content" source="./media/signalr-howto-scale-autoscale/default-specific-units.png" alt-text="Default - scale to specific units":::
+## Add more conditions
-## Custom autoscale - Additional conditions
-The previous section shows you how to add a default condition for the autoscale setting. This section shows you how to add more conditions to the autoscale setting. For these additional non-default conditions, you can set a schedule based on specific days of a week or a date range.
+The previous section showed you how to add a default condition for the autoscale setting. This section shows you how to add more conditions to the autoscale setting.
-### Scale based on a metric
-1. On the **Scale out** page, select **Custom autoscale** for the **Choose how to scale your resource** option.
-1. Select **Add a scale condition** under the **Default** block.
-
- :::image type="content" source="./media/signalr-howto-scale-autoscale/additional-add-condition.png" alt-text="Custom - add a scale condition link":::
-1. Confirm that the **Scale based on a metric** option is selected.
-1. Select **+ Add a rule** to add a rule to increase units when the **Connection Quota Utilization** goes above 70%. Follow steps from the [default condition](#custom-autoscaledefault-condition) section.
-5. Set the **minimum** and **maximum** and **default** number of units.
-6. You can also set a **schedule** on a custom condition (but not on the default condition). You can either specify start and end dates for the condition (or) select specific days (Monday, Tuesday, and so on.) of a week.
- 1. If you select **Specify start/end dates**, select the **Timezone**, **Start date and time** and **End date and time** (as shown in the following image) for the condition to be in effect.
- 1. If you select **Repeat specific days**, select the days of the week, timezone, start time, and end time when the condition should apply.
+1. On the **Scale out** page, select **Custom autoscale** for the **Choose how to scale your resource** option.
+1. Select **Add a scale condition** under the **Default** block.
+ :::image type="content" source="./media/signalr-howto-scale-autoscale/additional-add-condition.png" alt-text="Screenshot of custom scale rule screen.":::
+1. Confirm that the **Scale based on a metric** option is selected.
+1. Select **+ Add a rule** to add a rule to increase units when the **Connection Quota Utilization** goes above 70%. Follow steps from the [default condition](#default-scale-condition) section.
+1. Set the **minimum** and **maximum** and **default** number of units.
+1. You can also set a **schedule** on a custom condition (but not on the default condition). You can either specify start and end dates for the condition, or select specific days of the week (Monday, Tuesday, and so on).
+ 1. If you select **Specify start/end dates**, select the **Timezone**, **Start date and time** and **End date and time** (as shown in the following image) for the condition to be in effect.
+ 1. If you select **Repeat specific days**, select the days of the week, timezone, start time, and end time when the condition should apply.
+
+## Next steps
+
+For more information about managing autoscale from the Azure CLI, see [**az monitor autoscale**](/cli/azure/monitor/autoscale?view=azure-cli-latest&preserve-view=true).
azure-video-indexer Deploy With Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-arm-template.md
Title: Deploy Azure Video Indexer with ARM template
-description: In this tutorial you will create an Azure Video Indexer account by using Azure Resource Manager (ARM) template.
+description: Learn how to create an Azure Video Indexer account by using Azure Resource Manager (ARM) template.
Last updated 05/23/2022
## Overview
-In this tutorial you will create an Azure Video Indexer account by using Azure Resource Manager (ARM) template (preview).
+In this tutorial, you will create an Azure Video Indexer account by using Azure Resource Manager (ARM) template (preview).
The resource will be deployed to your subscription and will create the Azure Video Indexer resource based on parameters defined in the avam.template file.

> [!NOTE]
> This sample is *not* for connecting an existing Azure Video Indexer classic account to an ARM-based Azure Video Indexer account.
> For full documentation on Azure Video Indexer API, visit the [Developer portal](https://aka.ms/avam-dev-portal) page.
-> The current API Version is "2021-10-27-preview". Check this Repo from time to time to get updates on new API Versions.
+> For the latest API version for Microsoft.VideoIndexer, see the [template reference](/azure/templates/microsoft.videoindexer/accounts?tabs=bicep).
## Prerequisites
The resource will be deployed to your subscription and will create the Azure Vid
* Create a new Resource group in the same location as your Azure Video Indexer account, using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet.

  ```powershell
  New-AzResourceGroup -Name myResourceGroup -Location eastus
  ```
The resource will be deployed to your subscription and will create the Azure Vid
``` > [!NOTE]
-> If you would like to work with bicep format, inspect the [bicep file](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/ARM-Quick-Start/avam.template.bicep) on this repo.
+> If you would like to work with bicep format, see [Deploy by using Bicep](./deploy-with-bicep.md).
## Parameters
azure-video-indexer Deploy With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-bicep.md
+
+ Title: Deploy Azure Video Indexer by using Bicep
+description: Learn how to create an Azure Video Indexer account by using a Bicep file.
+ Last updated : 06/06/2022
+# Tutorial: Deploy Azure Video Indexer by using Bicep
+
+In this tutorial, you create an Azure Video Indexer account by using [Bicep](../azure-resource-manager/bicep/overview.md).
+
+> [!NOTE]
+> This sample is *not* for connecting an existing Azure Video Indexer classic account to an ARM-based Azure Video Indexer account.
+> For full documentation on Azure Video Indexer API, visit the [Developer portal](https://aka.ms/avam-dev-portal) page.
+> For the latest API version for Microsoft.VideoIndexer, see the [template reference](/azure/templates/microsoft.videoindexer/accounts?tabs=bicep).
+
+## Prerequisites
+
+* An Azure Media Services (AMS) account. You can create one for free through the [Create AMS Account](/azure/media-services/latest/account-create-how-to).
+
+## Review the Bicep file
+
+The Bicep file used in this tutorial is:
++
+One Azure resource is defined in the Bicep file:
+
+* [Microsoft.videoIndexer/accounts](/azure/templates/microsoft.videoindexer/accounts?tabs=bicep)
+
+For more up-to-date Bicep samples, check [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates).
+
+## Deploy the sample
+
+1. Save the Bicep file as main.bicep to your local computer.
+1. Deploy the Bicep file by using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters accountName=<account-name> managedIdentityResourceId=<managed-identity> mediaServiceAccountResourceId=<media-service-account-resource-id>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -accountName "<account-name>" -managedIdentityResourceId "<managed-identity>" -mediaServiceAccountResourceId "<media-service-account-resource-id>"
+ ```
+
+
+
+    The location must be the same as the location of the existing Azure Media Services account. You need to provide values for the parameters:
+
+ * Replace **\<account-name\>** with the name of the new Azure video indexer account.
+    * Replace **\<managed-identity\>** with the managed identity used to grant access to the Azure Media Services (AMS) account.
+    * Replace **\<media-service-account-resource-id\>** with the resource ID of the existing Azure Media Services account.
+
+## Reference documentation
+
+If you're new to Azure Video Indexer, see:
+
+* [Azure Video Indexer Documentation](./index.yml)
+* [Azure Video Indexer Developer Portal](https://api-portal.videoindexer.ai/)
+* After completing this tutorial, head to other Azure Video Indexer samples, described on [README.md](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/README.md)
+
+If you're new to Bicep deployment, see:
+
+* [Azure Resource Manager documentation](../azure-resource-manager/index.yml)
+* [Deploy Resources with Bicep and Azure PowerShell](../azure-resource-manager/bicep/deploy-powershell.md)
+* [Deploy Resources with Bicep and Azure CLI](../azure-resource-manager/bicep/deploy-cli.md)
+
+## Next steps
+
+[Connect an existing classic paid Azure Video Indexer account to ARM-based account](connect-classic-account-to-arm.md)
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
Azure VMware Solution will apply important updates starting in March 2021. You'l
All new Azure VMware Solution private clouds in regions (East US2, Canada Central, North Europe, and Japan East) are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
-Any existing private clouds in the above mentioned regions will also be upgraded to these versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html).
+Any existing private clouds in the previously mentioned regions will also be upgraded to these versions. For more information, see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-vcenter-server-70-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html).
## May 23, 2022
azure-vmware Vrealize Operations For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/vrealize-operations-for-azure-vmware-solution.md
Title: Configure vRealize Operations for Azure VMware Solution
-description: Learn how to set up vRealize Operations for your Azure VMware Solution private cloud.
+description: Learn how to set up vRealize Operations for your Azure VMware Solution private cloud.
Last updated 04/11/2022
Last updated 04/11/2022
# Configure vRealize Operations for Azure VMware Solution
-vRealize Operations is an operations management platform that allows VMware infrastructure administrators to monitor system resources. These system resources could be application-level or infrastructure level (both physical and virtual) objects. Most VMware administrators have used vRealize Operations to monitor and manage the VMware private cloud components ΓÇô vCenter Server, ESXi, NSX-T Data Center, vSAN, and VMware HCX. Each provisioned Azure VMware Solution private cloud includes a dedicated vCenter Server, NSX-T Data Center, vSAN, and HCX deployment.
+vRealize Operations is an operations management platform that allows VMware infrastructure administrators to monitor system resources. These system resources could be application-level or infrastructure-level (both physical and virtual) objects. Most VMware administrators have used vRealize Operations to monitor and manage the VMware private cloud components – vCenter Server, ESXi, NSX-T Data Center, vSAN, and VMware HCX. Each provisioned Azure VMware Solution private cloud includes a dedicated vCenter Server, NSX-T Data Center, vSAN, and HCX deployment.
Thoroughly review [Before you begin](#before-you-begin) and [Prerequisites](#prerequisites) first. Then, we'll walk you through the two typical deployment topologies:
Thoroughly review [Before you begin](#before-you-begin) and [Prerequisites](#pre
> * [vRealize Operations running on Azure VMware Solution deployment](#vrealize-operations-running-on-azure-vmware-solution-deployment)

## Before you begin
-* Review the [vRealize Operations Manager product documentation](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-7FFC61A0-7562-465C-A0DC-46D092533984.html) to learn more about deploying vRealize Operations.
+* Review the [vRealize Operations Manager product documentation](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-7FFC61A0-7562-465C-A0DC-46D092533984.html) to learn more about deploying vRealize Operations.
* Review the basic Azure VMware Solution Software-Defined Datacenter (SDDC) [tutorial series](tutorial-network-checklist.md).
-* Optionally, review the [vRealize Operations Remote Controller](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-263F9219-E801-4383-8A59-E84F3D01ED6B.html) product documentation for the on-premises vRealize Operations managing Azure VMware Solution deployment option.
+* Optionally, review the [vRealize Operations Remote Controller](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-263F9219-E801-4383-8A59-E84F3D01ED6B.html) product documentation for the on-premises vRealize Operations managing Azure VMware Solution deployment option.
## Prerequisites
Thoroughly review [Before you begin](#before-you-begin) and [Prerequisites](#pre
## On-premises vRealize Operations managing Azure VMware Solution deployment
-Most customers have an existing on-premise deployment of vRealize Operations to manage one or more on-premises vCenter Server domains. When they provision an Azure VMware Solution private cloud, they connect their on-premises environment with their private cloud using an Azure ExpressRoute or a Layer 3 VPN solution.
+Most customers have an existing on-premises deployment of vRealize Operations to manage one or more on-premises vCenter Server domains. When they provision an Azure VMware Solution private cloud, they connect their on-premises environment with their private cloud using an Azure ExpressRoute or a Layer 3 VPN solution.
:::image type="content" source="media/vrealize-operations-manager/vrealize-operations-deployment-option-1.png" alt-text="Diagram showing the on-premises vRealize Operations managing Azure VMware Solution deployment." border="false":::
-To extend the vRealize Operations capabilities to the Azure VMware Solution private cloud, you create an adapter [instance for the private cloud resources](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.config.doc/GUID-640AD750-301E-4D36-8293-1BFEB67E2600.html). It collects data from the Azure VMware Solution private cloud and brings it into on-premises vRealize Operations. The on-premises vRealize Operations Manager instance can directly connect to the vCenter Server and NSX-T Manager on Azure VMware Solution. Optionally, you can deploy a vRealize Operations Remote Collector on the Azure VMware Solution private cloud. The collector compresses and encrypts the data collected from the private cloud before it's sent over the ExpressRoute or VPN network to the vRealize Operations Manager running on-premise.
+To extend the vRealize Operations capabilities to the Azure VMware Solution private cloud, you create an adapter [instance for the private cloud resources](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.config.doc/GUID-640AD750-301E-4D36-8293-1BFEB67E2600.html). It collects data from the Azure VMware Solution private cloud and brings it into on-premises vRealize Operations. The on-premises vRealize Operations Manager instance can directly connect to the vCenter Server and NSX-T Manager on Azure VMware Solution. Optionally, you can deploy a vRealize Operations Remote Collector on the Azure VMware Solution private cloud. The collector compresses and encrypts the data collected from the private cloud before it's sent over the ExpressRoute or VPN network to the vRealize Operations Manager running on-premises.
> [!TIP]
-> Refer to the [VMware documentation](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-7FFC61A0-7562-465C-A0DC-46D092533984.html) for step-by-step guide for installing vRealize Operations Manager.
+> Refer to the [VMware documentation](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-7FFC61A0-7562-465C-A0DC-46D092533984.html) for a step-by-step guide to installing vRealize Operations Manager.
## vRealize Operations running on Azure VMware Solution deployment
-Another option is to deploy an instance of vRealize Operations Manager on a vSphere cluster in the private cloud.
+Another option is to deploy an instance of vRealize Operations Manager on a vSphere cluster in the private cloud.
>[!IMPORTANT]
>This option isn't currently supported by VMware.

:::image type="content" source="media/vrealize-operations-manager/vrealize-operations-deployment-option-2.png" alt-text="Diagram showing the vRealize Operations running on Azure VMware Solution." border="false":::
-Once the instance has been deployed, you can configure vRealize Operations to collect data from vCenter Server, ESXi, NSX-T Data Center, vSAN, and HCX.
+Once the instance has been deployed, you can configure vRealize Operations to collect data from vCenter Server, ESXi, NSX-T Data Center, vSAN, and HCX.
Once the instance has been deployed, you can configure vRealize Operations to co
- The **cloudadmin@vsphere.local** user in Azure VMware Solution has [limited privileges](concepts-identity.md). Virtual machines (VMs) on Azure VMware Solution don't support in-guest memory collection using VMware tools. Active and consumed memory utilization continues to work in this case.
- Workload optimization for host-based business intent doesn't work because Azure VMware Solution manages cluster configurations, including DRS settings.
- Workload optimization for the cross-cluster placement within the SDDC using the cluster-based business intent is fully supported with vRealize Operations Manager 8.0 and onwards. However, workload optimization isn't aware of resource pools and places the VMs at the cluster level. A user can manually correct it in the Azure VMware Solution vCenter Server interface.
-- You can't sign in to vRealize Operations Manager using your Azure VMware Solution vCenter Server credentials.
+- You can't sign in to vRealize Operations Manager using your Azure VMware Solution vCenter Server credentials.
- Azure VMware Solution doesn't support the vRealize Operations Manager plugin. When you connect the Azure VMware Solution vCenter to vRealize Operations Manager using a vCenter Server Cloud Account, you'll see a warning:
backup Backup Azure Arm Userestapi Createorupdatepolicy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-userestapi-createorupdatepolicy.md
Title: Create backup policies using REST API
description: In this article, you'll learn how to create and manage backup policies (schedule and retention) using REST API.
Previously updated : 08/21/2018
Last updated : 06/13/2022
ms.assetid: 5ffc4115-0ae5-4b85-a18c-8a942f6d4870

# Create Azure Recovery Services backup policies using REST API
For the complete list of definitions in the request body, refer to the [backup p
### Example request body
+#### For Azure VM backup
+ The following request body defines a backup policy for Azure VM backups.
-The policy says:
+This policy:
-- Take a weekly backup every Monday, Wednesday, Thursday at 10:00 AM Pacific Standard Time.-- Retain the backups taken on every Monday, Wednesday, Thursday for one week.-- Retain the backups taken on every first Wednesday and third Thursday of a month for two months (overrides the previous retention conditions, if any).-- Retain the backups taken on fourth Monday and fourth Thursday in February and November for four years (overrides the previous retention conditions, if any).
+- Takes a weekly backup every Monday, Wednesday, Thursday at 10:00 AM Pacific Standard Time.
+- Retains the backups taken on every Monday, Wednesday, Thursday for one week.
+- Retains the backups taken on every first Wednesday and third Thursday of a month for two months (overrides the previous retention conditions, if any).
+- Retains the backups taken on fourth Monday and fourth Thursday in February and November for four years (overrides the previous retention conditions, if any).
```json {
The policy says:
> [!IMPORTANT]
> The time formats for schedule and retention support only DateTime. They don't support Time format alone.
+#### For SQL in Azure VM backup
+
+The following is an example request body for SQL in Azure VM backup.
+
+This policy:
+
+- Takes a full backup every day at 13:30 UTC and a log backup every hour.
+- Retains both the daily full backups and the log backups for 30 days.
+
+```json
+"properties": {
+ "backupManagementType": "AzureWorkload",
+ "workLoadType": "SQLDataBase",
+ "settings": {
+ "timeZone": "UTC",
+ "issqlcompression": false,
+ "isCompression": false
+ },
+ "subProtectionPolicy": [
+ {
+ "policyType": "Full",
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunFrequency": "Daily",
+ "scheduleRunTimes": [
+ "2022-02-14T13:30:00Z"
+ ],
+ "scheduleWeeklyFrequency": 0
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "LongTermRetentionPolicy",
+ "dailySchedule": {
+ "retentionTimes": [
+ "2022-02-14T13:30:00Z"
+ ],
+ "retentionDuration": {
+ "count": 30,
+ "durationType": "Days"
+ }
+ }
+ }
+ },
+ {
+ "policyType": "Log",
+ "schedulePolicy": {
+ "schedulePolicyType": "LogSchedulePolicy",
+ "scheduleFrequencyInMins": 60
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "SimpleRetentionPolicy",
+ "retentionDuration": {
+ "count": 30,
+ "durationType": "Days"
+ }
+ }
+ }
+ ],
+ "protectedItemsCount": 0
+ }
+```
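The schedule and retention encoded in a body like the one above can be summarized client-side before submitting it. This is an illustrative sketch, not part of the API: the helper name `summarize_policy` and the abbreviated body are assumptions made for the example.

```python
import json

# The SQL policy body from above, abbreviated to the fields the summary uses.
policy = json.loads("""
{
  "subProtectionPolicy": [
    {
      "policyType": "Full",
      "schedulePolicy": {"schedulePolicyType": "SimpleSchedulePolicy",
                         "scheduleRunFrequency": "Daily"}
    },
    {
      "policyType": "Log",
      "schedulePolicy": {"schedulePolicyType": "LogSchedulePolicy",
                         "scheduleFrequencyInMins": 60}
    }
  ]
}
""")

def summarize_policy(body):
    """Return a {policyType: description} map for a subProtectionPolicy list."""
    out = {}
    for sub in body["subProtectionPolicy"]:
        sched = sub["schedulePolicy"]
        if sched["schedulePolicyType"] == "LogSchedulePolicy":
            # A log schedule is expressed as a frequency in minutes.
            per_day = 24 * 60 // sched["scheduleFrequencyInMins"]
            out[sub["policyType"]] = f"{per_day} log backups per day"
        else:
            out[sub["policyType"]] = f"{sched['scheduleRunFrequency']} backup"
    return out

print(summarize_policy(policy))  # {'Full': 'Daily backup', 'Log': '24 log backups per day'}
```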
+
+The following is an example of a policy that takes a differential backup every day and a full backup once a week.
+
+```json
+"properties": {
+ "backupManagementType": "AzureWorkload",
+ "workLoadType": "SQLDataBase",
+ "settings": {
+ "timeZone": "UTC",
+ "issqlcompression": false,
+ "isCompression": false
+ },
+ "subProtectionPolicy": [
+ {
+ "policyType": "Full",
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunFrequency": "Weekly",
+ "scheduleRunDays": [
+ "Sunday"
+ ],
+ "scheduleRunTimes": [
+ "2022-06-13T19:30:00Z"
+ ],
+ "scheduleWeeklyFrequency": 0
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "LongTermRetentionPolicy",
+ "weeklySchedule": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "retentionTimes": [
+ "2022-06-13T19:30:00Z"
+ ],
+ "retentionDuration": {
+ "count": 104,
+ "durationType": "Weeks"
+ }
+ },
+ "monthlySchedule": {
+ "retentionScheduleFormatType": "Weekly",
+ "retentionScheduleWeekly": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "weeksOfTheMonth": [
+ "First"
+ ]
+ },
+ "retentionTimes": [
+ "2022-06-13T19:30:00Z"
+ ],
+ "retentionDuration": {
+ "count": 60,
+ "durationType": "Months"
+ }
+ },
+ "yearlySchedule": {
+ "retentionScheduleFormatType": "Weekly",
+ "monthsOfYear": [
+ "January"
+ ],
+ "retentionScheduleWeekly": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "weeksOfTheMonth": [
+ "First"
+ ]
+ },
+ "retentionTimes": [
+ "2022-06-13T19:30:00Z"
+ ],
+ "retentionDuration": {
+ "count": 10,
+ "durationType": "Years"
+ }
+ }
+ }
+ },
+ {
+ "policyType": "Differential",
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunFrequency": "Weekly",
+ "scheduleRunDays": [
+ "Monday",
+ "Tuesday",
+ "Wednesday",
+ "Thursday",
+ "Friday",
+ "Saturday"
+ ],
+ "scheduleRunTimes": [
+ "2022-06-13T02:00:00Z"
+ ],
+ "scheduleWeeklyFrequency": 0
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "SimpleRetentionPolicy",
+ "retentionDuration": {
+ "count": 30,
+ "durationType": "Days"
+ }
+ }
+ },
+ {
+ "policyType": "Log",
+ "schedulePolicy": {
+ "schedulePolicyType": "LogSchedulePolicy",
+ "scheduleFrequencyInMins": 120
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "SimpleRetentionPolicy",
+ "retentionDuration": {
+ "count": 15,
+ "durationType": "Days"
+ }
+ }
+ }
+ ],
+ "protectedItemsCount": 0
+ }
+```
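A multi-part body like this one carries each `policyType` (Full, Differential, Log) as a separate entry in `subProtectionPolicy`. As a client-side sanity check, duplicates can be caught locally before the request is sent. A minimal sketch; the function name is illustrative:

```python
from collections import Counter

def duplicate_policy_types(sub_protection_policy):
    """Return the policyType values that occur more than once."""
    counts = Counter(p["policyType"] for p in sub_protection_policy)
    return sorted(t for t, n in counts.items() if n > 1)

subs = [{"policyType": "Full"}, {"policyType": "Differential"}, {"policyType": "Log"}]
print(duplicate_policy_types(subs))                            # []
print(duplicate_policy_types(subs + [{"policyType": "Log"}]))  # ['Log']
```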
+
+#### For SAP HANA in Azure VM backup
+
+The following is an example request body for SAP HANA in Azure VM backup.
+
+This policy:
+
+- Takes a full backup every day at 19:30 UTC and a log backup every 2 hours.
+- Retains the daily backups for 180 days.
+- Retains the weekly backups for 104 weeks.
+- Retains the monthly backups for 60 months.
+- Retains the yearly backups for 10 years.
+- Retains the log backups for 15 days.
+
+```json
+{
+ "properties": {
+ "backupManagementType": "AzureIaasVM",
+ "timeZone": "Pacific Standard Time",
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunFrequency": "Weekly",
+ "scheduleRunTimes": [
+ "2018-01-24T10:00:00Z"
+ ],
+ "scheduleRunDays": [
+ "Monday",
+ "Wednesday",
+ "Thursday"
+ ]
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "LongTermRetentionPolicy",
+ "weeklySchedule": {
+ "daysOfTheWeek": [
+ "Monday",
+ "Wednesday",
+ "Thursday"
+ ],
+ "retentionTimes": [
+ "2018-01-24T10:00:00Z"
+ ],
+ "retentionDuration": {
+ "count": 1,
+ "durationType": "Weeks"
+ }
+ },
+ "monthlySchedule": {
+ "retentionScheduleFormatType": "Weekly",
+ "retentionScheduleWeekly": {
+ "daysOfTheWeek": [
+ "Wednesday",
+ "Thursday"
+ ],
+ "weeksOfTheMonth": [
+ "First",
+ "Third"
+ ]
+ },
+ "retentionTimes": [
+ "2018-01-24T10:00:00Z"
+ ],
+ "retentionDuration": {
+ "count": 2,
+ "durationType": "Months"
+ }
+ },
+ "yearlySchedule": {
+ "retentionScheduleFormatType": "Weekly",
+ "monthsOfYear": [
+ "February",
+ "November"
+ ],
+ "retentionScheduleWeekly": {
+ "daysOfTheWeek": [
+ "Monday",
+ "Thursday"
+ ],
+ "weeksOfTheMonth": [
+ "Fourth"
+ ]
+ },
+ "retentionTimes": [
+ "2018-01-24T10:00:00Z"
+ ],
+ "retentionDuration": {
+ "count": 4,
+ "durationType": "Years"
+ }
+ }
+ }
+ }
+}
+```
+
+The following is an example of a policy that takes a full backup once a week and an incremental backup once a day.
+
+```json
+
+"properties": {
+ "backupManagementType": "AzureWorkload",
+ "workLoadType": "SAPHanaDatabase",
+ "settings": {
+ "timeZone": "UTC",
+ "issqlcompression": false,
+ "isCompression": false
+ },
+ "subProtectionPolicy": [
+ {
+ "policyType": "Full",
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunFrequency": "Weekly",
+ "scheduleRunDays": [
+ "Sunday"
+ ],
+ "scheduleRunTimes": [
+ "2022-06-13T19:30:00Z"
+ ],
+ "scheduleWeeklyFrequency": 0
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "LongTermRetentionPolicy",
+ "weeklySchedule": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "retentionTimes": [
+ "2022-06-13T19:30:00Z"
+ ],
+ "retentionDuration": {
+ "count": 104,
+ "durationType": "Weeks"
+ }
+ },
+ "monthlySchedule": {
+ "retentionScheduleFormatType": "Weekly",
+ "retentionScheduleWeekly": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "weeksOfTheMonth": [
+ "First"
+ ]
+ },
+ "retentionTimes": [
+ "2022-06-13T19:30:00Z"
+ ],
+ "retentionDuration": {
+ "count": 60,
+ "durationType": "Months"
+ }
+ },
+ "yearlySchedule": {
+ "retentionScheduleFormatType": "Weekly",
+ "monthsOfYear": [
+ "January"
+ ],
+ "retentionScheduleWeekly": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "weeksOfTheMonth": [
+ "First"
+ ]
+ },
+ "retentionTimes": [
+ "2022-06-13T19:30:00Z"
+ ],
+ "retentionDuration": {
+ "count": 10,
+ "durationType": "Years"
+ }
+ }
+ }
+ },
+ {
+ "policyType": "Incremental",
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunFrequency": "Weekly",
+ "scheduleRunDays": [
+ "Monday",
+ "Tuesday",
+ "Wednesday",
+ "Thursday",
+ "Friday",
+ "Saturday"
+ ],
+ "scheduleRunTimes": [
+ "2022-06-13T02:00:00Z"
+ ],
+ "scheduleWeeklyFrequency": 0
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "SimpleRetentionPolicy",
+ "retentionDuration": {
+ "count": 30,
+ "durationType": "Days"
+ }
+ }
+ },
+ {
+ "policyType": "Log",
+ "schedulePolicy": {
+ "schedulePolicyType": "LogSchedulePolicy",
+ "scheduleFrequencyInMins": 120
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "SimpleRetentionPolicy",
+ "retentionDuration": {
+ "count": 15,
+ "durationType": "Days"
+ }
+ }
+ }
+ ],
+ "protectedItemsCount": 0
+}
+
+```
+#### For Azure File share backup
+
+The following is an example request body for Azure File share backup.
+
+This policy:
+
+- Takes a backup every day at 15:30 UTC.
+- Retains the daily backups for 30 days.
+- Retains the backups taken every Sunday for 12 weeks.
+
+```json
+"properties": {
+ "backupManagementType": "AzureStorage",
+ "workloadType": "AzureFileShare",
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunFrequency": "Daily",
+ "scheduleRunTimes": [
+ "2022-06-13T15:30:00Z"
+ ],
+ "scheduleWeeklyFrequency": 0
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "LongTermRetentionPolicy",
+ "dailySchedule": {
+ "retentionTimes": [
+ "2022-06-13T15:30:00Z"
+ ],
+ "retentionDuration": {
+ "count": 30,
+ "durationType": "Days"
+ }
+ },
+ "weeklySchedule": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "retentionTimes": [
+ "2022-06-13T15:30:00Z"
+ ],
+ "retentionDuration": {
+ "count": 12,
+ "durationType": "Weeks"
+ }
+ }
+ },
+ "timeZone": "UTC",
+ "protectedItemsCount": 0
+ }
+```
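Whichever workload a body targets, it's submitted with a PUT against the vault's `backupPolicies` collection, as described in the Azure Recovery Services provider REST reference. The helper below is a sketch for building that URI; the `api-version` value and the sample names are illustrative, so substitute the version and identifiers for your scenario.

```python
def policy_uri(subscription_id, resource_group, vault_name, policy_name,
               api_version="2019-05-13"):
    """Build the management-plane URI for creating or updating a backup policy."""
    return (
        "https://management.azure.com"
        f"/Subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.RecoveryServices"
        f"/vaults/{vault_name}"
        f"/backupPolicies/{policy_name}"
        f"?api-version={api_version}"
    )

uri = policy_uri("00000000-0000-0000-0000-000000000000",
                 "testRG", "testVault", "testPolicy1")
print(uri)
```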
## Responses

The backup policy creation/update is an [asynchronous operation](../azure-resource-manager/management/async-operations.md). It means this operation creates another operation that needs to be tracked separately.
-It returns two responses: 202 (Accepted) when another operation is created, and then 200 (OK) when that operation completes.
+It returns two responses: 202 (Accepted) when another operation is created, and 200 (OK) when that operation completes.
|Name |Type |Description |
|---|---|---|
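The 202-then-200 pattern can be handled with a small polling loop. The sketch below is illustrative only: `get_status` stands in for whatever HTTP client you use to GET the tracking URL from the operation's response headers, and the retry interval and poll limit are arbitrary choices.

```python
import time

def wait_for_operation(get_status, interval_seconds=0, max_polls=10):
    """Poll a tracking endpoint until it returns 200 (done) or polls run out.

    get_status() must return an HTTP status code: 202 while the operation
    is still running, 200 once it completes.
    """
    for _ in range(max_polls):
        if get_status() == 200:
            return True
        time.sleep(interval_seconds)
    return False

# Simulated tracking endpoint: two 202 responses, then 200.
responses = iter([202, 202, 200])
print(wait_for_operation(lambda: next(responses)))  # True
```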
If a policy is already being used to protect an item, any update in the policy w
For more information on the Azure Backup REST APIs, see the following documents:

- [Azure Recovery Services provider REST API](/rest/api/recoveryservices/)
-- [Get started with Azure REST API](/rest/api/azure/)
+- [Get started with Azure REST API](/rest/api/azure/)
baremetal-infrastructure Concepts Baremetal Infrastructure Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/concepts-baremetal-infrastructure-overview.md
Last updated 09/27/2021
Microsoft Azure offers a cloud infrastructure with a wide range of integrated cloud services to meet your business needs. In some cases, though, you may need to run services on bare metal servers without a virtualization layer. You may need root access and control over the operating system (OS). To meet this need, Azure offers BareMetal Infrastructure for several high-value, mission-critical applications.

BareMetal Infrastructure is made up of dedicated BareMetal instances (compute instances). It features:

-- High-performance storage appropriate to the application (NFS, ISCSI, and Fiber Channel). Storage can also be shared across BareMetal instances to enable features like scale-out clusters or high availability pairs with STONITH.
-- A set of function-specific virtual LANs (VLANs) in an isolated environment.
-
+- High-performance storage appropriate to the application (NFS, ISCSI, and Fiber Channel). Storage can also be shared across BareMetal instances to enable features like scale-out clusters or high availability pairs with STONITH.
+- A set of function-specific virtual LANs (VLANs) in an isolated environment.
+ This environment also has special VLANs you can access if you're running virtual machines (VMs) on one or more Azure Virtual Networks (VNets) in your Azure subscription. The entire environment is represented as a resource group in your Azure subscription.
+
+ BareMetal Infrastructure is offered in over 30 SKUs, from 2-socket to 24-socket servers and memory ranging from 1.5 TB up to 24 TB. A large set of SKUs is also available with Optane memory. Azure offers the largest range of bare metal instances in a hyperscale cloud.
-## Why BareMetal Infrastructure?
+## Why BareMetal Infrastructure?
Some workloads in the enterprise consist of technologies that just aren't designed to run in a typical virtualized cloud setting. They require special architecture, certified hardware, or extraordinarily large sizes. Although those technologies have the most sophisticated data protection and business continuity features, those features aren't built for the virtualized cloud. They're more sensitive to latencies and noisy neighbors and require more control over change management and maintenance activity.

BareMetal Infrastructure is built, certified, and tested for a select set of such applications. Azure was the first to offer such solutions, and has since led with the largest portfolio and most sophisticated systems.
-### BareMetal benefits
+### BareMetal benefits
-BareMetal Infrastructure is intended for critical workloads that require certification to run your enterprise applications. The BareMetal instances are dedicated only to you, and you'll have full access (root access) to the operating system (OS). You manage OS and application installation according to your needs. For security, the instances are provisioned within your Azure Virtual Network (VNet) with no internet connectivity. Only services running on your virtual machines (VMs), and other Azure services in same Tier 2 network, can communicate with your BareMetal instances.
+BareMetal Infrastructure is intended for critical workloads that require certification to run your enterprise applications. The BareMetal instances are dedicated only to you, and you'll have full access (root access) to the operating system (OS). You manage OS and application installation according to your needs. For security, the instances are provisioned within your Azure Virtual Network (VNet) with no internet connectivity. Only services running on your virtual machines (VMs), and other Azure services in same Tier 2 network, can communicate with your BareMetal instances.
BareMetal Infrastructure offers these benefits:
BareMetal Infrastructure offers these benefits:
- Non-hypervised BareMetal instance, single tenant ownership
- Low latency between Azure hosted application VMs to BareMetal instances (0.35 ms)
- All Flash SSD and NVMe
- - Up to 1 PB/tenant
- - IOPS up to 1.2 million/tenant
+ - Up to 1 PB/tenant
+ - IOPS up to 1.2 million/tenant
- 40/100-GB network bandwidth
- Accessible via NFS, ISCSI, and FC
- Redundant power, power supplies, NICs, TORs, ports, WANs, storage, and management
BareMetal Infrastructure offers these benefits:
BareMetal Infrastructure offers multiple SKUs certified for specialized workloads. Use the workload-specific SKUs to meet your needs.

-- Large instances – Ranging from two-socket to four-socket systems.
-- Very Large instances – Ranging from 4-socket to 20-socket systems.
+- Large instances – Ranging from two-socket to four-socket systems.
+- Very Large instances – Ranging from 4-socket to 20-socket systems.
BareMetal Infrastructure for specialized workloads is available in the following Azure regions:

- West Europe
BareMetal Infrastructure for specialized workloads is available in the following
>[!NOTE]
>**Zones support** refers to availability zones within a region where BareMetal instances can be deployed across zones for high resiliency and availability. This capability enables support for multi-site active-active scaling.
-## Managing BareMetal instances in Azure
+## Managing BareMetal instances in Azure
-Depending on your needs, the application topologies of BareMetal Infrastructure can be complex. You may deploy multiple instances in one or more locations. The instances can have shared or dedicated storage, and specialized LAN and WAN connections. So for BareMetal Infrastructure, Azure offers a consultation by a CSA/GBB in the field to work with you.
+Depending on your needs, the application topologies of BareMetal Infrastructure can be complex. You may deploy multiple instances in one or more locations. The instances can have shared or dedicated storage, and specialized LAN and WAN connections. So for BareMetal Infrastructure, Azure offers a consultation by a CSA/GBB in the field to work with you.
By the time your BareMetal Infrastructure is provisioned, the OS, networks, storage volumes, placements in zones and regions, and WAN connections between locations have already been configured. You're set to register your OS licenses (BYOL), configure the OS, and install the application layer.
-You'll see all the BareMetal resources, and their state and attributes, in the Azure portal. You can also operate the instances and open service requests and support tickets from there.
+You'll see all the BareMetal resources, and their state and attributes, in the Azure portal. You can also operate the instances and open service requests and support tickets from there.
## Operational model
-BareMetal Infrastructure is ISO 27001, ISO 27017, SOC 1, and SOC 2 compliant. It also uses a bring-your-own-license (BYOL) model: OS, specialized workload, and third-party applications.
+BareMetal Infrastructure is ISO 27001, ISO 27017, SOC 1, and SOC 2 compliant. It also uses a bring-your-own-license (BYOL) model: OS, specialized workload, and third-party applications.
As soon as you receive root access and full control, you assume responsibility for:

- Designing and implementing backup and recovery solutions, high availability, and disaster recovery.
- Licensing, security, and support for the OS and third-party software.

Microsoft is responsible for:

-- Providing the hardware for specialized workloads.
+- Providing the hardware for specialized workloads.
- Provisioning the OS.

:::image type="content" source="media/concepts-baremetal-infrastructure-overview/baremetal-support-model.png" alt-text="Diagram of BareMetal Infrastructure support model." border="false":::
Within the multi-tenant infrastructure of the BareMetal stamp, customers are dep
## Operating system
-During the provisioning of the BareMetal instance, you can select the OS you want to install on the machines.
+During the provisioning of the BareMetal instance, you can select the OS you want to install on the machines.
>[!NOTE]
>Remember, BareMetal Infrastructure is a BYOL model.
The available Linux OS versions are:
## Storage
-BareMetal Infrastructure provides highly redundant NFS storage and Fiber Channel storage. The infrastructure offers deep integration for enterprise workloads like SAP, SQL, and more. It also provides application-consistent data protection and data-management capabilities. The self-service management tools offer space-efficient snapshot, cloning, and granular replication capabilities along with single pane of glass monitoring. The infrastructure enables zero RPO and RTO capabilities for data availability and business continuity needs.
+BareMetal Infrastructure provides highly redundant NFS storage and Fiber Channel storage. The infrastructure offers deep integration for enterprise workloads like SAP, SQL, and more. It also provides application-consistent data protection and data-management capabilities. The self-service management tools offer space-efficient snapshot, cloning, and granular replication capabilities along with single pane of glass monitoring. The infrastructure enables zero RPO and RTO capabilities for data availability and business continuity needs.
The storage infrastructure offers:

- Up to 4 x 100-GB uplinks.
- Up to 32-GB Fiber channel uplinks.
- All flash SSD and NVMe drives.
- Ultra-low latency and high throughput.
-- Scales up to 4 PB of raw storage.
+- Scales up to 4 PB of raw storage.
- Up to 11 million IOPS.
-These Data access protocols are supported:
-- iSCSI -- NFS (v3 or v4) -- Fiber Channel -- NVMe over FC
+These data access protocols are supported:
+- iSCSI
+- NFS (v3 or v4)
+- Fiber Channel
+- NVMe over FC
## Networking
The architecture of Azure network services is a key component for a successful d
- The ExpressRoute circuit that connects on-premises to Azure should have a minimum bandwidth of 1 Gbps or higher.
- Extended Active Directory and DNS in Azure, or completely running in Azure.
-ExpressRoute lets you extend your on-premises network into the Microsoft cloud over a private connection with a connectivity provider's help. You can use **ExpressRoute Local** for cost-effective data transfer between your on-premises location and the Azure region you want. To extend connectivity across geopolitical boundaries, you can enable **ExpressRoute Premium**.
+ExpressRoute lets you extend your on-premises network into the Microsoft cloud over a private connection with a connectivity provider's help. You can use **ExpressRoute Local** for cost-effective data transfer between your on-premises location and the Azure region you want. To extend connectivity across geopolitical boundaries, you can enable **ExpressRoute Premium**.
BareMetal instances are provisioned within your Azure VNet server IP address range.

:::image type="content" source="media/concepts-baremetal-infrastructure-overview/baremetal-infrastructure-diagram.png" alt-text="Architectural diagram of Azure BareMetal Infrastructure diagram." lightbox="media/concepts-baremetal-infrastructure-overview/baremetal-infrastructure-diagram.png" border="false":::

The architecture shown is divided into three sections:

-- **Left:** Shows the customer on-premise infrastructure that runs different applications, connecting through the partner or local edge router like Equinix. For more information, see [Connectivity providers and locations: Azure ExpressRoute](../expressroute/expressroute-locations.md).
+- **Left:** Shows the customer on-premises infrastructure that runs different applications, connecting through the partner or local edge router like Equinix. For more information, see [Connectivity providers and locations: Azure ExpressRoute](../expressroute/expressroute-locations.md).
- **Center:** Shows [ExpressRoute](../expressroute/expressroute-introduction.md) provisioned using your Azure subscription offering connectivity to Azure edge network.
- **Right:** Shows Azure IaaS, and in this case, use of VMs to host your applications, which are provisioned within your Azure virtual network.
-- **Bottom:** Shows using your ExpressRoute Gateway enabled with [ExpressRoute FastPath](../expressroute/about-fastpath.md) for BareMetal connectivity offering low latency.
+- **Bottom:** Shows using your ExpressRoute Gateway enabled with [ExpressRoute FastPath](../expressroute/about-fastpath.md) for BareMetal connectivity offering low latency.
>[!TIP]
>To support this, your ExpressRoute Gateway should be UltraPerformance. For more information, see [About ExpressRoute virtual network gateways](../expressroute/expressroute-about-virtual-network-gateways.md).
cloud-services-extended-support Available Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/available-sizes.md
This article describes the available virtual machine sizes for Cloud Services (e
|||
|[Av2](../virtual-machines/av2-series.md) | 100 |
|[D](../virtual-machines/sizes-previous-gen.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json#d-series) | 160 |
-|[Dv2](../virtual-machines/dv2-dsv2-series.md) | 160 - 190* |
+|[Dv2](../virtual-machines/dv2-dsv2-series.md) | 210 - 250* |
|[Dv3](../virtual-machines/dv3-dsv3-series.md) | 160 - 190* | |[Dav4](../virtual-machines/dav4-dasv4-series.md) | 230 - 260 | |[Eav4](../virtual-machines/eav4-easv4-series.md) | 230 - 260 |
cognitive-services Computer Vision How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md
Title: Install Read OCR Docker containers from Computer Vision
description: Use the Read OCR Docker containers from Computer Vision to extract text from images and documents, on-premises.
-Previously updated : 10/14/2021
+Last updated : 06/13/2022
keywords: on-premises, OCR, Docker, container
keywords: on-premises, OCR, Docker, container
[!INCLUDE [container hosting on the Microsoft Container Registry](../containers/includes/gated-container-hosting.md)]
-Containers enable you to run the Computer Vision APIs in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run Computer Vision containers.
+Containers enable you to run the Computer Vision APIs in your own environment. Containers are great for specific security and data governance requirements. In this article, you'll learn how to download, install, and run the Read (OCR) container.
-The *Read* OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API how-to guide](how-to/call-read-api.md).
+The Read container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API how-to guide](how-to/call-read-api.md).
## What's new

The `3.2-model-2022-04-30` GA version of the Read container is available with support for [164 languages and other enhancements](./whats-new.md#may-2022). If you're an existing customer, follow the [download instructions](#docker-pull-for-the-read-ocr-container) to get started.

## Read 3.2 container
-The Read 3.2 OCR container latest GA model provides:
+The Read 3.2 OCR container is the latest GA model and provides:
* New models for enhanced accuracy. * Support for multiple languages within the same document. * Support for a total of 164 languages. See the full list of [OCR-supported languages](./language-support.md#optical-character-recognition-ocr).
You must meet the following prerequisites before using the containers:
|Required|Purpose|
|--|--|
-|Docker Engine| You need the Docker Engine installed on a [host computer](#the-host-computer). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/install/#server). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).<br><br> Docker must be configured to allow the containers to connect with and send billing data to Azure. <br><br> **On Windows**, Docker must also be configured to support Linux containers.<br><br>|
+|Docker Engine| You need the Docker Engine installed on a [host computer](#host-computer-requirements). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/install/#server). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).<br><br> Docker must be configured to allow the containers to connect with and send billing data to Azure. <br><br> **On Windows**, Docker must also be configured to support Linux containers.<br><br>|
|Familiarity with Docker | You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` commands.|
|Computer Vision resource |In order to use the container, you must have:<br><br>An Azure **Computer Vision** resource and the associated API key and endpoint URI. Both values are available on the Overview and Keys pages for the resource and are required to start the container.<br><br>**{API_KEY}**: One of the two available resource keys on the **Keys** page<br><br>**{ENDPOINT_URI}**: The endpoint as provided on the **Overview** page|
Fill out and submit the [request form](https://aka.ms/csgate) to request approval.
[!INCLUDE [Gathering required container parameters](../containers/includes/container-gathering-required-parameters.md)]
-### The host computer
+### Host computer requirements
[!INCLUDE [Host Computer requirements](../../../includes/cognitive-services-containers-host-computer.md)]
docker pull mcr.microsoft.com/azure-cognitive-services/vision/read:2.0-preview
## How to use the container
-Once the container is on the [host computer](#the-host-computer), use the following process to work with the container.
+Once the container is on the [host computer](#host-computer-requirements), use the following process to work with the container.
1. [Run the container](#run-the-container-with-docker-run), with the required billing settings. More [examples](computer-vision-resource-container-config.md) of the `docker run` command are available.
1. [Query the container's prediction endpoint](#query-the-containers-prediction-endpoint).
cognitive-services Computer Vision Resource Container Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/computer-vision-resource-container-config.md
Use bind mounts to read and write data to and from the container. You can specif
The Computer Vision containers don't use input or output mounts to store training or service data.
-The exact syntax of the host mount location varies depending on the host operating system. Additionally, the [host computer](computer-vision-how-to-install-containers.md#the-host-computer)'s mount location may not be accessible due to a conflict between permissions used by the Docker service account and the host mount location permissions.
+The exact syntax of the host mount location varies depending on the host operating system. Additionally, the [host computer](computer-vision-how-to-install-containers.md#host-computer-requirements)'s mount location may not be accessible due to a conflict between permissions used by the Docker service account and the host mount location permissions.
|Optional| Name | Data type | Description |
|--|--|--|--|
cognitive-services Concept Describing Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-describing-images.md
Previously updated : 01/05/2022 Last updated : 06/13/2022
At this time, English is the only supported language for image description.
## Image description example
-The following JSON response illustrates what Computer Vision returns when describing the example image based on its visual features.
+The following JSON response illustrates what the Analyze API returns when describing the example image based on its visual features.
![A black and white picture of buildings in Manhattan](./Images/bw_buildings.png)
The following JSON response illustrates what Computer Vision returns when describing the example image based on its visual features.
The image description feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Description` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"description"` section.
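The parsing step described above can be sketched in Python. The JSON literal below is an abbreviated, hypothetical response (real responses contain more fields), and `extract_captions` is an illustrative helper, not part of any SDK.

```python
import json

# Abbreviated, hypothetical Analyze Image response; the "description"
# section holds the generated tags and captions.
sample_response = """
{
  "description": {
    "tags": ["building", "outdoor", "city"],
    "captions": [
      {"text": "a black and white photo of a city", "confidence": 0.86}
    ]
  }
}
"""

def extract_captions(response_text):
    """Parse the "description" section and return its caption strings."""
    description = json.loads(response_text)["description"]
    return [caption["text"] for caption in description["captions"]]

print(extract_captions(sample_response))
```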
-* [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
+* [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
## Next steps
cognitive-services Concept Detecting Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-detecting-faces.md
Previously updated : 01/05/2022 Last updated : 06/13/2022
-# Face detection with Computer Vision
+# Face detection with Image Analysis
-Computer Vision can detect human faces within an image and generate rectangle coordinates for each detected face.
+Image Analysis can detect human faces within an image and generate rectangle coordinates for each detected face.
> [!NOTE]
-> This feature is also offered by the Azure [Face](./index-identity.yml) service. Use this alternative for more detailed face analysis, including face identification and head pose detection.
+> This feature is also offered by the dedicated [Face](./overview-identity.md) service. Use this alternative for more detailed face analysis, including face identification and head pose detection.
## Face detection examples
-The following example demonstrates the JSON response returned by Computer Vision for an image containing a single human face.
+The following example demonstrates the JSON response returned by the Analyze API for an image containing a single human face.
![Vision Analyze Woman Roof Face](./Images/woman_roof_face.png)
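The rectangle coordinates in the response can be turned into drawing-ready bounding boxes; a minimal sketch, assuming the abbreviated `faceRectangle` shape shown in the hypothetical sample below:

```python
import json

# Abbreviated, hypothetical face-detection response; each entry carries a
# faceRectangle with pixel coordinates of the detected face.
sample_response = """
{
  "faces": [
    {"age": 23, "gender": "Female",
     "faceRectangle": {"left": 1379, "top": 320, "width": 310, "height": 310}}
  ]
}
"""

def face_bounding_boxes(response_text):
    """Return (left, top, right, bottom) tuples for each detected face."""
    boxes = []
    for face in json.loads(response_text)["faces"]:
        r = face["faceRectangle"]
        boxes.append((r["left"], r["top"],
                      r["left"] + r["width"], r["top"] + r["height"]))
    return boxes

print(face_bounding_boxes(sample_response))
```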
cognitive-services Concept Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-face-detection.md
Previously updated : 10/27/2021 Last updated : 06/13/2022
cognitive-services Concept Face Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-face-recognition.md
Previously updated : 10/27/2021 Last updated : 06/13/2022
cognitive-services Concept Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-object-detection.md
Previously updated : 10/27/2021 Last updated : 06/13/2022
The Detect API applies tags based on the objects or living things identified in the image.
## Object detection example
-The following JSON response illustrates what Computer Vision returns when detecting objects in the example image.
+The following JSON response illustrates what the Analyze API returns when detecting objects in the example image.
![A woman using a Microsoft Surface device in a kitchen](./Images/windows-kitchen.jpg)
cognitive-services Concept Tagging Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-tagging-images.md
Previously updated : 01/05/2022 Last updated : 06/13/2022

# Apply content tags to images
-Computer Vision can return content tags for thousands of recognizable objects, living beings, scenery, and actions that appear in images. Tags are not organized as a taxonomy and do not have inheritance hierarchies. A collection of content tags forms the foundation for an image [description](./concept-describing-images.md) displayed as human readable language formatted in complete sentences. When tags are ambiguous or not common knowledge, the API response provides hints to clarify the meaning of the tag in context of a known setting.
+Image Analysis can return content tags for thousands of recognizable objects, living beings, scenery, and actions that appear in images. Tags are not organized as a taxonomy and do not have inheritance hierarchies. A collection of content tags forms the foundation for an image [description](./concept-describing-images.md) displayed as human readable language formatted in complete sentences. When tags are ambiguous or not common knowledge, the API response provides hints to clarify the meaning of the tag in context of a known setting.
-After you upload an image or specify an image URL, the Computer Vision algorithm can output tags based on the objects, living beings, and actions identified in the image. Tagging is not limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on.
+After you upload an image or specify an image URL, the Analyze API can output tags based on the objects, living beings, and actions identified in the image. Tagging is not limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on.
## Image tagging example
The following JSON response illustrates what Computer Vision returns when tagging the example image.
The tagging feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Tags` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"tags"` section.
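As a sketch of parsing the `"tags"` section, the snippet below also filters tags by their confidence score. The JSON literal is an abbreviated, hypothetical response and `confident_tags` is an illustrative helper, not part of any SDK.

```python
import json

# Abbreviated, hypothetical tagging response; each tag has a name and a
# confidence score between 0 and 1.
sample_response = """
{
  "tags": [
    {"name": "grass", "confidence": 0.99},
    {"name": "outdoor", "confidence": 0.97},
    {"name": "transport", "confidence": 0.35}
  ]
}
"""

def confident_tags(response_text, threshold=0.5):
    """Keep only the tags whose confidence meets the threshold."""
    tags = json.loads(response_text)["tags"]
    return [t["name"] for t in tags if t["confidence"] >= threshold]

print(confident_tags(sample_response))
```

Raising the threshold trades recall for precision; a low-confidence tag like `transport` above would be dropped at the default threshold.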
-* [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
+* [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
## Next steps
cognitive-services Call Read Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-read-api.md
Previously updated : 02/05/2022 Last updated : 06/13/2022
cognitive-services Identity Detect Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/identity-detect-faces.md
Previously updated : 08/04/2021 Last updated : 06/13/2022 ms.devlang: csharp
cognitive-services Intro To Spatial Analysis Public Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/intro-to-spatial-analysis-public-preview.md
Previously updated : 10/06/2021 Last updated : 06/13/2022

# What is Spatial Analysis?
-You can use Computer Vision Spatial Analysis to ingest streaming video from cameras, extract insights, and generate events to be used by other systems. The service detects the presence and movements of people in video. It can do things like count the number of people entering a space or measure compliance with face mask and social distancing guidelines. By processing video streams from physical spaces, you are able to learn how people use them and maximize the space's value to your organization.
+You can use Computer Vision Spatial Analysis to ingest streaming video from cameras, extract insights, and generate events to be used by other systems. The service detects the presence and movements of people in video. It can do things like count the number of people entering a space or measure compliance with face mask and social distancing guidelines. By processing video streams from physical spaces, you're able to learn how people use them and maximize the space's value to your organization.
<!--This documentation contains the following types of articles: * The [quickstarts](./quickstarts-sdk/analyze-image-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
This feature monitors how long people stay in an area or when they enter through
![Spatial Analysis measures dwelltime in checkout queue](https://user-images.githubusercontent.com/11428131/137016574-0d180d9b-fb9a-42a9-94b7-fbc0dbc18560.gif) ### Social distancing and facemask detection
-This feature analyzes how well people follow social distancing requirements in a space. Using the PersonDistance operation, the system automatically calibrates itself as people walk around in the space. Then it identifies when people violate a specific distance threshold (6 ft. or 10 ft.).
+This feature analyzes how well people follow social distancing requirements in a space. The system uses the PersonDistance operation to calibrate itself automatically as people walk around in the space. Then it identifies when people violate a specific distance threshold (6 ft. or 10 ft.).
![Spatial Analysis visualizes social distance violation events showing lines between people showing the distance](https://user-images.githubusercontent.com/11428131/139924062-b5e10c0f-3cf8-4ff1-bb58-478571c022d7.gif)
cognitive-services Overview Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-identity.md
Previously updated : 02/28/2022 Last updated : 06/13/2022 keywords: facial recognition, facial recognition software, facial analysis, face matching, face recognition app, face search by image, facial recognition search
This documentation contains the following types of articles:
## Example use cases
-**Identity verification**: Verify someone's identity against a government-issued ID card like a passport or driver's license or other enrollment image. You can use this verification to grant access to digital or physical services or recover an account. Specific access scenarios include opening a new account, verifying a worker, or administering an online assessment. Identity verification can be done once when a person is onboarded, and repeated when they access a digital or physical service.
+**Identity verification**: Verify someone's identity against a government-issued ID card like a passport or driver's license or other enrollment image. You can use this verification to grant access to digital or physical services or to recover an account. Specific access scenarios include opening a new account, verifying a worker, or administering an online assessment. Identity verification can be done once when a person is onboarded, and repeated when they access a digital or physical service.
**Touchless access control**: Compared to todayΓÇÖs methods like cards or tickets, opt-in face identification enables an enhanced access control experience while reducing the hygiene and security risks from card sharing, loss, or theft. Facial recognition assists the check-in process with a human in the loop for check-ins in airports, stadiums, theme parks, buildings, reception kiosks at offices, hospitals, gyms, clubs, or schools.
cognitive-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-image-analysis.md
Previously updated : 06/21/2021 Last updated : 06/13/2022 keywords: computer vision, computer vision applications, computer vision service
Generate a description of an entire image in human-readable language, using complete sentences.
### Detect faces
-Detect faces in an image and provide information about each detected face. Computer Vision returns the coordinates, rectangle, gender, and age for each detected face.<br/>Computer Vision provides a subset of the [Face](./index-identity.yml) service functionality. You can use the Face service for more detailed analysis, such as facial identification and pose detection. [Detect faces](concept-detecting-faces.md)
+Detect faces in an image and provide information about each detected face. Computer Vision returns the coordinates, rectangle, gender, and age for each detected face. [Detect faces](concept-detecting-faces.md)
+
+You can also use the dedicated [Face API](./index-identity.yml) for these purposes. It provides more detailed analysis, such as facial identification and pose detection.
### Detect image types
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md
Previously updated : 02/05/2022 Last updated : 06/13/2022
The Computer Vision [Read API](https://westus.dev.cognitive.microsoft.com/docs/s
The **Read** call takes images and documents as its input. They have the following requirements:

* Supported file formats: JPEG, PNG, BMP, PDF, and TIFF
-* For PDF and TIFF files, up to 2000 pages (only first two pages for the free tier) are processed.
-* The file size must be less than 500 MB (4 MB for the free tier) and dimensions at least 50 x 50 pixels and at most 10000 x 10000 pixels.
-* The minimum height of the text to be extracted is 12 pixels for a 1024X768 image. This corresponds to about 8 font point text at 150 DPI.
+* For PDF and TIFF files, up to 2000 pages (only the first two pages for the free tier) are processed.
+* The file size of images must be less than 500 MB (4 MB for the free tier), and the dimensions must be at least 50 x 50 pixels and at most 10000 x 10000 pixels. PDF files do not have a size limit.
+* The minimum height of the text to be extracted is 12 pixels for a 1024 x 768 image. This corresponds to about 8 font point text at 150 DPI.
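The image limits above could be checked client-side before uploading. This is a minimal sketch for the paid tier; the constants mirror the list above, and `read_input_ok` is an illustrative helper, not part of any SDK.

```python
# Documented Read input limits for images (paid tier; the free tier
# lowers the size cap to 4 MB, and PDFs have no size limit).
MAX_SIZE_BYTES = 500 * 1024 * 1024   # file size must be less than 500 MB
MIN_DIM, MAX_DIM = 50, 10000         # allowed width/height range in pixels

def read_input_ok(size_bytes, width, height):
    """Return True if an image meets the documented size and dimension limits."""
    if size_bytes >= MAX_SIZE_BYTES:
        return False
    return all(MIN_DIM <= d <= MAX_DIM for d in (width, height))

print(read_input_ok(3 * 1024 * 1024, 1024, 768))   # a typical photo
print(read_input_ok(1024, 40, 40))                 # below the 50-pixel minimum
```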
## Supported languages

The latest generally available (GA) Read API model supports 164 languages for print text and 9 languages for handwritten text.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview.md
Previously updated : 02/28/2022 Last updated : 06/13/2022 keywords: computer vision, computer vision applications, computer vision service
cognitive-services Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/client-library.md
Previously updated : 03/02/2022 Last updated : 06/13/2022 ms.devlang: csharp, golang, java, javascript, python
keywords: computer vision, computer vision service
# Quickstart: Use the Optical character recognition (OCR) client library or REST API
-Get started with the Computer Vision Read REST API or client libraries. The Read service provides you with AI algorithms for extracting text from images and returning it as structured strings. Follow these steps to install a package to your application and try out the sample code for basic tasks.
+Get started with the Computer Vision Read REST API or client libraries. The Read API provides you with AI algorithms for extracting text from images and returning it as structured strings. Follow these steps to install a package to your application and try out the sample code for basic tasks.
::: zone pivot="programming-language-csharp"
cognitive-services Identity Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/identity-client-library.md
zone_pivot_groups: programming-languages-set-face
Previously updated : 09/27/2021 Last updated : 06/13/2022 ms.devlang: csharp, golang, javascript, python
cognitive-services Image Analysis Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/image-analysis-client-library.md
Previously updated : 07/30/2021 Last updated : 06/13/2022 ms.devlang: csharp, golang, java, javascript, python
cognitive-services Spatial Analysis Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-container.md
Previously updated : 10/14/2021 Last updated : 06/13/2022
The Spatial Analysis container enables you to analyze real-time streaming video
* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
* [!INCLUDE [contributor-requirement](../includes/quickstarts/contributor-requirement.md)]
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource </a> for the Standard S1 tier in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
- * You will need the key and endpoint from the resource you create to run the Spatial Analysis container. You'll use your key and endpoint later.
+* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource </a> for the Standard S1 tier in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+ * You'll need the key and endpoint from the resource you create to run the Spatial Analysis container. You'll use your key and endpoint later.
### Spatial Analysis container requirements
-To run the Spatial Analysis container, you need a compute device with an NVIDIA CUDA Compute Capable GPU 6.0 or higher (for example, [NVIDIA Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/), 1080Ti, or 2080Ti). We recommend that you use [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) with GPU acceleration, however the container runs on any other desktop machine that meets the minimum requirements. We will refer to this device as the host computer.
+To run the Spatial Analysis container, you need a compute device with an NVIDIA CUDA Compute Capable GPU 6.0 or higher (for example, [NVIDIA Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/), 1080Ti, or 2080Ti). We recommend that you use [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) with GPU acceleration; however, the container runs on any other desktop machine that meets the minimum requirements. We'll refer to this device as the host computer.
#### [Azure Stack Edge device](#tab/azure-stack-edge)
Azure Stack Edge is a Hardware-as-a-Service solution and an AI-enabled edge comp
#### Minimum hardware requirements
-* 4 GB system RAM
+* 4 GB of system RAM
* 4 GB of GPU RAM
* 8 core CPU
-* 1 NVIDIA CUDA Compute Capable GPU 6.0 or higher (for example, [NVIDIA Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/), 1080Ti, or 2080Ti)
+* One NVIDIA CUDA Compute Capable GPU 6.0 or higher (for example, [NVIDIA Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/), 1080Ti, or 2080Ti)
* 20 GB of HDD space

#### Recommended hardware
-* 32 GB system RAM
+* 32 GB of system RAM
* 16 GB of GPU RAM
* 8 core CPU
-* 2 NVIDIA CUDA Compute Capable GPUs 6.0 or higher (for example, [NVIDIA Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/), 1080Ti, or 2080Ti)
+* Two NVIDIA CUDA Compute Capable GPUs 6.0 or higher (for example, [NVIDIA Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/), 1080Ti, or 2080Ti)
* 50 GB of SSD space
-In this article, you will download and install the following software packages. The host computer must be able to run the following (see below for instructions):
+In this article, you'll download and install the following software packages. The host computer must be able to run the following (see below for instructions):
* [NVIDIA graphics drivers](https://docs.nvidia.com/datacenter/tesla/tesla-installation-notes/index.html) and [NVIDIA CUDA Toolkit](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html). The minimum GPU driver version is 460 with CUDA 11.1.
* Configurations for [NVIDIA MPS](https://docs.nvidia.com/deploy/pdf/CUDA_Multi_Process_Service_Overview.pdf) (Multi-Process Service).
In this article, you will download and install the following software packages.
* [Azure IoT Edge](../../iot-edge/how-to-provision-single-device-linux-symmetric.md) runtime.

#### [Azure VM with GPU](#tab/virtual-machine)
-In our example, we will utilize an [NC series VM](../../virtual-machines/nc-series.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) that has one K80 GPU.
+In our example, we'll utilize an [NC series VM](../../virtual-machines/nc-series.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) that has one K80 GPU.
| Requirement | Description |
|--|--|
-| Camera | The Spatial Analysis container is not tied to a specific camera brand. The camera device needs to: support Real-Time Streaming Protocol(RTSP) and H.264 encoding, be accessible to the host computer, and be capable of streaming at 15FPS and 1080p resolution. |
+| Camera | The Spatial Analysis container isn't tied to a specific camera brand. The camera device needs to: support Real-Time Streaming Protocol (RTSP) and H.264 encoding, be accessible to the host computer, and be capable of streaming at 15 FPS and 1080p resolution. |
| Linux OS | [Ubuntu Desktop 18.04 LTS](http://releases.ubuntu.com/18.04/) must be installed on the host computer. |

## Set up the host computer
-It is recommended that you use an Azure Stack Edge device for your host computer. Click **Desktop Machine** if you're configuring a different device, or **Virtual Machine** if you're utilizing a VM.
+We recommend that you use an Azure Stack Edge device for your host computer. Select **Desktop Machine** if you're configuring a different device, or **Virtual Machine** if you're utilizing a VM.
#### [Azure Stack Edge device](#tab/azure-stack-edge)
Spatial Analysis uses the compute features of the Azure Stack Edge to run an AI
* You have a Windows client system running PowerShell 5.0 or later, to access the device.
* To deploy a Kubernetes cluster, you need to configure your Azure Stack Edge device via the **Local UI** on the [Azure portal](https://portal.azure.com/):
 1. Enable the compute feature on your Azure Stack Edge device. To enable compute, go to the **Compute** page in the web interface for your device.
- 2. Select a network interface that you want to enable for compute, then click **Enable**. This will create a virtual switch on your device, on that network interface.
+ 2. Select a network interface that you want to enable for compute, then select **Enable**. This will create a virtual switch on your device, on that network interface.
3. Leave the Kubernetes test node IP addresses and the Kubernetes external services IP addresses blank.
- 4. Click **Apply**. This operation may take about two minutes.
+ 4. Select **Apply**. This operation may take about two minutes.
![Configure compute](media/spatial-analysis/configure-compute.png)
-### Set up an Edge compute role and create an IoT Hub resource
+### Set up Azure Stack Edge role and create an IoT Hub resource
-In the [Azure portal](https://portal.azure.com/), navigate to your Azure Stack Edge resource. On the **Overview** page or navigation list, click the Edge compute **Get started** button. In the **Configure Edge compute** tile, click **Configure**.
+In the [Azure portal](https://portal.azure.com/), navigate to your Azure Stack Edge resource. On the **Overview** page or navigation list, select the Edge compute **Get started** button. In the **Configure Edge compute** tile, select **Configure**.
![Link](media/spatial-analysis/configure-edge-compute-tile.png)

In the **Configure Edge compute** page, choose an existing IoT Hub, or choose to create a new one. By default, a Standard (S1) pricing tier is used to create an IoT Hub resource. To use a free tier IoT Hub resource, create one and then select it. The IoT Hub resource uses the same subscription and resource group that is used by the Azure Stack Edge resource.
-Click **Create**. The IoT Hub resource creation may take a couple of minutes. After the IoT Hub resource is created, the **Configure Edge compute** tile will update to show the new configuration. To confirm that the Edge compute role has been configured, select **View config** on the **Configure compute** tile.
+Select **Create**. The IoT Hub resource creation may take a couple of minutes. After the IoT Hub resource is created, the **Configure Edge compute** tile will update to show the new configuration. To confirm that the Edge compute role has been configured, select **View config** on the **Configure compute** tile.
When the Edge compute role is set up on the Edge device, it creates two devices: an IoT device and an IoT Edge device. Both devices can be viewed in the IoT Hub resource. The Azure IoT Edge Runtime will already be running on the IoT Edge device.
sudo az iot hub create --name "<iothub-group-name>" --sku S1 --resource-group "<
sudo az iot hub device-identity create --hub-name "<iothub-name>" --device-id "<device-name>" --edge-enabled ```
-You will need to install [Azure IoT Edge](../../iot-edge/how-to-provision-single-device-linux-symmetric.md) version 1.0.9. Follow these steps to download the correct version:
+You'll need to install [Azure IoT Edge](../../iot-edge/how-to-provision-single-device-linux-symmetric.md) version 1.0.9. Follow these steps to download the correct version:
Ubuntu Server 18.04: ```bash
Then, select either **NC6** or **NC6_Promo**.
:::image type="content" source="media/spatial-analysis/promotional-selection.png" alt-text="promotional selection" lightbox="media/spatial-analysis/promotional-selection.png":::
-Next, Create the VM. Once created, navigate to the VM resource in the Azure portal and select `Extensions` from the left pane. Click on "Add" to bring up the extensions window with all available extensions. Search for and select `NVIDIA GPU Driver Extension`, click create, and complete the wizard.
+Next, create the VM. Once created, navigate to the VM resource in the Azure portal and select `Extensions` from the left pane. Select **Add** to bring up the extensions window with all available extensions. Search for and select `NVIDIA GPU Driver Extension`, select **Create**, and complete the wizard.
Once the extension is successfully applied, navigate to the VM main page in the Azure portal and select `Connect`. The VM can be accessed either through SSH or RDP. RDP will be helpful as it enables viewing of the visualizer window (explained later). Configure the RDP access by following [these steps](../../virtual-machines/linux/use-remote-desktop.md) and opening a remote desktop connection to the VM.
sudo az iot hub create --name "<iothub-name>" --sku S1 --resource-group "<resour
sudo az iot hub device-identity create --hub-name "<iothub-name>" --device-id "<device-name>" --edge-enabled ```
-You will need to install [Azure IoT Edge](../../iot-edge/how-to-provision-single-device-linux-symmetric.md) version 1.0.9. Follow these steps to download the correct version:
+You'll need to install [Azure IoT Edge](../../iot-edge/how-to-provision-single-device-linux-symmetric.md) version 1.0.9. Follow these steps to download the correct version:
Ubuntu Server 18.04: ```bash
Once the deployment is complete and the container is running, the **host compute
## Configure the operations performed by Spatial Analysis
-You will need to use [Spatial Analysis operations](spatial-analysis-operations.md) to configure the container to use connected cameras, configure the operations, and more. For each camera device you configure, the operations for Spatial Analysis will generate an output stream of JSON messages, sent to your instance of Azure IoT Hub.
+You'll need to use [Spatial Analysis operations](spatial-analysis-operations.md) to configure the container to use connected cameras, configure the operations, and more. For each camera device you configure, the operations for Spatial Analysis will generate an output stream of JSON messages, sent to your instance of Azure IoT Hub.
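The JSON messages mentioned above can be consumed by any downstream handler. As a minimal sketch (the message shape here is illustrative, not the documented Spatial Analysis schema), a handler might summarize each event like this:

```python
import json

# Illustrative (not official) shape of a Spatial Analysis event message.
sample_message = json.dumps({
    "events": [{"type": "personCountEvent", "properties": {"personCount": 3}}],
    "sourceInfo": {"id": "camera01", "timestamp": "2022-06-13T01:11:26Z"},
})

def summarize_event(raw: str) -> str:
    """Extract a one-line summary from a JSON event message."""
    msg = json.loads(raw)
    camera = msg["sourceInfo"]["id"]
    event = msg["events"][0]
    return f'{camera}: {event["type"]} -> {event["properties"]}'

print(summarize_event(sample_message))
```

In practice, the raw message body arrives from the IoT Hub endpoint; the parsing step is the same regardless of transport.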
## Use the output generated by the container If you want to start consuming the output generated by the container, see the following articles:
-* Use the Azure Event Hub SDK for your chosen programming language to connect to the Azure IoT Hub endpoint and receive the events. See [Read device-to-cloud messages from the built-in endpoint](../../iot-hub/iot-hub-devguide-messages-read-builtin.md) for more information.
+* Use the Azure Event Hubs SDK for your chosen programming language to connect to the Azure IoT Hub endpoint and receive the events. For more information, see [Read device-to-cloud messages from the built-in endpoint](../../iot-hub/iot-hub-devguide-messages-read-builtin.md).
* Set up Message Routing on your Azure IoT Hub to send the events to other endpoints or save the events to Azure Blob Storage, etc. See [IoT Hub Message Routing](../../iot-hub/iot-hub-devguide-messages-d2c.md) for more information. ## Running Spatial Analysis with a recorded video file
You can use Spatial Analysis with either recorded or live video. To use Spatial An
Navigate to the **Container** section, and either create a new container or use an existing one. Then upload the video file to the container. Expand the file settings for the uploaded file, and select **Generate SAS**. Be sure to set the **Expiry Date** long enough to cover the testing period. Set **Allowed Protocols** to *HTTP* (*HTTPS* is not supported).
-Click on **Generate SAS Token and URL** and copy the Blob SAS URL. Replace the starting `https` with `http` and test the URL in a browser that supports video playback.
+Select **Generate SAS Token and URL** and copy the Blob SAS URL. Replace the starting `https` with `http` and test the URL in a browser that supports video playback.
Replace `VIDEO_URL` in the deployment manifest for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) with the URL you created, for all of the graphs. Set `VIDEO_IS_LIVE` to `false`, and redeploy the Spatial Analysis container with the updated manifest. See the example below.
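The manifest edit above can be scripted. The sketch below assumes a simplified manifest shape with `VIDEO_URL` and `VIDEO_IS_LIVE` settings per graph (the real deployment manifest is structured differently); it also applies the earlier `https`-to-`http` swap:

```python
import json

# Minimal stand-in for a deployment manifest fragment; the real manifest
# structure differs, but VIDEO_URL / VIDEO_IS_LIVE are the settings this
# step replaces for every graph.
manifest = {
    "graphs": {
        "graph1": {"VIDEO_URL": "<placeholder>", "VIDEO_IS_LIVE": True},
        "graph2": {"VIDEO_URL": "<placeholder>", "VIDEO_IS_LIVE": True},
    }
}

def point_graphs_at_recording(manifest: dict, sas_url: str) -> dict:
    """Swap the https SAS URL to http and mark every graph as non-live."""
    video_url = sas_url.replace("https://", "http://", 1)
    for graph in manifest["graphs"].values():
        graph["VIDEO_URL"] = video_url
        graph["VIDEO_IS_LIVE"] = False
    return manifest

updated = point_graphs_at_recording(
    manifest, "https://account.blob.core.windows.net/container/video.mp4?sv=..."
)
print(json.dumps(updated["graphs"]["graph1"], indent=2))
```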
The Spatial Analysis module will start consuming video file and will continuousl
## Troubleshooting
-If you encounter issues when starting or running the container, see [telemetry and troubleshooting](spatial-analysis-logging.md) for steps for common issues. This article also contains information on generating and collecting logs and collecting system health.
+If you encounter issues when starting or running the container, see [Telemetry and troubleshooting](spatial-analysis-logging.md) for steps for common issues. This article also contains information on generating and collecting logs and collecting system health.
[!INCLUDE [Diagnostic container](../containers/includes/diagnostics-container.md)]
If you encounter issues when starting or running the container, see [telemetry a
The Spatial Analysis container sends billing information to Azure, using a Computer Vision resource on your Azure account. The use of Spatial Analysis in public preview is currently free.
-Azure Cognitive Services containers aren't licensed to run without being connected to the metering / billing endpoint. You must enable the containers to communicate billing information with the billing endpoint at all times. Cognitive Services containers don't send customer data, such as the video or image that's being analyzed, to Microsoft.
+Azure Cognitive Services containers aren't licensed to run without being connected to the metering / billing endpoint. You must always enable the containers to communicate billing information with the billing endpoint. Cognitive Services containers don't send customer data, such as the video or image that's being analyzed, to Microsoft.
## Summary
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/whats-new.md
The Computer Vision API v3.2 is now generally available with the following updat
* Improved image tagging model: analyzes visual content and generates relevant tags based on objects, actions, and content displayed in the image. This model is available through the [Tag Image API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f200). See the Image Analysis [how-to guide](./how-to/call-analyze-image.md) and [overview](./overview-image-analysis.md) to learn more. * Updated content moderation model: detects presence of adult content and provides flags to filter images containing adult, racy, and gory visual content. This model is available through the [Analyze API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b). See the Image Analysis [how-to guide](./how-to/call-analyze-image.md) and [overview](./overview-image-analysis.md) to learn more. * [OCR (Read) available for 73 languages](./language-support.md#optical-character-recognition-ocr) including Simplified and Traditional Chinese, Japanese, Korean, and Latin languages.
-* [OCR (Read)](./overview-ocr.md) also available as a [Distroless container](./computer-vision-how-to-install-containers.md?tabs=version-3-2) for on-premise deployment.
+* [OCR (Read)](./overview-ocr.md) also available as a [Distroless container](./computer-vision-how-to-install-containers.md?tabs=version-3-2) for on-premises deployment.
> [!div class="nextstepaction"] > [See Computer Vision v3.2 GA](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) ### PersonDirectory data structure
-* In order to perform face recognition operations such as Identify and Find Similar, Face API customers need to create an assorted list of **Person** objects. The new **PersonDirectory** is a data structure that contains unique IDs, optional name strings, and optional user metadata strings for each **Person** identity added to the directory. Currently, the Face API offers the **LargePersonGroup** structure which has similar functionality but is limited to 1 million identities. The **PersonDirectory** structure can scale up to 75 million identities.
+* In order to perform face recognition operations such as Identify and Find Similar, Face API customers need to create an assorted list of **Person** objects. The new **PersonDirectory** is a data structure that contains unique IDs, optional name strings, and optional user metadata strings for each **Person** identity added to the directory. Currently, the Face API offers the **LargePersonGroup** structure which has similar functionality but is limited to 1 million identities. The **PersonDirectory** structure can scale up to 75 million identities.
* Another major difference between **PersonDirectory** and previous data structures is that you'll no longer need to make any Train calls after adding faces to a **Person** object&mdash;the update process happens automatically. For more details see [Use the PersonDirectory structure](how-to/use-persondirectory.md). ## March 2021
The Computer Vision Read API v3.2 public preview, available as cloud service and
* Natural reading order for the text line output (Latin languages only) * Handwriting style classification for text lines along with a confidence score (Latin languages only). * Extract text only for selected pages for a multi-page document.
-* Available as a [Distroless container](./computer-vision-how-to-install-containers.md?tabs=version-3-2) for on-premise deployment.
+* Available as a [Distroless container](./computer-vision-how-to-install-containers.md?tabs=version-3-2) for on-premises deployment.
See the [Read API how-to guide](how-to/call-read-api.md) to learn more.
A new version of the [Spatial Analysis container](spatial-analysis-container.md)
## November 2020 ### Sample Face enrollment app
-* The team published a sample Face enrollment app to demonstrate best practices for establishing meaningful consent and creating high-accuracy face recognition systems through high-quality enrollments. The open-source sample can be found in the [Build an enrollment app](Tutorials/build-enrollment-app.md) guide and on [GitHub](https://github.com/Azure-Samples/cognitive-services-FaceAPIEnrollmentSample), ready for developers to deploy or customize.
+* The team published a sample Face enrollment app to demonstrate best practices for establishing meaningful consent and creating high-accuracy face recognition systems through high-quality enrollments. The open-source sample can be found in the [Build an enrollment app](Tutorials/build-enrollment-app.md) guide and on [GitHub](https://github.com/Azure-Samples/cognitive-services-FaceAPIEnrollmentSample), ready for developers to deploy or customize.
## October 2020
Follow an [Extract text quickstart](https://github.com/Azure-Samples/cognitive-s
## April 2019 ### Improved attribute accuracy
-* Improved overall accuracy of the `age` and `headPose` attributes. The `headPose` attribute is also updated with the `pitch` value enabled now. Use these attributes by specifying them in the `returnFaceAttributes` parameter of [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
+* Improved overall accuracy of the `age` and `headPose` attributes. The `headPose` attribute is also updated with the `pitch` value enabled now. Use these attributes by specifying them in the `returnFaceAttributes` parameter of [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236).
### Improved processing speeds * Improved speeds of [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42) operations.
cognitive-services Get Started Build Detector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/get-started-build-detector.md
Previously updated : 01/12/2022 Last updated : 06/13/2022 keywords: image recognition, image recognition app, custom vision
cognitive-services Getting Started Build A Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/getting-started-build-a-classifier.md
Previously updated : 02/02/2022 Last updated : 06/13/2022 keywords: image recognition, image recognition app, custom vision
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/overview.md
Previously updated : 02/28/2022 Last updated : 06/13/2022 keywords: image recognition, image identifier, image recognition app, custom vision
cognitive-services Image Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/quickstarts/image-classification.md
Previously updated : 09/28/2021 Last updated : 06/13/2022 ms.devlang: csharp, golang, java, javascript, python keywords: custom vision, image recognition, image recognition app, image analysis, image recognition software
cognitive-services Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/quickstarts/object-detection.md
Previously updated : 09/28/2021 Last updated : 06/13/2022 ms.devlang: csharp, golang, java, javascript, python keywords: custom vision
cognitive-services Select Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/select-domain.md
Title: "Select a domain for a Custom Vision project - Computer Vision"
description: This article will show you how to select a domain for your project in the Custom Vision Service. -+ Previously updated : 01/05/2022- Last updated : 06/13/2022+ # Select a domain for a Custom Vision project
cognitive-services Use Prediction Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/use-prediction-api.md
Previously updated : 10/27/2021 Last updated : 06/13/2022 ms.devlang: csharp
After you've trained your model, you can test images programmatically by submitting them to the prediction API endpoint. In this guide, you'll learn how to call the prediction API to score an image. You'll learn the different ways you can configure the behavior of this API to meet your needs. - > [!NOTE] > This document demonstrates use of the .NET client library for C# to submit an image to the Prediction API. For more information and examples, see the [Prediction API reference](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c15).
cognitive-services Get Speech Recognition Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-speech-recognition-results.md
Previously updated : 03/31/2022 Last updated : 06/13/2022 ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python zone_pivot_groups: programming-languages-speech-sdk-cli
cognitive-services Get Started Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-speech-to-text.md
Previously updated : 02/11/2022 Last updated : 06/13/2022 ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
cognitive-services Get Started Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-speech-translation.md
Previously updated : 06/07/2022 Last updated : 06/13/2022 zone_pivot_groups: programming-languages-speech-services keywords: speech translation
cognitive-services Get Started Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-text-to-speech.md
Previously updated : 01/08/2022 Last updated : 06/13/2022 ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
cognitive-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md
Previously updated : 05/31/2022 Last updated : 06/13/2022 zone_pivot_groups: programming-languages-speech-sdk
cognitive-services How To Use Audio Input Streams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-audio-input-streams.md
Previously updated : 07/05/2019 Last updated : 06/13/2022 ms.devlang: csharp
cognitive-services Language Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-identification.md
Previously updated : 02/13/2022 Last updated : 06/13/2022 zone_pivot_groups: programming-languages-speech-services-nomore-variant
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
Previously updated : 02/02/2022 Last updated : 06/13/2022
cognitive-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/regions.md
Previously updated : 01/16/2022 Last updated : 06/13/2022
cognitive-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-to-text.md
Previously updated : 01/16/2022 Last updated : 06/13/2022 keywords: speech to text, speech to text software
cognitive-services Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-translation.md
Previously updated : 01/16/2022 Last updated : 06/13/2022 keywords: speech translation
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/text-to-speech.md
Previously updated : 01/16/2022 Last updated : 06/13/2022 keywords: text to speech
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/entity-linking/quickstart.md
Previously updated : 06/07/2022 Last updated : 06/13/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/quickstart.md
Previously updated : 06/07/2022 Last updated : 06/13/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/quickstart.md
Previously updated : 06/07/2022 Last updated : 06/13/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/quickstart.md
Previously updated : 06/06/2022 Last updated : 06/13/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/quickstart.md
Previously updated : 06/06/2022 Last updated : 06/13/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Migrate Qnamaker To Question Answering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/migrate-qnamaker-to-question-answering.md
+
+ Title: Migrate from QnA Maker to Question Answering
+description: Details on features, requirements, and examples for migrating from QnA Maker to Question Answering
+++ Last updated : 6/9/2022++
+# Migrate from QnA Maker to Question Answering
+
+**Purpose of this document:** This article provides information to help you successfully migrate applications that use QnA Maker to Question Answering. It covers the following:
+
+ - Comparison of features across QnA Maker and Question Answering
+ - Pricing
+ - Simplified Provisioning and Development Experience
+ - Migration phases
+ - Common migration scenarios
+ - Migration steps
+
+**Intended Audience:** Existing QnA Maker customers
+
+> [!IMPORTANT]
+> Question Answering, a feature of Azure Cognitive Service for Language, was introduced in November 2021 with several new capabilities including enhanced relevance using a deep learning ranker, precise answers, and end-to-end region support. Each question answering project is equivalent to a knowledge base in QnA Maker. Resource-level settings such as role-based access control (RBAC) are not migrated to the new resource and must be reconfigured for the Language resource after migration:
+>
+> - Automatic RBAC to Language project (not resource)
+> - Automatic enabling of analytics.
+
+You will also need to [re-enable analytics](analytics.md) for the language resource.
+
+## Comparison of features
+
+In addition to a new set of features, Question Answering provides many technical improvements to common features.
+
+|Feature|QnA Maker|Question Answering|Details|
+|-|||-|
+|State-of-the-art transformer-based models|➖|✔️|Turing-based models that enable QnA search at web scale.|
+|Pre-built capability|➖|✔️|This capability lets you use question answering without having to ingest content or manage resources.|
+|Precise answering|➖|✔️|Question Answering supports precise answering with the help of state-of-the-art models.|
+|Smart URL Refresh|➖|✔️|Question Answering provides a means to refresh ingested content from public sources with a single click.|
+|Q&A over knowledge base (hierarchical extraction)|✔️|✔️| |
+|Active learning|✔️|✔️|Question Answering has an improved active learning model.|
+|Alternate Questions|✔️|✔️|The improved models in question answering reduce the need to add alternate questions.|
+|Synonyms|✔️|✔️| |
+|Metadata|✔️|✔️| |
+|Question Generation (private preview)|➖|✔️|This new feature will allow generation of questions over text.|
+|Support for unstructured documents|➖|✔️|Users can now ingest unstructured documents as input sources and query the content for responses.|
+|.NET SDK|✔️|✔️| |
+|API|✔️|✔️| |
+|Unified Authoring experience|➖|✔️|A single authoring experience across all Azure Cognitive Services for Language|
+|Multi-region support|➖|✔️| |
+
+## Pricing
+
+When you're considering a migration to Question Answering, keep the following in mind:
+
+- Knowledge base/project content and size have no impact on pricing
+
+- "Text Records" in Question Answering refer to the query submitted by the user to the runtime; this concept is common to all features within the Language service
+
+Here you can find the pricing details for [Question Answering](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/) and [QnA Maker](https://azure.microsoft.com/pricing/details/cognitive-services/qna-maker/).
+
+The [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) can provide even more detail.
+
+## Simplified Provisioning and Development Experience
+
+With the Language service, QnA Maker customers now benefit from a single service that provides Text Analytics, LUIS and Question Answering as features of the language resource. The Language service provides:
+
+- One Language resource to access all above capabilities
+- A single pane of authoring experience across capabilities
+- A unified set of APIs across all the capabilities
+- A cohesive, simpler, and powerful product
+
+Learn how to get started in [Language Studio](../../language-studio.md)
+
+## Migration Phases
+
+If you or your organization have applications in development or production that use QnA Maker, you should update them to use Question Answering as soon as possible. See the following links for available APIs, SDKs, Bot SDKs and code samples.
+
+Following are the broad migration phases to consider:
+
+![A chart showing the phases of a successful migration](../media/migrate-qnamaker-to-question-answering/migration-phases.png)
+
+These additional links can help you:
+- [Authoring portal](https://language.cognitive.azure.com/home)
+- [API](authoring.md)
+- [SDK](https://docs.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker)
+- Bot SDK: For bots to use custom question answering, use the [Bot.Builder.AI.QnA](https://www.nuget.org/packages/Microsoft.Bot.Builder.AI.QnA/) SDK. We recommend that customers continue to use this SDK for their bot integrations. Here are some sample usages in a bot's code: [Sample 1](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/48.customQABot-all-features) [Sample 2](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/12.customQABot)
+
+## Common migration scenarios
+
+This section compares two hypothetical migration scenarios from QnA Maker to Question Answering. They can help you determine the right set of migration steps for your situation.
+
+> [!NOTE]
+> These scenarios are intended to be representative of real customer migrations, but individual customer scenarios will of course differ. This article doesn't include pricing details; see [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/) instead.
+
+> [!IMPORTANT]
+> Each question answering project is equivalent to a knowledge base in QnA Maker. Resource-level settings such as role-based access control (RBAC) are not migrated to the new resource and must be reconfigured for the Language resource after migration. You will also need to [re-enable analytics](analytics.md) for the language resource.
+
+### Migration scenario 1: No custom authoring portal
+
+In the first migration scenario, the customer uses qnamaker.ai as the authoring portal and they want to migrate their QnA Maker knowledge bases to Custom Question Answering.
+
+[Migrate your project from QnA Maker to Question Answering](migrate-qnamaker.md)
+
+Once migrated to Question Answering:
+
+- The resource level settings need to be reconfigured for the language resource
+- Customer validations should start on the migrated knowledge bases, covering:
+ - Size validation
+ - Number of QnA pairs in all knowledge bases matching pre- and post-migration
+ - Answers for sample questions pre- and post-migration
+ - Response time for questions answered in v1 vs. v2
+ - Retention of prompts
+- Customers need to establish new thresholds for their knowledge bases in custom question answering, because the confidence score mapping differs from QnA Maker.
+- Customers can use the batch testing tool post-migration to test the newly created project in custom question answering.
+
+Old QnA Maker resources need to be manually deleted.
+
+Here are some [detailed steps](migrate-qnamaker.md) on migration scenario 1.
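Because the confidence score mapping differs between the two systems, thresholds must be re-established empirically. As a hedged sketch (the helper below is hypothetical, not part of any SDK), one approach is to pick a new threshold that reproduces the old accept/reject decisions on a set of sample questions:

```python
# Hypothetical helper: given confidence scores that the old (QnA Maker) and
# new (Question Answering) systems assign to the same sample questions, pick
# a new threshold that accepts exactly the questions the old threshold accepted.
def remap_threshold(old_scores, new_scores, old_threshold):
    accepted_new = [n for o, n in zip(old_scores, new_scores) if o >= old_threshold]
    rejected_new = [n for o, n in zip(old_scores, new_scores) if o < old_threshold]
    if not accepted_new:
        return None  # nothing was accepted before; no threshold to preserve
    # Any value in (max(rejected), min(accepted)] reproduces the old decisions;
    # use the midpoint of that gap when it exists.
    low = max(rejected_new, default=0.0)
    high = min(accepted_new)
    return (low + high) / 2 if high > low else high

old = [0.92, 0.40, 0.75, 0.10]
new = [0.81, 0.22, 0.64, 0.05]  # example: scores shift downward under the new ranker
print(remap_threshold(old, new, old_threshold=0.5))  # → 0.43
```

If the old and new rankings disagree on which questions to accept, no single threshold reproduces the old behavior exactly, and the threshold has to be tuned by inspection instead.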
+
+### Migration scenario 2
+
+In this migration scenario, the customer may have created their own authoring frontend leveraging the QnA Maker authoring APIs or QnA Maker SDKs.
+
+To migrate the SDKs, they should perform the following steps:
+
+This [SDK Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Language.QuestionAnswering_1.1.0-beta.1/sdk/cognitivelanguage/Azure.AI.Language.QuestionAnswering/MigrationGuide.md) is intended to assist in the migration to the new Question Answering client library, [Azure.AI.Language.QuestionAnswering](https://www.nuget.org/packages/Azure.AI.Language.QuestionAnswering), from the old one, [Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker). It will focus on side-by-side comparisons for similar operations between the two packages.
+
+They should also perform the steps required to migrate their knowledge bases to a new project within the Language resource.
+
+Once migrated to Question Answering:
+- The resource level settings need to be reconfigured for the language resource
+- Customer validations should start on the migrated knowledge bases, covering:
+ - Size validation
+ - Number of QnA pairs in all knowledge bases matching pre- and post-migration
+ - Confidence score mapping
+ - Answers for sample questions pre- and post-migration
+ - Response time for questions answered in v1 vs. v2
+ - Retention of prompts
+ - Batch testing pre- and post-migration
+- Old QnA Maker resources need to be manually deleted.
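Batch testing pre- and post-migration boils down to comparing the answers returned for the same sample questions. A minimal illustrative sketch (this is not the batch testing tool itself; the questions and answers are made up):

```python
# Illustrative pre/post-migration validation: compare the top answers returned
# by QnA Maker (pre) and Question Answering (post) for the same sample questions.
def diff_answers(pre: dict, post: dict) -> list:
    """Return the questions whose top answer changed after migration."""
    return sorted(q for q in pre if post.get(q) != pre[q])

pre = {"reset password?": "Use the self-service portal.",
       "refund policy?": "Refunds within 30 days."}
post = {"reset password?": "Use the self-service portal.",
        "refund policy?": "Refunds are available within 30 days."}
print(diff_answers(pre, post))  # → ['refund policy?']
```

Changed answers aren't necessarily regressions; they flag the question/answer pairs worth reviewing by hand.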
+
+Additionally, customers who have to migrate and upgrade their bot can use the upgraded bot code, which is published as a NuGet package.
+
+Here you can find some code samples: [Sample 1](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/48.customQABot-all-features) [Sample 2](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/12.customQABot)
+
+Here are [detailed steps on migration scenario 2](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Language.QuestionAnswering_1.1.0-beta.1/sdk/cognitivelanguage/Azure.AI.Language.QuestionAnswering/MigrationGuide.md)
+
+Learn more about the [pre-built API](../../../QnAMaker/How-To/using-prebuilt-api.md)
+
+Learn more about the [Question Answering Get Answers REST API](https://docs.microsoft.com/rest/api/cognitiveservices/questionanswering/question-answering/get-answers)
+
+## Migration steps
+
+Note that which of these steps you need depends on your existing architecture. Refer to the migration phases above to determine which steps apply to your migration.
+
+![A chart showing the steps of a successful migration](../media/migrate-qnamaker-to-question-answering/migration-steps.png)
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/quickstart.md
Previously updated : 06/06/2022 Last updated : 06/13/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/quickstart.md
Previously updated : 06/06/2022 Last updated : 06/13/2022 ms.devlang: csharp, java, javascript, python
confidential-computing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/overview.md
Confidential computing allows you to isolate your sensitive data while it's bein
> [!VIDEO https://www.youtube.com/embed/rT6zMOoLEqI]
-We know that securing your cloud data is important. We hear your concerns. Here's just a few questions that our customers may have when moving sensitive workloads to the cloud:
+We know that securing your cloud data is important. We hear your concerns. Here are just a few questions that our customers may have when moving sensitive workloads to the cloud:
- How do I make sure Microsoft can't access data that isn't encrypted? - How do I prevent security threats from privileged admins inside my company?
We know that securing your cloud data is important. We hear your concerns. Here'
Azure helps you minimize your attack surface to gain stronger data protection. Azure already offers many tools to safeguard [**data at rest**](../security/fundamentals/encryption-atrest.md) through models such as client-side encryption and server-side encryption. Additionally, Azure offers mechanisms to encrypt [**data in transit**](../security/fundamentals/data-encryption-best-practices.md#protect-data-in-transit) through secure protocols like TLS and HTTPS. This page introduces a third leg of data encryption - the encryption of **data in use**.
-## Introduction to confidential computing
+## Introduction to confidential computing
Confidential computing is an industry term defined by the [Confidential Computing Consortium](https://confidentialcomputing.io/) (CCC) - a foundation dedicated to defining and accelerating the adoption of confidential computing. The CCC defines confidential computing as: The protection of data in use by performing computations in a hardware-based Trusted Execution Environment (TEE).
-A TEE is an environment that enforces execution of only authorized code. Any data in the TEE can't be read or tampered with by any code outside that environment.
+A TEE is an environment that enforces execution of only authorized code. Any data in the TEE can't be read or tampered with by any code outside that environment.
### Lessen the need for trust

Running workloads on the cloud requires trust. You give this trust to various providers enabling different components of your application.
Running workloads on the cloud requires trust. You give this trust to various pr
**App software vendors**: Trust software by deploying on-prem, using open-source, or by building in-house application software.
-**Hardware vendors**: Trust hardware by using on-premise hardware or in-house hardware.
+**Hardware vendors**: Trust hardware by using on-premises hardware or in-house hardware.
-**Infrastructure providers**: Trust cloud providers or manage your own on-premise data centers.
+**Infrastructure providers**: Trust cloud providers or manage your own on-premises data centers.
Azure confidential computing makes it easier to trust the cloud provider, by reducing the need for trust across various aspects of the compute cloud infrastructure. Azure confidential computing minimizes trust for the host OS kernel, the hypervisor, the VM admin, and the host admin.
confidential-ledger Authenticate Ledger Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/authenticate-ledger-nodes.md
Azure confidential ledger nodes can be authenticated by code samples and by user
## Code samples
-When initializing, code samples get the node certificate by querying Identity Service. After retrieving the node certificate, a code sample will query the Ledger network to get a quote, which is then validated using the Host Verify binaries. If the verification succeeds, the code sample proceeds to Ledger operations.
+When initializing, code samples get the node certificate by querying Identity Service. After retrieving the node certificate, a code sample will query the ledger network to get a quote, which is then validated using the Host Verify binaries. If the verification succeeds, the code sample proceeds to ledger operations.
## Users
-Users can validate the authenticity of Azure confidential ledger nodes to confirm they are indeed interfacing with their Ledger's enclave. You can build trust in Azure confidential ledger nodes in a few ways, which can be stacked on one another to increase the overall level of confidence. As such, Steps 1-2 help build confidence in that Azure confidential ledger enclave as part of the initial TLS handshake and authentication within functional workflows. Beyond that, a persistent client connection is maintained between the user's client and the confidential ledger.
+Users can validate the authenticity of Azure confidential ledger nodes to confirm they are indeed interfacing with their ledger's enclave. You can build trust in Azure confidential ledger nodes in a few ways, which can be stacked on one another to increase the overall level of confidence. As such, Steps 1-2 help build confidence in the Azure confidential ledger enclave as part of the initial TLS handshake and authentication within functional workflows. Beyond that, a persistent client connection is maintained between the user's client and the confidential ledger.
-- **Validating a confidential ledger node**: This is accomplished by querying the identity service hosted by Microsoft, which provides a network cert and thus helps verify that the Ledger node is presenting a cert endorsed/signed by the network cert for that specific instance. Similar to PKI-based HTTPS, a server's cert is signed by a well-known Certificate Authority (CA) or intermediate CA. In the case of Azure confidential ledger, the CA cert is returned by an Identity service in the form of a network cert. This is an important confidence building measure for users of confidential ledger. If this node cert isn't signed by the returned network cert, the client connection should fail (as implemented in the sample code).
+- **Validating a confidential ledger node**: This is accomplished by querying the identity service hosted by Microsoft, which provides a network cert and thus helps verify that the ledger node is presenting a cert endorsed/signed by the network cert for that specific instance. Similar to PKI-based HTTPS, a server's cert is signed by a well-known Certificate Authority (CA) or intermediate CA. In the case of Azure confidential ledger, the CA cert is returned by an Identity service in the form of a network cert. This is an important confidence building measure for users of confidential ledger. If this node cert isn't signed by the returned network cert, the client connection should fail (as implemented in the sample code).
- **Validating a confidential ledger enclave**: A confidential ledger runs in an Intel® SGX enclave that’s represented by a Quote, a data blob generated inside that enclave. It can be used by any other entity to verify that the quote has been produced from an application running with Intel® SGX protections. The quote is structured in a way that enables easy verification. It contains claims that help identify various properties of the enclave and the application that it’s running. This is an important confidence building mechanism for users of the confidential ledger. This can be accomplished by calling a functional workflow API to get an enclave quote. The client connection should fail if the quote is invalid. The retrieved quote can then be validated with the open_enclaves Host_Verify tool. More details about this can be found [here](https://github.com/openenclave/openenclave/tree/master/samples/host_verify).

## Next steps
confidential-ledger Create Client Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/create-client-certificate.md
# Creating a Client Certificate
-The Azure confidential ledger APIs require client certificate-based authentication. Only those certificates added to an allowlist during Ledger Creation or Ledger Update can be used to call the confidential ledger Functional APIs.
+The Azure confidential ledger APIs require client certificate-based authentication. Only those certificates added to an allowlist during ledger creation or a ledger update can be used to call the confidential ledger Functional APIs.
-You will need a certificate in PEM format. You can create more than one certificate and add or delete them using Ledger Update API.
+You will need a certificate in PEM format. You can create more than one certificate and add or delete them using the ledger update API.
## OpenSSL
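As a minimal sketch, a PEM client certificate and key pair can be generated with OpenSSL (the curve, validity period, and subject name here are illustrative choices, not requirements of the service):

```shell
# Generate an EC private key (the curve choice here is illustrative)
openssl ecparam -name secp384r1 -genkey -noout -out client_key.pem

# Issue a self-signed client certificate in PEM format
openssl req -new -x509 -key client_key.pem -days 365 -subj "/CN=acl-client" -out client_cert.pem

# Inspect the resulting certificate's subject
openssl x509 -in client_cert.pem -noout -subject
```

The contents of `client_cert.pem` are what you would add to the ledger's certificate allowlist.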
confidential-ledger Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/faq.md
- Title: Frequently asked questions for Azure confidential ledger
-description: Frequently asked questions for Azure confidential ledger
---- Previously updated : 04/15/2021-----
-# Frequently asked questions for Azure confidential ledger
-
-## How can I tell if the ACC Ledger service would be useful to my organization?
-
-Azure confidential ledger is ideal for organizations with records valuable enough for a motivated attacker to try to compromise the underlying logging/storage system, including "insider" scenarios where a rogue employee might attempt to forge, modify, or remove previous records.
-
-## What makes ACC Ledger much more secure?
-
-As its name suggests, the Ledger utilizes [Azure Confidential Computing platform](../confidential-computing/index.yml). One Ledger spans across three or more identical instances, each of which run in a dedicated, fully attested hardware-backed enclave. The Ledger's integrity is maintained through a consensus-based blockchain.
-
-## When writing to the ACC Ledger, do I need to store write receipts?
-
-Not necessarily. Some solutions today require users to maintain write receipts for future log validation. This requires users to manage those receipts in a secure storage facility, which adds an extra burden. The Ledger eliminates this challenge through a Merkle tree-based approach, where write receipts include a full tree path to a signed root-of-trust. Users can verify transactions without storing or managing any Ledger data.
-
-## How do I verify Ledger's authenticity?
-
-You can verify that the Ledger server nodes that your client is communicating with are authentic. For details, see [Authenticating confidential ledger Nodes](authenticate-ledger-nodes.md).
---
-## Next steps
--- [Overview of Microsoft Azure confidential ledger](overview.md)
confidential-ledger Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/overview.md
# Microsoft Azure confidential ledger (preview)
-Microsoft Azure confidential ledger (ACL), is a new and highly secure service for managing sensitive data records. Based on a permissioned blockchain model, Azure confidential ledger offers unique data integrity advantages. These include immutability, making the ledger append-only, and tamper proofing, to ensure all records are kept intact.
+Microsoft Azure confidential ledger (ACL) is a new and highly secure service for managing sensitive data records. It runs exclusively on hardware-backed secure enclaves, heavily monitored and isolated runtime environments that keep potential attacks at bay. Furthermore, Azure confidential ledger runs on a minimalistic Trusted Computing Base (TCB), which ensures that no one, not even Microsoft, is "above" the ledger.
-The confidential ledger runs exclusively on hardware-backed secure enclaves, a heavily monitored and isolated runtime environment which keeps potential attacks at bay. Furthermore, no one is "above" the Ledger, not even Microsoft. By designing ourselves out of the solution, Azure confidential ledger runs on a minimalistic Trusted Computing Base (TCB) which prevents access to Ledger service developers, datacenter technicians and cloud administrators.
+As its name suggests, Azure confidential ledger utilizes the [Azure Confidential Computing platform](../confidential-computing/index.yml) and the [Confidential Consortium Framework](https://www.microsoft.com/research/project/confidential-consortium-framework) to provide a high integrity solution that is tamper-protected and evident. One ledger spans across three or more identical instances, each of which run in a dedicated, fully attested hardware-backed enclave. The ledger's integrity is maintained through a consensus-based blockchain.
-Azure confidential ledger appeals to use cases where critical metadata records must not be modified, including in perpetuity for regulatory compliance and archival purposes. Here are a few examples of things you can store on your Ledger:
+Azure confidential ledger offers unique data integrity advantages, including immutability, tamper-proofing, and append-only operations. These features, which ensure that all records are kept intact, are ideal when critical metadata records must not be modified, such as for regulatory compliance and archival purposes.
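The append-only, tamper-evident idea can be sketched with a toy hash chain (illustrative only; the actual service uses a consensus-based blockchain across attested enclaves, and this is not ACL's data format):

```python
import hashlib

def append(chain, record: bytes) -> bytes:
    """Append a record; each entry's digest commits to the previous one,
    so any later modification changes every subsequent digest."""
    prev = chain[-1][0] if chain else b"\x00" * 32
    digest = hashlib.sha256(prev + record).digest()
    chain.append((digest, record))
    return digest

def verify(chain) -> bool:
    """Recompute every digest; a single tampered record breaks the chain."""
    prev = b"\x00" * 32
    for digest, record in chain:
        if hashlib.sha256(prev + record).digest() != digest:
            return False
        prev = digest
    return True
```

Tampering with any committed record invalidates every digest after it, which is the property that makes records tamper-evident.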
+
+Here are a few examples of things you can store on your ledger:
- Records relating to your business transactions (for example, money transfers or confidential document edits).
- Updates to trusted assets (for example, core applications or contracts).
For more information, you can watch the [Azure confidential ledger demo](https:/
## Key Features
-The confidential ledger is exposed through REST APIs which can be integrated into new or existing applications. The confidential ledger can be managed by administrators utilizing Administrative APIs (Control Plane). It can also be called directly by application code through Functional APIs (Data Plane). The Administrative APIs support basic operations such as create, update, get and, delete. The Functional APIs allow direct interaction with your instantiated Ledger and include operations such as put and get data.
+The confidential ledger is exposed through REST APIs which can be integrated into new or existing applications. The confidential ledger can be managed by administrators utilizing Administrative APIs (Control Plane). It can also be called directly by application code through Functional APIs (Data Plane). The Administrative APIs support basic operations such as create, update, get, and delete. The Functional APIs allow direct interaction with your instantiated ledger and include operations such as put and get data.
## Ledger security
-This section defines the security protections for the Ledger. The Ledger APIs use client certificate-based authentication. Currently, the Ledger supports certificate-based authentication process with owner roles. We will be adding support for Azure Active Directory (AAD) based authentication and also role-based access (for example, owner, reader, and contributor).
+This section defines the security protections for the ledger. The ledger APIs use client certificate-based authentication. Currently, the ledger supports a certificate-based authentication process with owner roles. We will be adding support for Azure Active Directory (AAD) based authentication and role-based access (for example, owner, reader, and contributor).
-The data to the Ledger is sent through TLS 1.2 connection and the TLS 1.2 connection terminates inside the hardware backed security enclaves (Intel® SGX enclaves). This ensures that no one can intercept the connection between a customer's client and the confidential ledger server nodes.
+Data sent to the ledger travels over a TLS 1.2 connection, and the TLS 1.2 connection terminates inside the hardware-backed secure enclaves (Intel® SGX enclaves). This ensures that no one can intercept the connection between a customer's client and the confidential ledger server nodes.
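On the client side, the same guarantee can be approximated by refusing anything below TLS 1.2. A sketch using Python's standard `ssl` module (the service enforces this server-side; the CA file name below is a hypothetical placeholder):

```python
import ssl

# Require TLS 1.2 or later for any connection made with this context
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# A real client would also trust the ledger's network certificate, e.g.:
# context.load_verify_locations(cafile="network_cert.pem")  # hypothetical file name
```

Any handshake attempt with an older protocol version then fails before application data is exchanged.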
### Ledger storage
The confidential ledger can be managed by administrators utilizing Administrativ
The Functional APIs allow direct interaction with your instantiated confidential ledger and include operations such as put and get data.
-## Preview Limitations
+## Constraints
-- Once a confidential ledger is created, you cannot change the Ledger type.
+- Once a confidential ledger is created, you cannot change the ledger type.
- Azure confidential ledger does not support standard Azure Disaster Recovery at this time. However, Azure confidential ledger offers built-in redundancy within the Azure region, as the confidential ledger runs on multiple independent nodes.
- Azure confidential ledger deletion leads to a "hard delete", so your data will not be recoverable after deletion.
- Azure confidential ledger names must be globally unique. Ledgers with the same name, irrespective of their type, are not allowed.
The Functional APIs allow direct interaction with your instantiated confidential
|--|--|
| ACL | Azure confidential ledger |
| Ledger | An immutable append record of transactions (also known as a Blockchain) |
-| Commit | A confirmation that a transaction has been locally committed to a node. A local commit by itself does not guarantee that a transaction is part of the Ledger. |
-| Global commit | A confirmation that transaction was globally committed and is part of the Ledger. |
-| Receipt | Proof that the transaction was processed by the Ledger. |
+| Commit | A confirmation that a transaction has been locally committed to a node. A local commit by itself does not guarantee that a transaction is part of the ledger. |
+| Global commit | A confirmation that transaction was globally committed and is part of the ledger. |
+| Receipt | Proof that the transaction was processed by the ledger. |
## Next steps
confidential-ledger Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-powershell.md
+
+ Title: Quickstart - Microsoft Azure confidential ledger with Azure PowerShell
+description: Learn to use the Microsoft Azure confidential ledger through Azure PowerShell
++ Last updated : 06/08/2022+++++
+# Quickstart: Create a confidential ledger using Azure PowerShell
+
+Azure confidential ledger is a cloud service that provides a high integrity store for sensitive data logs and records that must be kept intact. In this quickstart you will use [Azure PowerShell](/powershell/azure/) to create a confidential ledger, view and update its properties, and delete it. For more information on Azure confidential ledger, and for examples of what can be stored in a confidential ledger, see [About Microsoft Azure confidential ledger](overview.md).
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
++
+In this quickstart, you create a confidential ledger with [Azure PowerShell](/powershell/azure/). If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+
+## Create a resource group
++
+## Get your principal ID and tenant ID
+
+To create a confidential ledger, you'll need your Azure Active Directory principal ID (also called your object ID). To obtain your principal ID, use the Azure PowerShell [Get-AzADUser](/powershell/module/az.resources/get-azaduser) cmdlet, with the `-SignedIn` flag:
+
+```azurepowershell
+Get-AzADUser -SignedIn
+```
+
+Your result will be listed under "Id", in the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
+
+## Create a confidential ledger
+
+Use the Azure PowerShell [New-AzConfidentialLedger](/powershell/module/az.confidentialledger/new-azconfidentialledger) cmdlet to create a confidential ledger in your new resource group.
+
+```azurepowershell
+New-AzConfidentialLedger -Name "myLedger" -ResourceGroupName "myResourceGroup" -Location "EastUS" -LedgerType "Public" -AadBasedSecurityPrincipal @{ LedgerRoleName="Administrator"; PrincipalId="34621747-6fc8-4771-a2eb-72f31c461f2e"; }
+
+```
+
+A successful operation will return the properties of the newly created ledger. Take note of the **ledgerUri**. In the example above, this URI is "https://myledger.confidential-ledger.azure.com".
+
+You'll need this URI to transact with the confidential ledger from the data plane.
+
+## View and update your confidential ledger properties
+
+You can view the properties associated with your newly created confidential ledger using the Azure PowerShell [Get-AzConfidentialLedger](/powershell/module/az.confidentialledger/get-azconfidentialledger) cmdlet.
+
+```azurepowershell
+Get-AzConfidentialLedger -Name "myLedger" -ResourceGroupName "myResourceGroup"
+```
+
+To update the properties of a confidential ledger, use the Azure PowerShell [Update-AzConfidentialLedger](/powershell/module/az.confidentialledger/update-azconfidentialledger) cmdlet. For instance, to update your ledger to change your role to "Reader", run:
+
+```azurepowershell
+Update-AzConfidentialLedger -Name "myLedger" -ResourceGroupName "myResourceGroup" -Location "EastUS" -LedgerType "Public" -AadBasedSecurityPrincipal @{ LedgerRoleName="Reader"; PrincipalId="34621747-6fc8-4771-a2eb-72f31c461f2e"; }
+```
+
+If you again run [Get-AzConfidentialLedger](/powershell/module/az.confidentialledger/get-azconfidentialledger), you'll see that the role has been updated.
+
+```json
+"ledgerRoleName": "Reader",
+```
+
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you created a confidential ledger by using Azure PowerShell. To learn more about Azure confidential ledger and how to integrate it with your applications, continue on to the articles below.
+
+- [Overview of Microsoft Azure confidential ledger](overview.md)
confidential-ledger Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-python.md
Microsoft Azure confidential ledger is a new and highly secure service for manag
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-[API reference documentation](/python/api/overview/azure/keyvault-secrets-readme) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/confidentialledger) | [Package (Python Package Index) Management Library](https://pypi.org/project/azure-mgmt-confidentialledger/)| [Package (Python Package Index) Client Library](https://pypi.org/project/azure-confidentialledger/)
+[API reference documentation](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-confidentialledger/latest/azure.confidentialledger.html) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/confidentialledger) | [Package (Python Package Index) Management Library](https://pypi.org/project/azure-mgmt-confidentialledger/)| [Package (Python Package Index) Client Library](https://pypi.org/project/azure-confidentialledger/)
## Prerequisites
connectors Connectors Create Api Oracledatabase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-oracledatabase.md
tags: connectors
Using the Oracle Database connector, you create organizational workflows that use data in your existing database. This connector can connect to an on-premises Oracle Database, or an Azure virtual machine with Oracle Database installed. With this connector, you can:

* Build your workflow by adding a new customer to a customers database, or updating an order in an orders database.
-* Use actions to get a row of data, insert a new row, and even delete. For example, when a record is created in Dynamics CRM Online (a trigger), then insert a row in an Oracle Database (an action).
+* Use actions to get a row of data, insert a new row, and even delete one. For example, when a record is created in Dynamics CRM Online (a trigger), insert a row in an Oracle Database (an action).
This connector doesn't support the following items:
This article shows you how to use the Oracle Database connector in a logic app.
## Prerequisites
-* Supported Oracle versions:
+* Supported Oracle versions:
  * Oracle 9 and later
  * Oracle Data Access Client (ODAC) 11.2 and later
-* Install the on-premises data gateway. [Connect to on-premises data from logic apps](../logic-apps/logic-apps-gateway-connection.md) lists the steps. The gateway is required to connect to an on-premises Oracle Database, or an Azure VM with Oracle DB installed.
+* Install the on-premises data gateway. [Connect to on-premises data from logic apps](../logic-apps/logic-apps-gateway-connection.md) lists the steps. The gateway is required to connect to an on-premises Oracle Database, or an Azure VM with Oracle DB installed.
> [!NOTE]
> The on-premises data gateway acts as a bridge, and provides a secure data transfer between on-premises data (data that is not in the cloud) and your logic apps. The same gateway can be used with multiple services, and multiple data sources. So, you may only need to install the gateway once.
-* Install the Oracle Client on the machine where you installed the on-premises data gateway. Make sure that you install the 64-bit Oracle Data Provider for .NET from Oracle, and select the Windows installer version because the `xcopy` version doesn't work with the on-premises data gateway:
+* Install the Oracle Client on the machine where you installed the on-premises data gateway. Make sure that you install the 64-bit Oracle Data Provider for .NET from Oracle, and select the Windows installer version because the `xcopy` version doesn't work with the on-premises data gateway:
[64-bit ODAC 12c Release 4 (12.1.0.2.4) for Windows x64](https://www.oracle.com/technetwork/database/windows/downloads/index-090165.html)
This article shows you how to use the Oracle Database connector in a logic app.
## Add the connector

> [!IMPORTANT]
-> This connector does not have any triggers. It has only actions. So when you create your logic app, add another trigger to start your logic app, such as **Schedule - Recurrence**, or **Request / Response - Response**.
+> This connector does not have any triggers. It has only actions. So when you create your logic app, add another trigger to start your logic app, such as **Schedule - Recurrence**, or **Request / Response - Response**.
1. In the [Azure portal](https://portal.azure.com), create a blank logic app.
-2. At the start of your logic app, select the **Request / Response - Request** trigger:
+2. At the start of your logic app, select the **Request / Response - Request** trigger:
![A dialog box has a box to search all triggers. There is also a single trigger shown, "Request / Response-Request", with a selection button.](./media/connectors-create-api-oracledatabase/request-trigger.png)
-3. Select **Save**. When you save, a request URL is automatically generated.
+3. Select **Save**. When you save, a request URL is automatically generated.
-4. Select **New step**, and select **Add an action**. Type in `oracle` to see the available actions:
+4. Select **New step**, and select **Add an action**. Type in `oracle` to see the available actions:
![A search box contains "oracle". The search produces one hit labeled "Oracle Database". There is a tabbed page, one tab showing "TRIGGERS (0)", another showing "ACTIONS (6)". Six actions are listed. The first of these is "Get row Preview".](./media/connectors-create-api-oracledatabase/oracledb-actions.png)

> [!TIP]
- > This is also the quickest way to see the triggers and actions available for any connector. Type in part of the connector name, such as `oracle`. The designer lists any triggers and any actions.
+ > This is also the quickest way to see the triggers and actions available for any connector. Type in part of the connector name, such as `oracle`. The designer lists any triggers and any actions.
5. Select one of the actions, such as **Oracle Database - Get row**. Select **Connect via on-premises data gateway**. Enter the Oracle server name, authentication method, username, password, and select the gateway:
- ![The dialog box is titled "Oracle Database - Get row". There is a box, checked, labeled "Connect via on-premise data gateway". Below that are the five other text boxes.](./media/connectors-create-api-oracledatabase/create-oracle-connection.png)
+ ![The dialog box is titled "Oracle Database - Get row". There is a box, checked, labeled "Connect via on-premises data gateway". Below that are the five other text boxes.](./media/connectors-create-api-oracledatabase/create-oracle-connection.png)
6. Once connected, select a table from the list, and enter the row ID to your table. You need to know the identifier to the table. If you don't know, contact your Oracle DB administrator, and get the output from `select * from yourTableName`. This gives you the identifiable information you need to proceed.
- In the following example, job data is being returned from a Human Resources database:
+ In the following example, job data is being returned from a Human Resources database:
![The dialog box titled "Get row (Preview)" has two text boxes: "Table name", which contains "H R JOBS" and has a drop-down list, and "Row i d", which contains "S A _ REP".](./media/connectors-create-api-oracledatabase/table-rowid.png)
This article shows you how to use the Oracle Database connector in a logic app.
#### **Error**: The provider being used is deprecated: 'System.Data.OracleClient requires Oracle client software version 8.1.7 or greater.'. See [https://go.microsoft.com/fwlink/p/?LinkID=272376](/power-bi/connect-data/desktop-connect-oracle-database) to install the official provider.
-**Cause**: The Oracle client SDK is not installed on the machine where the on-premises data gateway is running. 
+**Cause**: The Oracle client SDK is not installed on the machine where the on-premises data gateway is running.
**Resolution**: Download and install the Oracle client SDK on the same computer as the on-premises data gateway.

#### **Error**: Table '[Tablename]' does not define any key columns
-**Cause**: The table does not have any primary key. 
+**Cause**: The table does not have any primary key.
**Resolution**: The Oracle Database connector requires that a table with a primary key column be used.
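Before adding a table to a workflow, you can check whether it defines a primary key by querying Oracle's data dictionary views. A sketch (replace `YOURTABLENAME`, a placeholder, with your table's name in uppercase; no rows returned means the table has no primary key):

```sql
-- List the primary key columns of a table ('P' = primary key constraint)
SELECT acc.column_name
  FROM all_constraints ac
  JOIN all_cons_columns acc
    ON acc.constraint_name = ac.constraint_name
   AND acc.owner = ac.owner
 WHERE ac.constraint_type = 'P'
   AND ac.table_name = 'YOURTABLENAME';
```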
-
+ ## Connector-specific details
-View any triggers and actions defined in the swagger, and also see any limits in the [connector details](/connectors/oracle/).
+View any triggers and actions defined in the swagger, and also see any limits in the [connector details](/connectors/oracle/).
## Get some help
-The [Microsoft Q&A question page for Azure Logic Apps](/answers/topics/azure-logic-apps.html) is a great place to ask questions, answer questions, and see what other Logic Apps users are doing.
+The [Microsoft Q&A question page for Azure Logic Apps](/answers/topics/azure-logic-apps.html) is a great place to ask questions, answer questions, and see what other Logic Apps users are doing.
-You can help improve Logic Apps and connectors by voting and submitting your ideas at [https://aka.ms/logicapps-wish](https://aka.ms/logicapps-wish).
+You can help improve Logic Apps and connectors by voting and submitting your ideas at [https://aka.ms/logicapps-wish](https://aka.ms/logicapps-wish).
## Next steps
container-instances Container Instances Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-custom-dns.md
See the Azure quickstart template [Create an Azure container group with VNet](ht
[az-container-delete]: /cli/azure/container#az-container-delete
[az-network-vnet-delete]: /cli/azure/network/vnet#az-network-vnet-delete
[az-group-delete]: /cli/azure/group#az-group-delete
-[cloud-shell-bash]: /cloud-shell/overview.md
+[cloud-shell-bash]: /azure/cloud-shell/overview
cosmos-db Postgres Migrate Cosmos Db Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/postgres-migrate-cosmos-db-kafka.md
Data in PostgreSQL table will be pushed to Apache Kafka using the [Debezium Post
## Base setup
-### Set up PostgreSQL database if you haven't already.
+### Set up PostgreSQL database if you haven't already.
-This could be an existing on-premise database or you could [download and install one](https://www.postgresql.org/download/) on your local machine. It's also possible to use a [Docker container](https://hub.docker.com/_/postgres).
+This could be an existing on-premises database or you could [download and install one](https://www.postgresql.org/download/) on your local machine. It's also possible to use a [Docker container](https://hub.docker.com/_/postgres).
[!INCLUDE [pull-image-include](../../../includes/pull-image-include.md)]

To start a container:
CREATE TABLE retail.orders_by_customer (order_id int, customer_id int, purchase_
CREATE TABLE retail.orders_by_city (order_id int, customer_id int, purchase_amount int, city text, purchase_time timestamp, PRIMARY KEY (city,order_id)) WITH cosmosdb_cell_level_timestamp=true AND cosmosdb_cell_level_timestamp_tombstones=true AND cosmosdb_cell_level_timetolive=true; ```
-### Setup Apache Kafka
+### Set up Apache Kafka
This article uses a local cluster, but you can choose any other option. [Download Kafka](https://kafka.apache.org/downloads), unzip it, start the Zookeeper and Kafka cluster.
You can continue to insert more data into PostgreSQL and confirm that the record
* [Integrate Apache Kafka and Azure Cosmos DB Cassandra API using Kafka Connect](kafka-connect.md)
* [Integrate Apache Kafka Connect on Azure Event Hubs (Preview) with Debezium for Change Data Capture](../../event-hubs/event-hubs-kafka-connect-debezium.md)
* [Migrate data from Oracle to Azure Cosmos DB Cassandra API using Arcion](oracle-migrate-cosmos-db-arcion.md)
-* [Provision throughput on containers and databases](../set-throughput.md)
+* [Provision throughput on containers and databases](../set-throughput.md)
* [Partition key best practices](../partitioning-overview.md#choose-partitionkey)
* [Estimate RU/s using the Azure Cosmos DB capacity planner](../estimate-ru-with-capacity-planner.md)
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md
Azure portal support for role management is not available yet.
### Which SDKs in Azure Cosmos DB SQL API support RBAC?
-The [.NET V3](sql-api-sdk-dotnet-standard.md), [Java V4](sql-api-sdk-java-v4.md) and [JavaScript V3](sql-api-sdk-node.md) SDKs are currently supported.
+The [.NET V3](sql-api-sdk-dotnet-standard.md), [Java V4](sql-api-sdk-java-v4.md), [JavaScript V3](sql-api-sdk-node.md) and [Python V4.3+](sql-api-sdk-python.md) SDKs are currently supported.
### Is the Azure AD token automatically refreshed by the Azure Cosmos DB SDKs when it expires?
cosmos-db Monitor Cosmos Db Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-cosmos-db-reference.md
All the metrics corresponding to Azure Cosmos DB are stored in the namespace **C
|Metric (Metric Display Name)|Unit (Aggregation Type)|Description|Dimensions| Time granularities| Legacy metric mapping | Usage |
|--|--|--|--|--|--|--|
| MongoRequestCharge (Mongo Request Charge) | Count (Total) |Mongo Request Units Consumed| DatabaseName, CollectionName, Region, CommandName, ErrorCode| All |Mongo Query Request Charge, Mongo Update Request Charge, Mongo Delete Request Charge, Mongo Insert Request Charge, Mongo Count Request Charge| Used to monitor Mongo resource RUs in a minute.|
-| TotalRequestUnits (Total Request Units)| Count (Total) | Request Units consumed| DatabaseName, CollectionName, Region, StatusCode |All| TotalRequestUnits| Used to monitor Total RU usage at a minute granularity. To get average RU consumed per second, use Total aggregation at minute and divide by 60.|
+| TotalRequestUnits (Total Request Units)| Count (Total) | Request Units consumed| DatabaseName, CollectionName, Region, StatusCode |All| TotalRequestUnits| Used to monitor Total RU usage at a minute granularity. To get average RU consumed per second, use Sum aggregation at minute interval/level and divide by 60.|
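As a concrete illustration of the guidance above (the 6,000-RU figure is a made-up example value), the per-second average is derived from the one-minute Sum like this:

```python
# Derive average RU/s from the per-minute Sum of TotalRequestUnits.
def average_rus_per_second(total_rus_per_minute: float) -> float:
    # One minute has 60 seconds, so divide the minute-level Sum by 60.
    return total_rus_per_minute / 60

print(average_rus_per_second(6000))  # 100.0
```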
| ProvisionedThroughput (Provisioned Throughput)| Count (Maximum) |Provisioned throughput at container granularity| DatabaseName, ContainerName| 5M| | Used to monitor provisioned throughput per container.|

### Storage metrics
The following table lists the properties of resource logs in Azure Cosmos DB. Th
| **statusCode** | **statusCode_s** | The response status of the operation. |
| **requestResourceId** | **ResourceId** | The resourceId that pertains to the request. Depending on the operation performed, this value may point to `databaseRid`, `collectionRid`, or `documentRid`.|
| **clientIpAddress** | **clientIpAddress_s** | The client's IP address. |
-| **requestCharge** | **requestCharge_s** | The number of RU/s that are used by the operation |
+| **requestCharge** | **requestCharge_s** | The number of RUs that are used by the operation |
| **collectionRid** | **collectionId_s** | The unique ID for the collection.|
| **duration** | **duration_d** | The duration of the operation, in milliseconds. |
| **requestLength** | **requestLength_s** | The length of the request, in bytes. |
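These log fields can be exercised with a small parsing sketch. The record below is fabricated for illustration, using the Log Analytics field names documented above and only a few of the properties:

```python
import json

# Fabricated resource-log record; all values are made up.
record = json.loads("""
{
  "statusCode_s": "200",
  "clientIpAddress_s": "203.0.113.10",
  "requestCharge_s": "2.79",
  "duration_d": 4.2,
  "requestLength_s": "128"
}
""")

request_charge = float(record["requestCharge_s"])  # RUs consumed by the operation
duration_ms = float(record["duration_d"])          # duration in milliseconds
print(request_charge, duration_ms)
```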
cosmos-db Migrate Hbase To Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-hbase-to-cosmos-db.md
The key differences between the data structure of Azure Cosmos DB and HBase are
* HBase uses timestamp to version multiple instances of a given cell. You can query different versions of a cell using timestamp.
-* Azure Cosmos DB ships with the [Change feed feature](../change-feed.md) which tracks persistent record of changes to a container in the order they occur. It then outputs the sorted list of documents that were changed in the order in which they were modified.
+* Azure Cosmos DB ships with the [Change feed feature](../change-feed.md) which tracks persistent record of changes to a container in the order they occur. It then outputs the sorted list of documents that were changed in the order in which they were modified.
**Data format**
sqlline.py ZOOKEEPER/hbase-unsecure
#### Get the table details

```console
-!describe <Table Name>
+!describe <Table Name>
```

#### Get the index details
sqlline.py ZOOKEEPER/hbase-unsecure
### Get the primary key details

```console
-!primarykeys <Table Name>
+!primarykeys <Table Name>
```

## Migrate your data
Data Factory's Copy activity supports HBase as a data source. See the [Copy data
You can specify Cosmos DB (SQL API) as the destination for your data. See the [Copy and transform data in Azure Cosmos DB (SQL API) by using Azure Data Factory](../../data-factory/connector-azure-cosmos-db.md) article for more details.

### Migrate using Apache Spark - Apache HBase Connector & Cosmos DB Spark connector
For Azure Cosmos DB Spark connector, refer to the [Quick Start Guide](create-sql
|"personalPhone":{"cf":"Personal", "col":"Phone", "type":"string"} |} |}""".stripMargin
-
+ ```
-1. Next, define a method to get the data from the HBase Contacts table as a DataFrame.
+1. Next, define a method to get the data from the HBase Contacts table as a DataFrame.
```scala
def withCatalog(cat: String): DataFrame = {
For Azure Cosmos DB Spark connector, refer to the [Quick Start Guide](create-sql
    .format("org.apache.spark.sql.execution.datasources.hbase")
    .load()
}
-
+ ```

1. Create a DataFrame using the defined method.
The mappings for code migration are shown here, but the HBase RowKeys and Azure
**HBase**

```java
-Configuration config = HBaseConfiguration.create();
-config.set("hbase.zookeeper.quorum","zookeepernode0,zookeepernode1,zookeepernode2");
-config.set("hbase.zookeeper.property.clientPort", "2181");
-config.set("hbase.cluster.distributed", "true");
+Configuration config = HBaseConfiguration.create();
+config.set("hbase.zookeeper.quorum","zookeepernode0,zookeepernode1,zookeepernode2");
+config.set("hbase.zookeeper.property.clientPort", "2181");
+config.set("hbase.cluster.distributed", "true");
Connection connection = ConnectionFactory.createConnection(config)
```

**Phoenix**

```java
-//Use JDBC to get a connection to an HBase cluster
+//Use JDBC to get a connection to an HBase cluster
Connection conn = DriverManager.getConnection("jdbc:phoenix:server1,server2:3333",props);
```

**Azure Cosmos DB**

```java
-// Create sync client
-client = new CosmosClientBuilder()
- .endpoint(AccountSettings.HOST)
- .key(AccountSettings.MASTER_KEY)
- .consistencyLevel(ConsistencyLevel.{ConsistencyLevel})
- .contentResponseOnWriteEnabled(true)
+// Create sync client
+client = new CosmosClientBuilder()
+ .endpoint(AccountSettings.HOST)
+ .key(AccountSettings.MASTER_KEY)
+ .consistencyLevel(ConsistencyLevel.{ConsistencyLevel})
+ .contentResponseOnWriteEnabled(true)
    .buildClient();
```
client = new CosmosClientBuilder()
**HBase**

```java
-// create an admin object using the config
-HBaseAdmin admin = new HBaseAdmin(config);
-// create the table...
-HTableDescriptor tableDescriptor = new HTableDescriptor(TableName.valueOf("FamilyTable"));
-// ... with single column families
-tableDescriptor.addFamily(new HColumnDescriptor("ColFam"));
+// create an admin object using the config
+HBaseAdmin admin = new HBaseAdmin(config);
+// create the table...
+HTableDescriptor tableDescriptor = new HTableDescriptor(TableName.valueOf("FamilyTable"));
+// ... with single column families
+tableDescriptor.addFamily(new HColumnDescriptor("ColFam"));
admin.createTable(tableDescriptor);
```
CREATE IF NOT EXISTS FamilyTable ("id" BIGINT not null primary key, "ColFam"."la
**Azure Cosmos DB**

```java
-// Create database if not exists
-CosmosDatabaseResponse databaseResponse = client.createDatabaseIfNotExists(databaseName);
-database = client.getDatabase(databaseResponse.getProperties().getId());
+// Create database if not exists
+CosmosDatabaseResponse databaseResponse = client.createDatabaseIfNotExists(databaseName);
+database = client.getDatabase(databaseResponse.getProperties().getId());
-// Create container if not exists
-CosmosContainerProperties containerProperties = new CosmosContainerProperties("FamilyContainer", "/lastName");
+// Create container if not exists
+CosmosContainerProperties containerProperties = new CosmosContainerProperties("FamilyContainer", "/lastName");
-// Provision throughput
-ThroughputProperties throughputProperties = ThroughputProperties.createManualThroughput(400);
+// Provision throughput
+ThroughputProperties throughputProperties = ThroughputProperties.createManualThroughput(400);
-// Create container with 400 RU/s
-CosmosContainerResponse databaseResponse = database.createContainerIfNotExists(containerProperties, throughputProperties);
+// Create container with 400 RU/s
+CosmosContainerResponse databaseResponse = database.createContainerIfNotExists(containerProperties, throughputProperties);
container = database.getContainer(databaseResponse.getProperties().getId());
```
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quickstart-dotnet.md
# Quickstart: Azure Cosmos DB SQL API client library for .NET

[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]

> [!div class="op_single_selector"]
+>
> * [.NET](quickstart-dotnet.md)
+>
Get started with the Azure Cosmos DB client library for .NET to create databases, containers, and items within your account. Follow these steps to install the package and try out example code for basic tasks.

> [!NOTE]
> The [example code snippets](https://github.com/Azure-Samples/azure-cosmos-db-dotnet-quickstart) are available on GitHub as a .NET project.
-[API reference documentation](/dotnet/api/microsoft.azure.cosmos) | [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos)
+[API reference documentation](/dotnet/api/microsoft.azure.cosmos) | [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) | [Samples](samples-dotnet.md)
## Prerequisites
Get started with the Azure Cosmos DB client library for .NET to create databases
## Setting up
-This section walks you through creating an Azure Cosmos account and setting up a project that uses Azure Cosmos DB SQL API client library for .NET to manage resources.
+This section walks you through creating an Azure Cosmos account and setting up a project that uses Azure Cosmos DB SQL API client library for .NET to manage resources.
### Create an Azure Cosmos DB account
This quickstart will create a single Azure Cosmos DB account using the SQL API.
1. If you haven't already, sign in to Azure PowerShell using the [``Connect-AzAccount``](/powershell/module/az.accounts/connect-azaccount) cmdlet.
-1. Use the [``New-AzResourceGroup``](/powershell/module/az.resources/new-azresourcegroup) cmdlet to create a new resource group in your subscription.
+1. Use the [``New-AzResourceGroup``](/powershell/module/az.resources/new-azresourcegroup) cmdlet to create a new resource group in your subscription.
```azurepowershell-interactive
$parameters = @{
This quickstart will create a single Azure Cosmos DB account using the SQL API.
New-AzResourceGroup @parameters
```
-1. Use the [``New-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet to create a new Azure Cosmos DB SQL API account with default settings.
+1. Use the [``New-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet to create a new Azure Cosmos DB SQL API account with default settings.
```azurepowershell-interactive
$parameters = @{
This quickstart will create a single Azure Cosmos DB account using the SQL API.
1. Review the settings you provide, and then select **Create**. It takes a few minutes to create the account. Wait for the portal page to display **Your deployment is complete** before moving on.
-1. Select **Go to resource** to go to the Azure Cosmos DB account page.
+1. Select **Go to resource** to go to the Azure Cosmos DB account page.
:::image type="content" source="media/create-account-portal/cosmos-deployment-complete.png" lightbox="media/create-account-portal/cosmos-deployment-complete.png" alt-text="Screenshot of deployment page for Azure Cosmos D B SQL A P I resource.":::
This quickstart will create a single Azure Cosmos DB account using the SQL API.
:::image type="content" source="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" lightbox="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" alt-text="Screenshot of Keys page with various credentials for an Azure Cosmos D B SQL A P I account.":::
+#### [Resource Manager template](#tab/azure-resource-manager)
+
+> [!NOTE]
+> Azure Resource Manager templates are written in two syntaxes, JSON and Bicep. This sample uses the [Bicep](../../azure-resource-manager/bicep/overview.md) syntax. To learn more about the two syntaxes, see [comparing JSON and Bicep for templates](../../azure-resource-manager/bicep/compare-template-syntax.md).
+
+1. Create shell variables for *accountName*, *resourceGroupName*, and *location*.
+
+ ```azurecli-interactive
+ # Variable for resource group name
+ resourceGroupName="msdocs-cosmos"
+
+ # Variable for location
+ location="westus"
+
+    # Variable for account name with a randomly generated suffix
+ let suffix=$RANDOM*$RANDOM
+ accountName="msdocs-$suffix"
+ ```
+
+1. If you haven't already, sign in to the Azure CLI using the [``az login``](/cli/azure/reference-index#az-login) command.
+
+1. Use the [``az group create``](/cli/azure/group#az-group-create) command to create a new resource group in your subscription.
+
+ ```azurecli-interactive
+ az group create \
+ --name $resourceGroupName \
+ --location $location
+ ```
+
+1. Create a new ``.bicep`` file with the deployment template in the Bicep syntax.
+
+ :::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.documentdb/cosmosdb-sql-minimal/main.bicep":::
+
+1. Deploy the Azure Resource Manager (ARM) template with [``az deployment group create``](/cli/azure/deployment/group#az-deployment-group-create)
+specifying the filename using the **template-file** parameter and the name ``initial-bicep-deploy`` using the **name** parameter.
+
+ ```azurecli-interactive
+ az deployment group create \
+ --resource-group $resourceGroupName \
+ --name initial-bicep-deploy \
+ --template-file main.bicep \
+ --parameters accountName=$accountName
+ ```
+
+ > [!NOTE]
+ > In this example, we assume that the name of the Bicep file is **main.bicep**.
+
+1. Validate the deployment by showing metadata from the newly created account using [``az cosmosdb show``](/cli/azure/cosmosdb#az-cosmosdb-show).
+
+ ```azurecli-interactive
+ az cosmosdb show \
+ --resource-group $resourceGroupName \
+ --name $accountName
+ ```
+ ### Create a new .NET app
For more information about the hierarchy of different resources, see [working wi
You'll use the following .NET classes to interact with these resources:

-- [``CosmosClient``](/dotnet/api/microsoft.azure.cosmos.cosmosclient) - This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
-- [``Database``](/dotnet/api/microsoft.azure.cosmos.database) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
-- [``Container``](/dotnet/api/microsoft.azure.cosmos.container) - This class is a reference to a container that also may not exist in the service yet. The container is validated server-side when you attempt to work with it.
-- [``QueryDefinition``](/dotnet/api/microsoft.azure.cosmos.querydefinition) - This class represents a SQL query and any query parameters.
-- [``FeedIterator<>``](/dotnet/api/microsoft.azure.cosmos.feediterator-1) - This class represents an iterator that can track the current page of results and get a new page of results.
-- [``FeedResponse<>``](/dotnet/api/microsoft.azure.cosmos.feedresponse-1) - This class represents a single page of responses from the iterator. This type can be iterated over using a ``foreach`` loop.
+* [``CosmosClient``](/dotnet/api/microsoft.azure.cosmos.cosmosclient) - This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
+* [``Database``](/dotnet/api/microsoft.azure.cosmos.database) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
+* [``Container``](/dotnet/api/microsoft.azure.cosmos.container) - This class is a reference to a container that also may not exist in the service yet. The container is validated server-side when you attempt to work with it.
+* [``QueryDefinition``](/dotnet/api/microsoft.azure.cosmos.querydefinition) - This class represents a SQL query and any query parameters.
+* [``FeedIterator<>``](/dotnet/api/microsoft.azure.cosmos.feediterator-1) - This class represents an iterator that can track the current page of results and get a new page of results.
+* [``FeedResponse<>``](/dotnet/api/microsoft.azure.cosmos.feedresponse-1) - This class represents a single page of responses from the iterator. This type can be iterated over using a ``foreach`` loop.
## Code examples

-- [Authenticate the client](#authenticate-the-client)
-- [Create a database](#create-a-database)
-- [Create a container](#create-a-container)
-- [Create an item](#create-an-item)
-- [Get an item](#get-an-item)
-- [Query items](#query-items)
+* [Authenticate the client](#authenticate-the-client)
+* [Create a database](#create-a-database)
+* [Create a container](#create-a-container)
+* [Create an item](#create-an-item)
+* [Get an item](#get-an-item)
+* [Query items](#query-items)
-The sample code described in this article creates a database named ``adventureworks`` with a container named ``products``. The ``products`` database is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
+The sample code described in this article creates a database named ``adventureworks`` with a container named ``products``. The ``products`` container is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
For this sample code, the container will use the category as a logical partition key.
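To make the partition-key idea concrete, here's a small standalone sketch (the item data is invented, except for the one id shown in the sample output below): items that share a ``category`` value belong to the same logical partition and are stored and queried together.

```python
from collections import defaultdict

# Invented sample items; items with the same "category" value land in
# the same logical partition of the container.
products = [
    {"id": "68719518391", "category": "gear-surf-surfboards"},
    {"id": "68719518392", "category": "gear-surf-surfboards"},
    {"id": "68719518401", "category": "gear-climb-ropes"},
]

logical_partitions = defaultdict(list)
for item in products:
    logical_partitions[item["category"]].append(item["id"])

for key, ids in logical_partitions.items():
    print(key, ids)
```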
Created item: 68719518391 [gear-surf-surfboards]
When you no longer need the Azure Cosmos DB SQL API account, you can delete the corresponding resource group.
-#### [Azure CLI](#tab/azure-cli)
+### [Azure CLI / Resource Manager template](#tab/azure-cli+azure-resource-manager)
Use the [``az group delete``](/cli/azure/group#az-group-delete) command to delete the resource group.
Use the [``az group delete``](/cli/azure/group#az-group-delete) command to delet
az group delete --name $resourceGroupName
```
-#### [PowerShell](#tab/azure-powershell)
+### [PowerShell](#tab/azure-powershell)
Use the [``Remove-AzResourceGroup``](/powershell/module/az.resources/remove-azresourcegroup) cmdlet to delete the resource group.
$parameters = @{
Remove-AzResourceGroup @parameters
```
-#### [Portal](#tab/azure-portal)
+### [Portal](#tab/azure-portal)
1. Navigate to the resource group you previously created in the Azure portal.
cost-management-billing Create Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-subscription.md
Use the following procedure to create a subscription for yourself or for someone
1. Select the **Advanced** tab.
1. Select your **Subscription directory**. It's the Azure Active Directory (Azure AD) where the new subscription will get created.
1. Select a **Management group**. It's the Azure AD management group that the new subscription is associated with. You can only select management groups in the current directory.
-1. Select more or more **Subscription owners**. You can select only users or service principals in the selected subscription directory. You can't select guest directory users. If you select a service principal, enter its App ID.
+1. Select one or more **Subscription owners**. You can select only users or service principals in the selected subscription directory. You can't select guest directory users. If you select a service principal, enter its App ID.
:::image type="content" source="./media/create-subscription/create-subscription-advanced-tab.png" alt-text="Screenshot showing the Advanced tab where you can specify the directory, management group, and owner. " lightbox="./media/create-subscription/create-subscription-advanced-tab.png" :::
1. Select the **Tags** tab.
1. Enter tag pairs for **Name** and **Value**.
If you have questions or need help, [create a support request](https://go.micros
- [Add or change Azure subscription administrators](add-change-subscription-administrator.md)
- [Move resources to new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md)
- [Create management groups for resource organization and management](../../governance/management-groups/create-management-group-portal.md)
-- [Cancel your Azure subscription](cancel-azure-subscription.md)
+- [Cancel your Azure subscription](cancel-azure-subscription.md)
data-factory Ci Cd Github Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ci-cd-github-troubleshoot-guide.md
Previously updated : 04/18/2022 Last updated : 06/12/2022

# Troubleshoot CI-CD, Azure DevOps, and GitHub issues in Azure Data Factory and Synapse Analytics
The token was obtained from the original tenant, but the service is in guest ten
You should use the token issued from guest tenant. For example, you have to assign the same Azure Active Directory to be your guest tenant and your DevOps, so it can correctly set token behavior and use the correct tenant.
-### Template parameters in the parameters file are not valid
+### Template parameters in the parameters file aren't valid
#### Issue
Following section is not valid because package.json folder is not valid.
displayName: 'Validate'
```

It should have DataFactory included in customCommand like *'run build validate $(Build.Repository.LocalPath)/DataFactory/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/yourFactoryName'*. Make sure the generated YAML file for the higher stage has the required JSON artifacts.
-
-### Git Repository or Microsoft Purview connection disconnected
-
-#### Issue
-When deploying a service instance, the git repository or purview connection is disconnected.
-
-#### Cause
-If you have **Include in ARM template** selected for deploying global parameters, your service instance is included in the ARM template. As a result, other properties will be removed upon deployment.
-
-#### Resolution
-Unselect **Include in ARM template** and deploy global parameters with PowerShell as described in Global parameters in CI/CD.
+
### Extra left "[" displayed in published JSON file
After some amount of time, new pipeline runs begin to succeed without any user a
#### Cause
-There are several scenarios which can trigger this behavior, all of which involve a new version of a dependent resource being called by the old version of the parent resource. For example, suppose an existing child pipeline called by "Execute pipeline" is updated to have required parameters and the existing parent pipeline is updated to pass these parameters. If the deployment occurs during a parent pipeline execution, but before the **Execute Pipeline** activity, the old version of the pipeline will call the new version of the child pipeline, and the expected parameters will not be passed. This will cause the pipeline to fail with a *UserError*. This can also occur with other types of dependencies, such as if a breaking change is made to linked service during a pipeline run that references it.
+There are several scenarios that can trigger this behavior, all of which involve a new version of a dependent resource being called by the old version of the parent resource. For example, suppose an existing child pipeline called by "Execute pipeline" is updated to have required parameters and the existing parent pipeline is updated to pass these parameters. If the deployment occurs during a parent pipeline execution, but before the **Execute Pipeline** activity, the old version of the pipeline will call the new version of the child pipeline, and the expected parameters will not be passed. This will cause the pipeline to fail with a *UserError*. This can also occur with other types of dependencies, such as if a breaking change is made to linked service during a pipeline run that references it.
#### Resolution
New runs of the parent pipeline will automatically begin succeeding, so typicall
Need to parameterize linked service integration runtime

#### Cause
-This feature is not supported.
+This feature isn't supported.
#### Resolution

You have to manually select and set an integration runtime. You can use the PowerShell API to make this change as well. This change can have downstream implications.
You have to select manually and set an integration runtime. You can use PowerShe
Changing Integration runtime name during CI/CD deployment. #### Cause
-Parameterizing an entity reference (Integration runtime in Linked service, Dataset in activity, Linked Service in dataset) is not supported. Changing the runtime name during deployment will cause the depended resource (Resource referencing the Integration runtime) to become malformed with invalid reference.
+Parameterizing an entity reference (Integration runtime in Linked service, Dataset in activity, Linked Service in dataset) isn't supported. Changing the runtime name during deployment will cause the depended resource (Resource referencing the Integration runtime) to become malformed with invalid reference.
#### Resolution Data Factory requires you to have the same name and type of integration runtime across all stages of CI/CD.
Data Factory requires you to have the same name and type of integration runtime
### ARM template deployment failing with error DataFactoryPropertyUpdateNotSupported

##### Issue
-ARM template deployment fails with an error such as DataFactoryPropertyUpdateNotSupported: Updating property type is not supported.
+ARM template deployment fails with an error such as DataFactoryPropertyUpdateNotSupported: Updating property type isn't supported.
##### Cause
-The ARM template deployment is attempting to change the type of an existing integration runtime. This is not allowed and will cause a deployment failure because data factory requires the same name and type of integration runtime across all stages of CI/CD.
+The ARM template deployment is attempting to change the type of an existing integration runtime. This isn't allowed and will cause a deployment failure because data factory requires the same name and type of integration runtime across all stages of CI/CD.
##### Resolution

If you want to share integration runtimes across all stages, consider using a ternary factory just to contain the shared integration runtimes. You can use this shared factory in all of your environments as a linked integration runtime type. For more information, refer to [Continuous integration and delivery - Azure Data Factory](./continuous-integration-delivery.md#best-practices-for-cicd)
If you want to share integration runtimes across all stages, consider using a te
### GIT publish may fail because of PartialTempTemplates files

#### Issue
-When you have 1000s of old temporary ARM JSON files in the PartialTemplates folder, publish may fail.
+When you've 1000s of old temporary ARM JSON files in the PartialTemplates folder, publish may fail.
#### Cause

On publish, ADF fetches every file inside each folder in the collaboration branch. In the past, publishing generated two folders in the publish branch: PartialArmTemplates and LinkedTemplates. PartialArmTemplates files are no longer generated. However, because there can be many old files (thousands) in the PartialArmTemplates folder, this may result in many requests being made to GitHub on publish and the rate limit being hit.
data-factory Concepts Data Flow Performance Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance-pipelines.md
If you execute your data flow activities in sequence, it is recommended that you
## Overloading a single data flow
-If you put all of your logic inside of a single data flow, the service will execute the entire job on a single Spark instance. While this may seem like a way to reduce costs, it mixes together different logical flows and can be difficult to monitor and debug. If one component fails, all other parts of the job will fail as well. Organizing data flows by independent flows of business logic is recommended. If your data flow becomes too large, splitting it into separates components will make monitoring and debugging easier. While there is no hard limit on the number of transformations in a data flow, having too many will make the job complex.
+If you put all of your logic inside of a single data flow, the service will execute the entire job on a single Spark instance. While this may seem like a way to reduce costs, it mixes together different logical flows and can be difficult to monitor and debug. If one component fails, all other parts of the job will fail as well. Organizing data flows by independent flows of business logic is recommended. If your data flow becomes too large, splitting it into separate components will make monitoring and debugging easier. While there is no hard limit on the number of transformations in a data flow, having too many will make the job complex.
## Execute sinks in parallel
data-factory Connector Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce.md
Previously updated : 05/26/2022 Last updated : 06/10/2022

# Copy data from and to Salesforce using Azure Data Factory or Azure Synapse Analytics
Specifically, this Salesforce connector supports:
- Copying data from and to Salesforce production, sandbox, and custom domain.

>[!NOTE]
->This function supports copy of any schema from the above mentioned Salesforce environments, including the [Nonprofit Success Pack](https://www.salesforce.org/products/nonprofit-success-pack/) (NPSP). This allows you to bring your Salesforce nonprofit data into Azure, work with it in Azure data services, unify it with other data sets, and visualize it in Power BI for rapid insights.
+>This function supports copy of any schema from the above mentioned Salesforce environments, including the [Nonprofit Success Pack](https://www.salesforce.org/products/nonprofit-success-pack/) (NPSP).
The Salesforce connector is built on top of the Salesforce REST/Bulk API. When copying data from Salesforce, the connector automatically chooses between REST and Bulk APIs based on the data size: when the result set is large, Bulk API is used for better performance. You can explicitly set the API version used to read/write data via the [`apiVersion` property](#linked-service-properties) in the linked service. When copying data to Salesforce, the connector uses BULK API v1.
data-factory Data Access Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-access-strategies.md
Last updated 01/26/2022
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-A vital security goal of an organization is to protect their data stores from random access over the internet, may it be an on-premise or a Cloud/ SaaS data store.
+A vital security goal of an organization is to protect their data stores from random access over the internet, whether it is an on-premises or a cloud/SaaS data store.
Typically a cloud data store controls access using the below mechanisms:

* Private Link from a Virtual Network to Private Endpoint enabled data sources
> [!TIP]
> With the [introduction of Static IP address range](./azure-integration-runtime-ip-addresses.md), you can now allow list IP ranges for the particular Azure integration runtime region to ensure you don't have to allow all Azure IP addresses in your cloud data stores. This way, you can restrict the IP addresses that are permitted to access the data stores.
-> [!NOTE]
-> The IP address ranges are blocked for Azure Integration Runtime and is currently only used for Data Movement, pipeline and external activities. Dataflows and Azure Integration Runtime that enable Managed Virtual Network now do not use these IP ranges.
+> [!NOTE]
+> The IP address ranges are blocked for Azure Integration Runtime and are currently only used for Data Movement, pipeline, and external activities. Dataflows and Azure Integration Runtime that enable Managed Virtual Network do not use these IP ranges.
-This should work in many scenarios, and we do understand that a unique Static IP address per integration runtime would be desirable, but this wouldn't be possible using Azure Integration Runtime currently, which is serverless. If necessary, you can always set up a Self-hosted Integration Runtime and use your Static IP with it.
+This should work in many scenarios, and we do understand that a unique Static IP address per integration runtime would be desirable, but this wouldn't be possible using Azure Integration Runtime currently, which is serverless. If necessary, you can always set up a Self-hosted Integration Runtime and use your Static IP with it.
## Data access strategies through Azure Data Factory

* **[Private Link](../private-link/private-link-overview.md)** - You can create an Azure Integration Runtime within Azure Data Factory Managed Virtual Network and it will leverage private endpoints to securely connect to supported data stores. Traffic between the Managed Virtual Network and data sources travels the Microsoft backbone network and is not exposed to the public network.
-* **[Trusted Service](../storage/common/storage-network-security.md#exceptions)** - Azure Storage (Blob, ADLS Gen2) supports firewall configuration that enables select trusted Azure platform services to access the storage account securely. Trusted Services enforces Managed Identity authentication, which ensures no other data factory can connect to this storage unless approved to do so using it's managed identity. You can find more details in **[this blog](https://techcommunity.microsoft.com/t5/azure-data-factory/data-factory-is-now-a-trusted-service-in-azure-storage-and-azure/ba-p/964993)**. Hence, this is extremely secure and recommended.
-* **Unique Static IP** - You will need to set up a self-hosted integration runtime to get a Static IP for Data Factory connectors. This mechanism ensures you can block access from all other IP addresses.
+* **[Trusted Service](../storage/common/storage-network-security.md#exceptions)** - Azure Storage (Blob, ADLS Gen2) supports firewall configuration that enables select trusted Azure platform services to access the storage account securely. Trusted Services enforce Managed Identity authentication, which ensures no other data factory can connect to this storage unless approved to do so using its managed identity. You can find more details in **[this blog](https://techcommunity.microsoft.com/t5/azure-data-factory/data-factory-is-now-a-trusted-service-in-azure-storage-and-azure/ba-p/964993)**. This makes it highly secure and recommended.
+* **Unique Static IP** - You will need to set up a self-hosted integration runtime to get a Static IP for Data Factory connectors. This mechanism ensures you can block access from all other IP addresses.
* **[Static IP range](./azure-integration-runtime-ip-addresses.md)** - You can use Azure Integration Runtime's IP addresses to allow list them in your storage (say S3, Salesforce, etc.). It certainly restricts IP addresses that can connect to the data stores but also relies on Authentication/Authorization rules.
* **[Service Tag](../virtual-network/service-tags-overview.md)** - A service tag represents a group of IP address prefixes from a given Azure service (like Azure Data Factory). Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to network security rules. It is useful when filtering data access on IaaS hosted data stores in Virtual Network.
-* **Allow Azure Services** - Some services lets you allow all Azure services to connect to it in case you choose this option.
+* **Allow Azure Services** - Some services let you allow all Azure services to connect to them if you choose this option.
-For more information about supported network security mechanisms on data stores in Azure Integration Runtime and Self-hosted Integration Runtime, see below two tables.
+For more information about supported network security mechanisms on data stores in Azure Integration Runtime and Self-hosted Integration Runtime, see the two tables below.
* **Azure Integration Runtime**

  | Data Stores | Supported Network Security Mechanism on Data Stores | Private Link | Trusted Service | Static IP range | Service Tags | Allow Azure Services |
  |--|--|--|--|--|--|--|
  | | Azure SQL DB, Azure Synapse Analytics, SQL MI | Yes (only Azure SQL DB/DW) | - | Yes | - | Yes |
  | | Azure Key Vault (for fetching secrets/ connection string) | Yes | Yes | Yes | - | - |
  | Other PaaS/ SaaS Data stores | AWS S3, SalesForce, Google Cloud Storage, etc. | - | - | Yes | - | - |
- | Azure laaS | SQL Server, Oracle, etc. | - | - | Yes | Yes | - |
- | On-premise laaS | SQL Server, Oracle, etc. | - | - | Yes | - | - |
-
- **Applicable only when Azure Data Explorer is virtual network injected, and IP range can be applied on NSG/ Firewall.*
+ | Azure IaaS | SQL Server, Oracle, etc. | - | - | Yes | Yes | - |
+ | On-premises IaaS | SQL Server, Oracle, etc. | - | - | Yes | - | - |
+
+ **Applicable only when Azure Data Explorer is virtual network injected, and IP range can be applied on NSG/ Firewall.*
+
+* **Self-hosted Integration Runtime (in VNet/on-premises)**
-* **Self-hosted Integration Runtime (in Vnet/on-premise)**
-
| Data Stores | Supported Network Security Mechanism on Data Stores | Static IP | Trusted Services |
|--|--|--|--|
| Azure PaaS Data stores | Azure Cosmos DB | Yes | - |
| | Azure Key Vault (for fetching secrets/ connection string) | Yes | Yes |
| Other PaaS/ SaaS Data stores | AWS S3, SalesForce, Google Cloud Storage, etc. | Yes | - |
| Azure IaaS | SQL Server, Oracle, etc. | Yes | - |
- | On-premise laaS | SQL Server, Oracle, etc. | Yes | - |
+ | On-premises IaaS | SQL Server, Oracle, etc. | Yes | - |
## Next steps
data-factory Encrypt Credentials Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/encrypt-credentials-self-hosted-integration-runtime.md
Title: Encrypt credentials in Azure Data Factory
-description: Learn how to encrypt and store credentials for your on-premises data stores on a machine with self-hosted integration runtime.
+ Title: Encrypt credentials in Azure Data Factory
+description: Learn how to encrypt and store credentials for your on-premises data stores on a machine with self-hosted integration runtime.
Last updated 01/27/2022
You pass a JSON definition file with credentials to the <br/>[**New-AzDataFactor
## Create a linked service with encrypted credentials
-This example shows how to create a linked service to an on-premise SQL Server data source with encrypted credentials.
+This example shows how to create a linked service to an on-premises SQL Server data source with encrypted credentials.
### Create initial linked service JSON file description
-Create a JSON file named **SqlServerLinkedService.json** in any folder with the following content:
+Create a JSON file named **SqlServerLinkedService.json** in any folder with the following content:
-Replace `<servername>`, `<databasename>`, `<username>`, and `<password>` with values for your SQL Server before saving the file. And, replace `<integration runtime name>` with the name of your integration runtime.
+Replace `<servername>`, `<databasename>`, `<username>`, and `<password>` with values for your SQL Server before saving the file. And, replace `<integration runtime name>` with the name of your integration runtime.
```json {
New-AzDataFactoryV2LinkedServiceEncryptedCredential -DataFactoryName $dataFactor
Now, use the output JSON file from the previous command containing the encrypted credential to set up the **SqlServerLinkedService**.

```powershell
Set-AzDataFactoryV2LinkedService -DataFactoryName $dataFactoryName -ResourceGroupName $ResourceGroupName -Name "EncryptedSqlServerLinkedService" -DefinitionFile ".\encryptedSqlServerLinkedService.json"
```

## Next steps
data-factory Format Delta https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-delta.md
Previously updated : 01/26/2022 Last updated : 06/13/2022
The below table lists the properties supported by a delta sink. You can edit the
| Vacuum | Specify retention threshold in hours for older versions of table. A value of 0 or less defaults to 30 days | yes | Integer | vacuum |
| Update method | Specify which update operations are allowed on the delta lake. For methods that aren't insert, a preceding alter row transformation is required to mark rows. | yes | `true` or `false` | deletable <br> insertable <br> updateable <br> merge |
| Optimized Write | Achieve higher throughput for write operation via optimizing internal shuffle in Spark executors. As a result, you may notice fewer partitions and files that are of a larger size | no | `true` or `false` | optimizedWrite: true |
-| Auto Compact | After any write operation has completed, Spark will automatically execute the ```OPTIMIZE``` command to re-organize the data, resulting in more partitions if necessary, for better reading performance in the future | no | `true` or `false` | autoCompact: true |
+| Auto Compact | After any write operation has completed, Spark will automatically execute the ```OPTIMIZE``` command to re-organize the data, resulting in more partitions if necessary, for better reading performance in the future | no | `true` or `false` | autoCompact: true |
### Delta sink script example
moviesAltered sink(
folderPath: $tempPath + '/delta') ~> movieDB
```
+### Delta sink with partition pruning
+With this option under the Update method above (that is, update/upsert/delete), you can limit the number of partitions that are inspected. Only partitions satisfying this condition will be fetched from the target store. You can specify a fixed set of values that a partition column may take.
++
+### Delta sink script example with partition pruning
+
+A sample script is given below.
+
+```
+DerivedColumn1 sink(
+ input(movieId as integer,
+ title as string
+ ),
+ allowSchemaDrift: true,
+ validateSchema: false,
+ format: 'delta',
+ container: 'deltaContainer',
+ folderPath: 'deltaPath',
+ mergeSchema: false,
+ autoCompact: false,
+ optimizedWrite: false,
+ vacuum: 0,
+ deletable:false,
+ insertable:true,
+ updateable:true,
+ upsertable:false,
+ keys:['movieId'],
+ pruneCondition:['part_col' -> ([5, 8])],
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true) ~> sink2
+
+```
+Delta will only read the two partitions where **part_col == 5 or 8** from the target delta store instead of all partitions. *part_col* is a column that the target delta data is partitioned by. It need not be present in the source data.
+
+### Delta sink optimization options
+
+On the **Settings** tab, you will find three more options to optimize the delta sink transformation.
+
+* When **Merge schema** option is enabled, any columns that are present in the previous stream, but not in the Delta table, are automatically added on to the end of the schema.
+
+* When **Auto compact** is enabled, after an individual write, the transformation checks if files can further be compacted, and runs a quick OPTIMIZE job (with 128 MB file sizes instead of 1 GB) to further compact files for partitions that have the largest number of small files. Auto compaction helps coalesce a large number of small files into a smaller number of large files. Auto compaction only kicks in when there are at least 50 files. Once a compaction operation is performed, it creates a new version of the table and writes a new file containing the data of several previous files in a compact, compressed form.
+
+* When **Optimize write** is enabled, sink transformation dynamically optimizes partition sizes based on the actual data by attempting to write out 128 MB files for each table partition. This is an approximate size and can vary depending on dataset characteristics. Optimized writes improve the overall efficiency of the *writes and subsequent reads*. It organizes partitions such that the performance of subsequent reads will improve.
+ ### Known limitations
databox Data Box Deploy Export Ordered https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-export-ordered.md
# Tutorial: Create export order for Azure Data Box
-Azure Data Box is a hybrid solution that allows you to move data out of Azure into your location. This tutorial describes how to create an export order for Azure Data Box. The main reason to create an export order is for disaster recovery, in case on-premise storage gets compromised and a back-up needs to be restored.
+Azure Data Box is a hybrid solution that allows you to move data out of Azure to your location. This tutorial describes how to create an export order for Azure Data Box. The main reason to create an export order is disaster recovery, in case on-premises storage gets compromised and a backup needs to be restored.
In this tutorial, you learn about:
Perform the following steps in the Azure portal to order a device.
![Security screen showing Encryption type settings](./media/data-box-deploy-export-ordered/customer-managed-key-01.png)

11. Select **Customer managed key** as the key type. Then select **Select a key vault and key**.
-
+ ![Security screen, settings for a customer-managed key](./media/data-box-deploy-export-ordered/customer-managed-key-02.png)

12. On the **Select key from Azure Key Vault** screen, the subscription is automatically populated.
15. Select a user identity that you'll use to manage access to this resource. Choose **Select a user identity**. In the panel on the right, select the subscription and the managed identity to use. Then choose **Select**.
- A user-assigned managed identity is a stand-alone Azure resource that can be used to manage multiple resources. For more information, see [Managed identity types](../active-directory/managed-identities-azure-resources/overview.md).
+ A user-assigned managed identity is a stand-alone Azure resource that can be used to manage multiple resources. For more information, see [Managed identity types](../active-directory/managed-identities-azure-resources/overview.md).
If you need to create a new managed identity, follow the guidance in [Create, list, delete, or assign a role to a user-assigned managed identity using the Azure portal](../../articles/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md).
-
+ ![Select a user identity](./media/data-box-deploy-export-ordered/customer-managed-key-10.png)

The user identity is shown in **Encryption type** settings.
![A selected user identity shown in Encryption type settings](./media/data-box-deploy-export-ordered/customer-managed-key-11.png)
-16. If you want to enable software-based double encryption, expand **Double encryption (for high-security environments)**, and select **Enable double encryption for the order**.
+16. If you want to enable software-based double encryption, expand **Double encryption (for high-security environments)**, and select **Enable double encryption for the order**.
The software-based encryption is performed in addition to the AES-256 bit encryption of the data on the Data Box.
Follow these guidelines to create your XML file if you choose to select blobs an
### [Sample XML file](#tab/sample-xml-file)
-This sample XML file includes examples of each XML tag that is used to select blobs and files for export in a Data Box export order.
+This sample XML file includes examples of each XML tag that is used to select blobs and files for export in a Data Box export order.
- For XML file requirements, go to the **XML file overview** tab.
- For more examples of valid blob and file prefixes, go to the **Prefix examples** tab.
<BlobPathPrefix>/container</BlobPathPrefix> <!-- Exports all containers beginning with prefix: "container" -->
<BlobPathPrefix>/container1/2021Q2</BlobPathPrefix> <!-- Exports all blobs in container1 with prefix: "2021Q2" -->
</BlobList>
-
+ <!--AzureFileList selects individual files (FilePath) and multiple files (FilePathPrefix) in Azure File storage for export.-->
<AzureFileList>
<FilePath>/fileshare1/file.txt</FilePath> <!-- Exports /fileshare1/file.txt -->
Data Box copies data from the source storage account(s). Once the data copy is c
![Data Box export order, data copy complete](media/data-box-deploy-export-ordered/azure-data-box-export-order-data-copy-complete.png)
-The data export from Azure Storage to your Data Box can sometimes fail. Make sure that the blobs aren't archive blobs as export of these blobs is not supported.
+The data export from Azure Storage to your Data Box can sometimes fail. Make sure that the blobs aren't archive blobs as export of these blobs is not supported.
> [!NOTE]
> For archive blobs, you need to rehydrate those blobs before they can be exported from the Azure Storage account to your Data Box. For more information, see [Rehydrate an archive blob](../storage/blobs/storage-blob-rehydration.md).
databox Data Box Disk Deploy Upload Verify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-deploy-upload-verify.md
Previously updated : 12/17/2021 Last updated : 05/05/2022 # Customer intent: As an IT admin, I need to be able to order Data Box Disk to upload on-premises data from my server onto Azure.
When Microsoft receives and scans the disk, job status is updated to **Received*
The data automatically gets copied once the disks are connected to a server in the Azure datacenter. Depending upon the data size, the copy operation may take a few hours to days to complete. You can monitor the copy job progress in the portal.
-Once the copy is complete, order status updates to **Completed**. The **DATA COPY DETAILS** show the path to the copy log, which reports any errors during the data copy.
+Once the copy is complete, order status updates to **Completed**. The **DATA COPY DETAILS** show the path to the copy log, which reports any errors during the data copy.
-![Screenshot of the Overview pane for a Data Box Disk import order in Copy Completed state. The Overview option, Copy Completed order status, and Copy Log Path are highlighted.](media/data-box-disk-deploy-picked-up/data-box-portal-completed.png)
+As of March 2022, you can choose **View by Storage Account(s)** or **View by Managed Disk(s)** to display the data copy details.
+
+[![Screenshot of the Data Copy Details.](media/data-box-disk-deploy-picked-up/data-box-portal-completed.png)](media/data-box-disk-deploy-picked-up/data-box-portal-completed-inline.png#lightbox)
+
+If you have an order from before March 2022, the data copy details will be shown as below:
+ If the copy completes with errors, see [troubleshoot upload errors](data-box-disk-troubleshoot-upload.md).
databox Data Box Export Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-export-logs.md
Title: Track and log Azure Data Box, Azure Data Box Heavy events for export order| Microsoft Docs
+ Title: Track and log Azure Data Box, Azure Data Box Heavy events for export order| Microsoft Docs
description: Describes how to track and log events at the various stages of your Azure Data Box and Azure Data Box Heavy export order.
You can control who can access your order when the order is first created. Set u
The two roles that can be defined for the Azure Data Box service are:

- **Data Box Reader** - have read-only access to an order(s) as defined by the scope. They can only view details of an order. They can't access any other details related to storage accounts or edit the order details such as address and so on.
+- **Data Box Contributor** - can only create an order to transfer data to a given storage account *if they already have write access to a storage account*. If they do not have access to a storage account, they can't even create a Data Box order to copy data to the account. This role does not define any Storage account related permissions nor grants access to storage accounts.
To restrict access to an order, you can:
You can track your order through the Azure portal and through the shipping carri
## Query activity logs during setup
+- Your Data Box arrives on your premises in a locked state. You can use the device credentials available in the Azure portal for your order.
When a Data Box is set up, you may need to know who accessed the device credentials. To figure out who accessed the **Device credentials** blade, you can query the Activity logs. Any action that involves accessing the **Device details > Credentials** blade is logged in the activity logs as a `ListCredentials` action.
## View logs during data copy
-Before you copy data from your Data Box, you can download and review *copy log* and *verbose log* for the data that was copied to the Data Box. These logs are generated when the data is copied from your Storage account in Azure to your Data Box.
+Before you copy data from your Data Box, you can download and review *copy log* and *verbose log* for the data that was copied to the Data Box. These logs are generated when the data is copied from your Storage account in Azure to your Data Box.
### Copy log
Here is a sample output of *copy log* when there were no errors and all the file
<TotalFiles_Blobs>5521</TotalFiles_Blobs>
<FilesErrored>0</FilesErrored>
</CopyLog>
-```
-
+```
+ Here is a sample output when the *copy log* has errors and some of the files failed to copy from Azure.

```output
<Status>Failed</Status>
<TotalFiles_Blobs>4</TotalFiles_Blobs>
<FilesErrored>3</FilesErrored>
-</CopyLog>
+</CopyLog>
```
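Because the copy log is XML, the failure counts can be checked programmatically before deciding how to retry. Here is a minimal sketch using Python's standard library; the element names follow the samples above, and the log content is illustrative:

```python
import xml.etree.ElementTree as ET

# Illustrative copy log, in the shape shown in the samples above.
copy_log = """<CopyLog>
  <Status>Failed</Status>
  <TotalFiles_Blobs>4</TotalFiles_Blobs>
  <FilesErrored>3</FilesErrored>
</CopyLog>"""

root = ET.fromstring(copy_log)
status = root.findtext("Status")
total = int(root.findtext("TotalFiles_Blobs"))
errored = int(root.findtext("FilesErrored"))

# Summarize how many files failed so you can decide on a retry strategy.
print(f"{status}: {errored} of {total} files errored")
```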
-You have the following options to export those files:
+You have the following options to export those files:
-- You can transfer the files that could not be copied over the network.
+- You can transfer the files that could not be copied over the network.
- If your data size was larger than the usable device capacity, then a partial copy occurs and all the files that were not copied are listed in this log. You can use this log as an input XML to create a new Data Box order and then copy over these files.

### Verbose log
The verbose log has the information in the following format:
`<file size = "file-size-in-bytes" crc64="cyclic-redundancy-check-string">\folder-path-on-data-box\name-of-file-copied.md</file>`
-Here is a sample output of the verbose log.
+Here is a sample output of the verbose log.
```powershell <File CloudFormat="BlockBlob" Path="validblobdata/test1.2.3.4" Size="1024" crc64="7573843669953104266">
The copy log path is also displayed on the **Overview** blade for the portal.
<!-- add a screenshot-->
-You can use these logs to verify that files copied from Azure match the data that was copied to your on-premises server.
+You can use these logs to verify that files copied from Azure match the data that was copied to your on-premises server.
Use your verbose log file:
- To verify against the actual names and the number of files that were copied from the Data Box.
- To verify against the actual sizes of the files.
+- To verify that the *crc64* corresponds to a non-zero string. A Cyclic Redundancy Check (CRC) computation is done during the export from Azure. The CRCs from the export and after the data is copied from Data Box to on-premises server can be compared. A CRC mismatch indicates that the corresponding files failed to copy properly.
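The checks above can be scripted. Here is a minimal sketch in Python that flags entries whose *crc64* is missing or zero; the attribute names follow the verbose log sample earlier, and the entry content is illustrative:

```python
import xml.etree.ElementTree as ET

# One illustrative verbose-log entry, in the shape shown earlier.
entry = ('<File CloudFormat="BlockBlob" Path="validblobdata/test1.2.3.4" '
         'Size="1024" crc64="7573843669953104266"/>')

f = ET.fromstring(entry)
crc64 = f.attrib.get("crc64", "0")

# A zero or missing CRC suggests the file did not copy properly.
if int(crc64) == 0:
    print(f"Possible copy failure: {f.attrib['Path']}")
else:
    print(f"OK: {f.attrib['Path']} ({f.attrib['Size']} bytes, crc64={crc64})")
```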
## Get chain of custody logs after data erasure
New Logon:
Logon GUID: {00000000-0000-0000-0000-000000000000}

Process Information:
 Process ID: 0x4
- Process Name:
+ Process Name:
Network Information:
 Workstation Name: -
 Source Network Address: -
Detailed Authentication Information:
Transited Services: -
 Package Name (NTLM only): -
 Key Length: 0
-This event is generated when a logon session is created. It is generated on the computer that was accessed.
-The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.
+This event is generated when a logon session is created. It is generated on the computer that was accessed.
+The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.
The logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network). The New Logon fields indicate the account for whom the new logon was created, i.e. the account that was logged on. The network fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases.
Here is a sample of the order history log from Azure portal:
- Microsoft Data Box Order Report -
-Name : gus-poland
-StartTime(UTC) : 9/19/2018 8:49:23 AM +00:00
-DeviceType : DataBox
+Name : gus-poland
+StartTime(UTC) : 9/19/2018 8:49:23 AM +00:00
+DeviceType : DataBox
- Data Box Activities -
Time(UTC) | Activity | Status | D
9/19/2018 8:49:26 AM | OrderCreated | Completed |
10/2/2018 7:32:53 AM | DevicePrepared | Completed |
-10/3/2018 1:36:43 PM | ShippingToCustomer | InProgress | Shipment picked up. Local Time : 10/3/2018 1:36:43 PM at AMSTERDAM-NLD
-10/4/2018 8:23:30 PM | ShippingToCustomer | InProgress | Processed at AMSTERDAM-NLD. Local Time : 10/4/2018 8:23:30 PM at AMSTERDAM-NLD
+10/3/2018 1:36:43 PM | ShippingToCustomer | InProgress | Shipment picked up. Local Time : 10/3/2018 1:36:43 PM at AMSTERDAM-NLD
+10/4/2018 8:23:30 PM | ShippingToCustomer | InProgress | Processed at AMSTERDAM-NLD. Local Time : 10/4/2018 8:23:30 PM at AMSTERDAM-NLD
10/4/2018 11:43:34 PM | ShippingToCustomer | InProgress | Departed Facility in AMSTERDAM-NLD. Local Time : 10/4/2018 11:43:34 PM at AMSTERDAM-NLD
-10/5/2018 8:13:49 AM | ShippingToCustomer | InProgress | Arrived at Delivery Facility in BRIGHTON-GBR. Local Time : 10/5/2018 8:13:49 AM at LAMBETH-GBR
-10/5/2018 9:13:24 AM | ShippingToCustomer | InProgress | With delivery courier. Local Time : 10/5/2018 9:13:24 AM at BRIGHTON-GBR
-10/5/2018 12:03:04 PM | ShippingToCustomer | Completed | Delivered - Signed for by. Local Time : 10/5/2018 12:03:04 PM at BRIGHTON-GBR
-1/25/2019 3:19:25 PM | ShippingToDataCenter | InProgress | Shipment picked up. Local Time : 1/25/2019 3:19:25 PM at BRIGHTON-GBR
-1/25/2019 8:03:55 PM | ShippingToDataCenter | InProgress | Processed at BRIGHTON-GBR. Local Time : 1/25/2019 8:03:55 PM at LAMBETH-GBR
-1/25/2019 8:04:58 PM | ShippingToDataCenter | InProgress | Departed Facility in BRIGHTON-GBR. Local Time : 1/25/2019 8:04:58 PM at BRIGHTON-GBR
-1/25/2019 9:06:09 PM | ShippingToDataCenter | InProgress | Arrived at Sort Facility LONDON-HEATHROW-GBR. Local Time : 1/25/2019 9:06:09 PM at LONDON-HEATHROW-GBR
-1/25/2019 9:48:54 PM | ShippingToDataCenter | InProgress | Processed at LONDON-HEATHROW-GBR. Local Time : 1/25/2019 9:48:54 PM at LONDON-HEATHROW-GBR
+10/5/2018 8:13:49 AM | ShippingToCustomer | InProgress | Arrived at Delivery Facility in BRIGHTON-GBR. Local Time : 10/5/2018 8:13:49 AM at LAMBETH-GBR
+10/5/2018 9:13:24 AM | ShippingToCustomer | InProgress | With delivery courier. Local Time : 10/5/2018 9:13:24 AM at BRIGHTON-GBR
+10/5/2018 12:03:04 PM | ShippingToCustomer | Completed | Delivered - Signed for by. Local Time : 10/5/2018 12:03:04 PM at BRIGHTON-GBR
+1/25/2019 3:19:25 PM | ShippingToDataCenter | InProgress | Shipment picked up. Local Time : 1/25/2019 3:19:25 PM at BRIGHTON-GBR
+1/25/2019 8:03:55 PM | ShippingToDataCenter | InProgress | Processed at BRIGHTON-GBR. Local Time : 1/25/2019 8:03:55 PM at LAMBETH-GBR
+1/25/2019 8:04:58 PM | ShippingToDataCenter | InProgress | Departed Facility in BRIGHTON-GBR. Local Time : 1/25/2019 8:04:58 PM at BRIGHTON-GBR
+1/25/2019 9:06:09 PM | ShippingToDataCenter | InProgress | Arrived at Sort Facility LONDON-HEATHROW-GBR. Local Time : 1/25/2019 9:06:09 PM at LONDON-HEATHROW-GBR
+1/25/2019 9:48:54 PM | ShippingToDataCenter | InProgress | Processed at LONDON-HEATHROW-GBR. Local Time : 1/25/2019 9:48:54 PM at LONDON-HEATHROW-GBR
1/25/2019 10:30:20 PM | ShippingToDataCenter | InProgress | Departed Facility in LONDON-HEATHROW-GBR. Local Time : 1/25/2019 10:30:20 PM at LONDON-HEATHROW-GBR
-1/28/2019 7:11:35 AM | ShippingToDataCenter | InProgress | Arrived at Delivery Facility in AMSTERDAM-NLD. Local Time : 1/28/2019 7:11:35 AM at AMSTERDAM-NLD
-1/28/2019 9:07:57 AM | ShippingToDataCenter | InProgress | With delivery courier. Local Time : 1/28/2019 9:07:57 AM at AMSTERDAM-NLD
-1/28/2019 1:35:56 PM | ShippingToDataCenter | InProgress | Scheduled for delivery. Local Time : 1/28/2019 1:35:56 PM at AMSTERDAM-NLD
+1/28/2019 7:11:35 AM | ShippingToDataCenter | InProgress | Arrived at Delivery Facility in AMSTERDAM-NLD. Local Time : 1/28/2019 7:11:35 AM at AMSTERDAM-NLD
+1/28/2019 9:07:57 AM | ShippingToDataCenter | InProgress | With delivery courier. Local Time : 1/28/2019 9:07:57 AM at AMSTERDAM-NLD
+1/28/2019 1:35:56 PM | ShippingToDataCenter | InProgress | Scheduled for delivery. Local Time : 1/28/2019 1:35:56 PM at AMSTERDAM-NLD
1/28/2019 2:57:48 PM | ShippingToDataCenter | Completed | Delivered - Signed for by. Local Time : 1/28/2019 2:57:48 PM at AMSTERDAM-NLD
1/29/2019 2:18:43 PM | PhysicalVerification | Completed |
1/29/2019 3:49:50 PM | DeviceBoot | Completed | Appliance booted up successfully.
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
If you've [configured automations](workflow-automation.md) or defined [alert sup
Access from a Tor exit node might indicate a threat actor trying to hide their identity.
-The alert is now tuned to generate only for authenticated access, which results in higher accuracy and confidence that the activity is malicious. This enhancement reduces the benign positive rate.
+The alert is now tuned to generate only for authenticated access, which results in higher accuracy and confidence that the activity is malicious. This enhancement reduces the benign positive rate.
An outlying pattern will have high severity, while less anomalous patterns will have medium severity.
Learn more about how to [Explore and manage your resources with asset inventory]
Updates in January include:
- [Azure Security Benchmark is now the default policy initiative for Azure Security Center](#azure-security-benchmark-is-now-the-default-policy-initiative-for-azure-security-center)
-- [Vulnerability assessment for on-premise and multicloud machines is released for general availability (GA)](#vulnerability-assessment-for-on-premise-and-multicloud-machines-is-released-for-general-availability-ga)
+- [Vulnerability assessment for on-premises and multicloud machines is released for general availability (GA)](#vulnerability-assessment-for-on-premises-and-multicloud-machines-is-released-for-general-availability-ga)
- [Secure score for management groups is now available in preview](#secure-score-for-management-groups-is-now-available-in-preview)
- [Secure score API is released for general availability (GA)](#secure-score-api-is-released-for-general-availability-ga)
- [Dangling DNS protections added to Azure Defender for App Service](#dangling-dns-protections-added-to-azure-defender-for-app-service)
To learn more, see the following pages:
- [Learn more about Azure Security Benchmark](/security/benchmark/azure/introduction) - [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md)
-### Vulnerability assessment for on-premise and multicloud machines is released for general availability (GA)
+### Vulnerability assessment for on-premises and multicloud machines is released for general availability (GA)
In October, we announced a preview for scanning Azure Arc-enabled servers with [Azure Defender for Servers](defender-for-servers-introduction.md)' integrated vulnerability assessment scanner (powered by Qualys).
The Security Center experience within SQL provides access to the following Secur
- **Security recommendations** – Security Center periodically analyzes the security state of all connected Azure resources to identify potential security misconfigurations. It then provides recommendations on how to remediate those vulnerabilities and improve organizations' security posture.
- **Security alerts** – a detection service that continuously monitors Azure SQL activities for threats such as SQL injection, brute-force attacks, and privilege abuse. This service triggers detailed and action-oriented security alerts in Security Center and provides options for continuing investigations with Azure Sentinel, Microsoft's Azure-native SIEM solution.
-- **Findings** – a vulnerability assessment service that continuously monitors Azure SQL configurations and helps remediate vulnerabilities. Assessment scans provide an overview of Azure SQL security states together with detailed security findings.
+- **Findings** – a vulnerability assessment service that continuously monitors Azure SQL configurations and helps remediate vulnerabilities. Assessment scans provide an overview of Azure SQL security states together with detailed security findings.
:::image type="content" source="media/release-notes/microsoft-defender-for-cloud-experience-in-sql.png" alt-text="Azure Security Center's security features for SQL are available from within Azure SQL":::
You can now see whether or not your subscriptions have the default Security Cent
Updates in October include:
-- [Vulnerability assessment for on-premise and multicloud machines (preview)](#vulnerability-assessment-for-on-premise-and-multicloud-machines-preview)
+- [Vulnerability assessment for on-premises and multicloud machines (preview)](#vulnerability-assessment-for-on-premises-and-multicloud-machines-preview)
- [Azure Firewall recommendation added (preview)](#azure-firewall-recommendation-added-preview)
- [Authorized IP ranges should be defined on Kubernetes Services recommendation updated with quick fix](#authorized-ip-ranges-should-be-defined-on-kubernetes-services-recommendation-updated-with-quick-fix)
- [Regulatory compliance dashboard now includes option to remove standards](#regulatory-compliance-dashboard-now-includes-option-to-remove-standards)
- [Microsoft.Security/securityStatuses table removed from Azure Resource Graph (ARG)](#microsoftsecuritysecuritystatuses-table-removed-from-azure-resource-graph-arg)
-### Vulnerability assessment for on-premise and multicloud machines (preview)
+### Vulnerability assessment for on-premises and multicloud machines (preview)
[Azure Defender for Servers](defender-for-servers-introduction.md)' integrated vulnerability assessment scanner (powered by Qualys) now scans Azure Arc-enabled servers.
properties: {
Query that references SecurityStatuses: ```kusto
-SecurityResources
+SecurityResources
| where type == 'microsoft.security/securitystatuses' and properties.type == 'virtualMachine'
-| where name in ({vmnames})
+| where name in ({vmnames})
| project name, resourceGroup, policyAssesments = properties.policyAssessments, resourceRegion = location, id, resourceDetails = properties.resourceDetails ```
source =~ "aws", properties.additionalData.AzureResourceId,
source =~ "gcp", properties.additionalData.AzureResourceId, extract("^(.+)/providers/Microsoft.Security/assessments/.+$",1,id))))) | extend resourceGroup = tolower(tostring(split(resourceId, "/")[4]))
-| where resourceName in ({vmnames})
+| where resourceName in ({vmnames})
| project resourceName, resourceGroup, resourceRegion = location, id, resourceDetails = properties.additionalData ```
Custom policies are now part of the Security Center recommendations experience,
Create a custom initiative in Azure Policy, add policies to it and onboard it to Azure Security Center, and visualize it as recommendations.
-We've now also added the option to edit the custom recommendation metadata. Metadata options include severity, remediation steps, threats information, and more.
+We've now also added the option to edit the custom recommendation metadata. Metadata options include severity, remediation steps, threats information, and more.
Learn more about [enhancing your custom recommendations with detailed information](custom-security-policies.md#enhance-your-custom-recommendations-with-detailed-information).
Now, you can add standards such as:
- **Canada Federal PBMM** - **Azure CIS 1.1.0 (new)** (which is a more complete representation of Azure CIS 1.1.0)
-In addition, we've recently added the [Azure Security Benchmark](/security/benchmark/azure/introduction), the Microsoft-authored Azure-specific guidelines for security and compliance best practices based on common compliance frameworks. Additional standards will be supported in the dashboard as they become available.
+In addition, we've recently added the [Azure Security Benchmark](/security/benchmark/azure/introduction), the Microsoft-authored Azure-specific guidelines for security and compliance best practices based on common compliance frameworks. Additional standards will be supported in the dashboard as they become available.
Learn more about [customizing the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
defender-for-cloud Supported Machines Endpoint Solutions Clouds Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-servers.md
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| [Virtual machine behavioral analytics (and security alerts)](alerts-reference.md) | ✔ | ✔ |
| [Fileless security alerts](alerts-reference.md#alerts-windows) | ✔ | ✔ |
| [Network-based security alerts](other-threat-protections.md#network-layer) | - | - |
-| [Just-in-time VM access](just-in-time-access-usage.md) | - | - |
+| [Just-in-time VM access](just-in-time-access-usage.md) | ✔ (Preview) | - |
| [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | ✔ | ✔ |
| [File integrity monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ |
| [Adaptive application controls](adaptive-application-controls.md) | ✔ | ✔ |
defender-for-iot How To Troubleshoot The Sensor And On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md
When signing into a preconfigured sensor for the first time, you'll need to perf
1. Select either **CyberX** or **Support**, and copy the unique identifier.
-1. Navigate to the Azure portal and select **Sites and Sensors**.
+1. Navigate to the Azure portal and select **Sites and Sensors**.
1. Select the **More Actions** drop down menu and select **Recover on-premises management console password**.
When signing into a preconfigured sensor for the first time, you'll need to perf
1. Select **Next**, and your user and system-generated password for your management console will then appear.

> [!NOTE]
- > When you sign in to a sensor or on-premise management console for the first time it will be linked to the subscription you connected it to. If you need to reset the password for the CyberX, or Support user you will need to select that subscription. For more information on recovering a CyberX, or Support user password, see [Recover the password for the on-premises management console, or the sensor](how-to-create-and-manage-users.md#recover-the-password-for-the-on-premises-management-console-or-the-sensor).
+ > When you sign in to a sensor or on-premises management console for the first time it will be linked to the subscription you connected it to. If you need to reset the password for the CyberX, or Support user you will need to select that subscription. For more information on recovering a CyberX, or Support user password, see [Recover the password for the on-premises management console, or the sensor](how-to-create-and-manage-users.md#recover-the-password-for-the-on-premises-management-console-or-the-sensor).
### Investigate a lack of traffic
-An indicator appears at the top of the console when the sensor recognizes that there's no traffic on one of the configured ports. This indicator is visible to all users. When this message appears, you can investigate where there's no traffic. Make sure the span cable is connected and there was no change in the span architecture.
+An indicator appears at the top of the console when the sensor recognizes that there's no traffic on one of the configured ports. This indicator is visible to all users. When this message appears, you can investigate where there's no traffic. Make sure the span cable is connected and there was no change in the span architecture.
### Check system performance
When a new sensor is deployed or a sensor is working slowly or not showing any a
1. In the Defender for IoT dashboard > **Overview**, make sure that `PPS > 0`.
1. In **Devices**, check that devices are being discovered.
-1. In **Data Mining**, generate a report.
+1. In **Data Mining**, generate a report.
1. In **Trends & Statistics** window, create a dashboard. 1. In **Alerts**, check that the alert was created.
-### Investigate a lack of expected alerts
+### Investigate a lack of expected alerts
If the **Alerts** window doesn't show an alert that you expected, verify the following:
To connect a sensor controlled by the management console to NTP:
Sometimes ICS devices are configured with external IP addresses. These ICS devices are not shown on the map. Instead of the devices, an internet cloud appears on the map. The IP addresses of these devices are included in the cloud image. Another indication of the same problem is when multiple internet-related alerts appear. Fix the issue as follows:
-1. Right-click the cloud icon on the device map and select **Export IP Addresses**.
+1. Right-click the cloud icon on the device map and select **Export IP Addresses**.
1. Copy the public ranges that are private, and add them to the subnet list. Learn more about [configuring subnets](how-to-control-what-traffic-is-monitored.md#configure-subnets). 1. Generate a new data-mining report for internet connections. 1. In the data-mining report, enter the administrator mode and delete the IP addresses of your ICS devices.
If an expected alert is not shown in the **Alerts** window, verify the following
- Check if the same alert already appears in the **Alerts** window as a reaction to a different security instance. If yes, and this alert has not been handled yet, a new alert is not shown.
-- Verify that you did not exclude this alert by using the **Alert Exclusion** rules in the on-premises management console.
+- Verify that you did not exclude this alert by using the **Alert Exclusion** rules in the on-premises management console.
### Tweak the Quality of Service (QoS)

To save your network resources, you can limit the number of alerts sent to external systems (such as emails or SIEM) in one sync operation between an appliance and the on-premises management console.
-The default is 50. This means that in one communication session between an appliance and the on-premises management console, there will be no more than 50 alerts to external systems.
+The default is 50. This means that in one communication session between an appliance and the on-premises management console, there will be no more than 50 alerts to external systems.
To limit the number of alerts, use the `notifications.max_number_to_report` property available in `/var/cyberx/properties/management.properties`. No restart is needed after you change this property.
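As an illustration of the change described above, here is a small, hypothetical Python helper for rewriting a `key=value` entry in a simple properties file. The property name and file path come from the text above; the helper function itself is illustrative, not part of the product:

```python
def set_property(path, key, value):
    """Set or append a key=value entry in a simple properties file."""
    with open(path) as f:
        lines = f.read().splitlines()
    prefix = key + "="
    for i, line in enumerate(lines):
        if line.startswith(prefix):
            lines[i] = prefix + str(value)  # update the existing entry
            break
    else:
        lines.append(prefix + str(value))   # key was absent: append it
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# On the appliance the target would be the file named in the article:
# set_property("/var/cyberx/properties/management.properties",
#              "notifications.max_number_to_report", 25)
```

Per the article, no restart is needed after the property is changed.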
digital-twins Quickstart 3D Scenes Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/quickstart-3d-scenes-studio.md
The scene will look like this:
You'll need an Azure subscription to complete this quickstart. If you don't have one already, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) now.
-You'll also need to download a sample 3D file to use for the scene in this quickstart. [Select this link to download RobotArms.glb](https://cardboardresources.blob.core.windows.net/public/RobotArms.glb).
+You'll also need to download a sample glTF (Graphics Language Transmission Format) 3D file to use for the scene in this quickstart. [Select this link to download RobotArms.glb](https://cardboardresources.blob.core.windows.net/public/RobotArms.glb).
## Set up Azure Digital Twins and sample data
You may also want to delete the downloaded sample 3D file from your local machin
Next, continue on to the Azure Digital Twins tutorials to build out your own Azure Digital Twins environment. > [!div class="nextstepaction"]
-> [Code a client app](tutorial-code.md)
+> [Code a client app](tutorial-code.md)
dms Migrate Mysql To Azure Mysql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migrate-mysql-to-azure-mysql-powershell.md
Title: "PowerShell: Run offline migration from MySQL database to Azure Database for MySQL using DMS"
-description: Learn to migrate an on-premise MySQL database to Azure Database for MySQL by using Azure Database Migration Service through PowerShell script.
+description: Learn to migrate an on-premises MySQL database to Azure Database for MySQL by using Azure Database Migration Service through PowerShell script.
event-grid Partner Events Overview For Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview-for-partners.md
Last updated 04/28/2022
-# Partner Events overview for partners - Azure Event Grid (preview)
+# Partner Events overview for partners - Azure Event Grid
Event Grid's **Partner Events** allows customers to **subscribe to events** that originate in a registered system using the same mechanism they would use for any other event source on Azure, such as an Azure service. Those registered systems that integrate with Event Grid are known as "partners". This feature also enables customers to **send events** to partner systems that support receiving and routing events to customer's solutions/endpoints in their platform. Typically, partners are software-as-a-service (SaaS) or [ERP](https://en.wikipedia.org/wiki/Enterprise_resource_planning) providers, but they might be corporate platforms wishing to make their events available to internal teams. They purposely integrate with Event Grid to realize end-to-end customer use cases that end on Azure (customers subscribe to events sent by partner) or end on a partner system (customers subscribe to Microsoft events sent by Azure Event Grid). Customers bank on Azure Event Grid to send events published by a partner to supported destinations such as webhooks, Azure Functions, Azure Event Hubs, or Azure Service Bus, to name a few. Customers also rely on Azure Event Grid to route events that originate in Microsoft services, such as Azure Storage, Outlook, Teams, or Azure AD, to partner systems where customer's solutions can react to them. With Partner Events, customers can build event-driven solutions across platforms and network boundaries to receive or send events reliably, securely, and at scale.

> [!NOTE]
Registrations are global. That is, they aren't associated with a particular Azur
### Channel A Channel is a nested resource to a Partner Namespace. A channel has two main purposes:
- - It's the resource type that allows you to create partner resources on a customer's Azure subscription. When you create a channel of type `partner topic`, a partner topic is created on a customer's Azure subscription. A partner topic is the customer's resource where events from a partner system. Similarly, when a channel of type `partner destination` is created, a partner destination is created on a customer's Azure subscription. Partner destinations are resources that represent a partner system endpoint to where events are delivered. A channel is the kind of resource, along with partner topics and partner destinations, that enable bi-directional event integration.
+ - It's the resource type that allows you to create partner resources on a customer's Azure subscription. When you create a channel of type `partner topic`, a partner topic is created on a customer's Azure subscription. A partner topic is the customer's resource where events from a partner system are delivered. Similarly, when a channel of type `partner destination` is created, a partner destination is created on a customer's Azure subscription. Partner destinations are resources that represent a partner system endpoint to where events are delivered. A channel along with partner topics and partner destinations enables bi-directional event integration.
A channel has the same lifecycle as its associated customer partner topic or destination. When a channel of type `partner topic` is deleted, for example, the associated customer's partner topic is deleted. Similarly, if the partner topic is deleted by the customer, the associated channel on your Azure subscription is deleted.
- It's a resource that is used to route events. A channel of type ``partner topic`` is used to route events to a customer's partner topic. It supports two types of routing modes.
You have two options:
## References
- * [Swagger](https://github.com/ahamad-MS/azure-rest-api-specs/blob/master/specification/eventgrid/resource-manager/Microsoft.EventGrid/preview/2020-04-01-preview/EventGrid.json)
+ * [Swagger](https://github.com/ahamad-MS/azure-rest-api-specs/blob/main/specification/eventgrid/resource-manager/Microsoft.EventGrid/stable/2022-06-15/EventGrid.json)
* [ARM template](/azure/templates/microsoft.eventgrid/allversions)
- * [ARM template schema](https://github.com/Azure/azure-resource-manager-schemas/blob/master/schemas/2020-04-01-preview/Microsoft.EventGrid.json)
- * [REST APIs](/azure/templates/microsoft.eventgrid/2020-04-01-preview/partnernamespaces)
+ * [ARM template schema](https://github.com/Azure/azure-resource-manager-schemas/blob/main/schemas/2022-06-15/Microsoft.EventGrid.json)
+ * [REST APIs](/rest/api/eventgrid/controlplane-version2021-10-15-preview/partner-namespaces)
* [CLI extension](/cli/azure/eventgrid)

### SDKs

* [.NET](https://www.nuget.org/packages/Microsoft.Azure.Management.EventGrid/5.3.1-preview)
- * [Python](https://pypi.org/project/azure-mgmt-eventgrid/3.0.0rc6/)
- * [Java](https://search.maven.org/artifact/com.microsoft.azure.eventgrid.v2020_04_01_preview/azure-mgmt-eventgrid/1.0.0-beta-3/jar)
- * [Ruby](https://rubygems.org/gems/azure_mgmt_event_grid/versions/0.19.0)
- * [JS](https://www.npmjs.com/package/@azure/arm-eventgrid/v/7.0.0)
+ * [Python](https://pypi.org/project/azure-mgmt-eventgrid/)
+ * [Java](https://search.maven.org/search?q=azure-mgmt-eventgrid)
+ * [Ruby](https://rubygems.org/gems/azure_mgmt_event_grid/)
+ * [JS](https://www.npmjs.com/package/@azure/arm-eventgrid)
* [Go](https://github.com/Azure/azure-sdk-for-go)
event-grid Partner Events Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview.md
Last updated 03/31/2022
-# Partner Events overview for customers - Azure Event Grid (preview)
+# Partner Events overview for customers - Azure Event Grid
Azure Event Grid's **Partner Events** allows customers to **subscribe to events** that originate in a registered system using the same mechanism they would use for any other event source on Azure, such as an Azure service. Those registered systems that integrate with Event Grid are known as "partners". This feature also enables customers to **send events** to partner systems that support receiving and routing events to customer's solutions/endpoints in their platform. Typically, partners are software-as-a-service (SaaS) or [ERP](https://en.wikipedia.org/wiki/Enterprise_resource_planning) providers, but they might be corporate platforms wishing to make their events available to internal teams.
event-hubs Event Hubs Auto Inflate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-auto-inflate.md
Title: Automatically scale up throughput units in Azure Event Hubs
description: Enable Auto-inflate on a namespace to automatically scale up throughput units (standard tier).
Previously updated : 05/26/2021
Last updated : 06/13/2022
# Automatically scale up Azure Event Hubs throughput units (standard tier)
For a premium Event Hubs namespace, the feature is automatically enabled. You ca
## Use Azure portal

In the Azure portal, you can enable the feature when creating a standard Event Hubs namespace or after the namespace is created. You can also set TUs for the namespace and specify the maximum limit of TUs.
-You can enable the Auto-inflate feature **when creating an Event Hubs namespace**. The follow image shows you how to enable the auto-inflate feature for a standard tier namespace and configure TUs to start with and the maximum number of TUs.
+You can enable the Auto-inflate feature **when creating an Event Hubs namespace**. The following image shows you how to enable the auto-inflate feature for a standard tier namespace and configure TUs to start with and the maximum number of TUs.
:::image type="content" source="./media/event-hubs-auto-inflate/event-hubs-auto-inflate.png" alt-text="Screenshot of enabling auto inflate at the time event hub creation for a standard tier namespace"::: With this option enabled, you can start small with your TUs and scale up as your usage needs increase. The upper limit for inflation doesn't immediately affect pricing, which depends on the number of TUs used per hour.
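To make that pricing point concrete, here is a tiny illustrative calculation. The hour-by-hour usage numbers are invented; the rule it encodes — billing follows the TUs actually used each hour, not the configured maximum — is the one stated above:

```python
# Hypothetical hourly TU usage for a namespace that starts at 1 TU
# and is allowed to auto-inflate up to a maximum of 10 TUs.
hourly_tus_used = [1, 1, 2, 4, 10, 10, 3, 1]
max_tus = 10

billed_tu_hours = sum(hourly_tus_used)             # hours actually consumed
ceiling_tu_hours = max_tus * len(hourly_tus_used)  # NOT what you are billed

print(billed_tu_hours, ceiling_tu_hours)  # 32 80
```

Raising the inflation ceiling therefore costs nothing by itself; charges grow only when traffic actually pushes the namespace to use more TUs.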
-To enable the Auto-inflate feature and modify its settings for an existing, follow these steps:
+To enable the Auto-inflate feature and modify its settings for an existing namespace, follow these steps:
1. On the **Event Hubs Namespace** page, select **Scale** under **Settings** on the left menu. 2. In the **Scale Settings** page, select the checkbox for **Enable** (if the autoscale feature wasn't enabled).
To enable the Auto-inflate feature and modify its settings for an existing, foll
## Use an Azure Resource Manager template
-You can enable Auto-inflate during an Azure Resource Manager template deployment. For example, set the
+You can enable the Auto-inflate feature during an Azure Resource Manager template deployment. For example, set the
`isAutoInflateEnabled` property to **true** and set `maximumThroughputUnits` to 10. For example: ```json
firewall-manager Rule Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/rule-hierarchy.md
Title: Use Azure Firewall policy to define a rule hierarchy
-description: Learn how to use Azure Firewall policy to define a rule hierarchy and enforce compliance.
+description: Learn how to use Azure Firewall policy to define a rule hierarchy and enforce compliance.
# Use Azure Firewall policy to define a rule hierarchy
-Security administrators need to manage firewalls and ensure compliance across on-premise and cloud deployments. A key component is the ability to provide application teams with flexibility to implement CI/CD pipelines to create firewall rules in an automated way.
+Security administrators need to manage firewalls and ensure compliance across on-premises and cloud deployments. A key component is the ability to provide application teams with flexibility to implement CI/CD pipelines to create firewall rules in an automated way.
Azure Firewall policy allows you to define a rule hierarchy and enforce compliance:
- Provides a hierarchical structure to overlay a central base policy on top of a child application team policy. The base policy has a higher priority and runs before the child policy.
-- Use an Azure custom role definition to prevent inadvertent base policy removal and provide selective access to rule collection groups within a subscription or resource group.
+- Use an Azure custom role definition to prevent inadvertent base policy removal and provide selective access to rule collection groups within a subscription or resource group.
## Solution overview

The high-level steps for this example are:
-1. Create a base firewall policy in the security team resource group.
+1. Create a base firewall policy in the security team resource group.
3. Define IT security-specific rules in the base policy. This adds a common set of rules to allow/deny traffic.
-4. Create application team policies that inherit the base policy.
+4. Create application team policies that inherit the base policy.
5. Define application team-specific rules in the policy. You can also migrate rules from pre-existing firewalls.
6. Create Azure Active Directory custom roles to provide fine-grained access to rule collection groups and add roles at a Firewall Policy scope. In the following example, Sales team members can edit rule collection groups for the Sales team's Firewall Policy. The same applies to the Database and Engineering teams.
7. Associate the policy to the corresponding firewall. An Azure firewall can have only one assigned policy. This requires each application team to have their own firewall.
Create policies for each of the application teams:
:::image type="content" source="media/rule-hierarchy/policy-hierarchy.png" alt-text="Policy hierarchy" border="false":::
-### Create custom roles to access the rule collection groups
+### Create custom roles to access the rule collection groups
Custom roles are defined for each application team. The role defines operations and scope. The application teams are allowed to edit rule collection groups for their respective applications.
Use the following high-level procedure to define custom roles:
2. Run the following command: `Get-AzProviderOperation "Microsoft.Support/*" | FT Operation, Description -AutoSize`
-3. Use the Get-AzRoleDefinition command to output the Reader role in JSON format.
+3. Use the Get-AzRoleDefinition command to output the Reader role in JSON format.
`Get-AzRoleDefinition -Name "Reader" | ConvertTo-Json | Out-File C:\CustomRoles\ReaderSupportRole.json` 4. Open the ReaderSupportRole.json file in an editor.
Use the following high-level procedure to define custom roles:
The following shows the JSON output. For information about the different properties, see [Azure custom roles](../role-based-access-control/custom-roles.md).

```json
- {
-   "Name": "Reader",
-   "Id": "acdd72a7-3385-48ef-bd42-f606fba81ae7",
-   "IsCustom": false,
-   "Description": "Lets you view everything, but not make any changes.",
-   "Actions": [
-     "*/read"
-   ],
-   "NotActions": [],
-   "DataActions": [],
-   "NotDataActions": [],
-   "AssignableScopes": [
-     "/"
-   ]
- }
+ {
+   "Name": "Reader",
+   "Id": "acdd72a7-3385-48ef-bd42-f606fba81ae7",
+   "IsCustom": false,
+   "Description": "Lets you view everything, but not make any changes.",
+   "Actions": [
+     "*/read"
+   ],
+   "NotActions": [],
+   "DataActions": [],
+   "NotDataActions": [],
+   "AssignableScopes": [
+     "/"
+   ]
+ }
5. Edit the JSON file to add the
- `*/read", "Microsoft.Network/*/read", "Microsoft.Network/firewallPolicies/ruleCollectionGroups/write`
+ `*/read", "Microsoft.Network/*/read", "Microsoft.Network/firewallPolicies/ruleCollectionGroups/write`
operation to the **Actions** property. Be sure to include a comma after the read operation. This action allows the user to create and update rule collection groups.
6. In **AssignableScopes**, add your subscription ID with the following format:
Use the following high-level procedure to define custom roles:
Your JSON file should look similar to the following example: ```
-{
-
-    "Name":  "AZFM Rule Collection Group Author",
-    "IsCustom":  true,
-    "Description":  "Users in this role can edit Firewall Policy rule collection groups",
-    "Actions":  [
-                    "*/read",
-                    "Microsoft.Network/*/read",
-                     "Microsoft.Network/firewallPolicies/ruleCollectionGroups/write"
-                ],
-    "NotActions":  [
-                   ],
-    "DataActions":  [
-                    ],
-    "NotDataActions":  [
-                       ],
-    "AssignableScopes":  [
-                             "/subscriptions/xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxx"]
-}
+{
+
+    "Name":  "AZFM Rule Collection Group Author",
+    "IsCustom":  true,
+    "Description":  "Users in this role can edit Firewall Policy rule collection groups",
+    "Actions":  [
+                    "*/read",
+                    "Microsoft.Network/*/read",
+                     "Microsoft.Network/firewallPolicies/ruleCollectionGroups/write"
+                ],
+    "NotActions":  [
+                   ],
+    "DataActions":  [
+                    ],
+    "NotDataActions":  [
+                       ],
+    "AssignableScopes":  [
+                             "/subscriptions/xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxx"]
+}
```
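Steps 5 and 6 can also be scripted instead of edited by hand. A minimal sketch in Python (the subscription ID below is a placeholder):

```python
# Sketch: add the extra actions and an assignable scope to an exported
# role-definition JSON. The subscription ID is a placeholder.
import json

role = {
    "Name": "AZFM Rule Collection Group Author",
    "IsCustom": True,
    "Description": "Users in this role can edit Firewall Policy rule collection groups",
    "Actions": ["*/read"],
    "AssignableScopes": [],
}

# Step 5: append the read and rule-collection-group write operations.
role["Actions"] += [
    "Microsoft.Network/*/read",
    "Microsoft.Network/firewallPolicies/ruleCollectionGroups/write",
]

# Step 6: add the subscription scope.
role["AssignableScopes"].append(
    "/subscriptions/00000000-0000-0000-0000-000000000000"
)

print(json.dumps(role, indent=4))
```

Writing the result to a file then feeds directly into the `New-AzRoleDefinition -InputFile` step below.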
+9. To create the new custom role, use the New-AzRoleDefinition command and specify the JSON role definition file.
`New-AzRoleDefinition -InputFile "C:\CustomRoles\RuleCollectionGroupRole.json"`
Users don't have permissions to:
- Update the firewall policy hierarchy, DNS settings, or threat intelligence.
- Update firewall policies where they aren't members of the AZFM Rule Collection Group Author group.
+Security administrators can use base policy to enforce guardrails and block certain types of traffic (for example, ICMP) as required by their enterprise.
## Next steps
firewall-manager Secure Hybrid Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secure-hybrid-network.md
Title: 'Tutorial: Secure your hub virtual network using Azure Firewall Manager'
+description: In this tutorial, you learn how to secure your virtual network with Azure Firewall Manager using the Azure portal.
In this tutorial, you learn how to:
## Prerequisites
-A hybrid network uses the hub-and-spoke architecture model to route traffic between Azure VNets and on-premise networks. The hub-and-spoke architecture has the following requirements:
+A hybrid network uses the hub-and-spoke architecture model to route traffic between Azure VNets and on-premises networks. The hub-and-spoke architecture has the following requirements:
+- Set **AllowGatewayTransit** when peering VNet-Hub to VNet-Spoke. In a hub-and-spoke network architecture, a gateway transit allows the spoke virtual networks to share the VPN gateway in the hub, instead of deploying VPN gateways in every spoke virtual network.
Additionally, routes to the gateway-connected virtual networks or on-premises networks will automatically propagate to the routing tables for the peered virtual networks using the gateway transit. For more information, see [Configure VPN gateway transit for virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md).
If you don't have an Azure subscription, create a [free account](https://azure.m
1. For **Destination Type**, select **IP Address**. 1. For **Destination**, type **10.6.0.0/16**. 1. On the next rule row, enter the following information:
-
+ Name: type **AllowRDP**<br> Source: type **192.168.1.0/24**<br> Protocol: select **TCP**<br>
1. For **IPv4 address space**, type **10.5.0.0/16**. 1. Under **Subnet name**, select **default**. 1. Change the **Subnet name** to **AzureFirewallSubnet**. The firewall is in this subnet, and the subnet name **must** be AzureFirewallSubnet.
+1. For **Subnet address range**, type **10.5.0.0/26**.
1. Accept the other default settings, and then select **Save**. 1. Select **Review + create**. 1. Select **Create**.
1. For **IPv4 address space**, type **10.6.0.0/16**. 1. Under **Subnet name**, select **default**. 1. Change the **Subnet name** to **SN-Workload**.
+1. For **Subnet address range**, type **10.6.0.0/24**.
1. Accept the other default settings, and then select **Save**. 1. Select **Review + create**. 1. Select **Create**.
1. For **IPv4 address space**, type **192.168.0.0/16**. 1. Under **Subnet name**, select **default**. 1. Change the **Subnet name** to **SN-Corp**.
+1. For **Subnet address range**, type **192.168.1.0/24**.
1. Accept the other default settings, and then select **Save**. 2. Select **Add Subnet**. 3. For **Subnet name**, type **GatewaySubnet**.
Now peer the hub and spoke virtual networks.
2. In the left column, select **Peerings**. 3. Select **Add**. 4. Under **This virtual network**:
+ |Setting name |Value |
+ |---|---|
+ |Peering link name| HubtoSpoke|
+ |Traffic to remote virtual network| Allow (default) |
+ |Traffic forwarded from remote virtual network | Allow (default) |
+ |Virtual network gateway or route server | Use this virtual network's gateway |
+ 5. Under **Remote virtual network**:
+ |Setting name |Value |
frontdoor Create Front Door Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-cli.md
+
+ Title: Create an Azure Front Door Standard/Premium with the Azure CLI
+description: Learn how to create an Azure Front Door Standard/Premium with Azure CLI. Use Azure Front Door to deliver content to your global user base and protect your web apps against vulnerabilities.
++++ Last updated : 6/13/2022++++
+# Quickstart: Create an Azure Front Door Standard/Premium - Azure CLI
+
+In this quickstart, you'll learn how to create an Azure Front Door Standard/Premium profile using Azure CLI. You'll create this profile using two web apps as your origins, and add a WAF security policy. You can then verify connectivity to your web apps using the Azure Front Door endpoint hostname.
+++
+## Create a resource group
+
+In Azure, you allocate related resources to a resource group. You can either use an existing resource group or create a new one.
+
+Run [az group create](/cli/azure/group) to create resource groups.
+
+```azurecli
+az group create --name myRGFD --location centralus
+```
+## Create an Azure Front Door profile
+
+Run [az afd profile create](/cli/azure/afd/profile#az-afd-profile-create) to create an Azure Front Door profile.
+
+> [!NOTE]
+> If you want to deploy Azure Front Door Standard instead of Premium, substitute the value of the sku parameter with Standard_AzureFrontDoor. You won't be able to deploy managed rules with a WAF policy if you choose the Standard SKU. For a detailed comparison, see [Azure Front Door tier comparison](standard-premium/tier-comparison.md).
+
+```azurecli
+az afd profile create \
+ --profile-name contosoafd \
+ --resource-group myRGFD \
+ --sku Premium_AzureFrontDoor
+```
+
+## Create two instances of a web app
+
+You need two instances of a web application that run in different Azure regions for this tutorial. Both the web application instances run in Active/Active mode, so either one can service traffic.
+
+If you don't already have a web app, use the following script to set up two example web apps.
+
+### Create app service plans
+
+Before you can create the web apps, you'll need two app service plans: one in *Central US* and the second in *East US*.
+
+Run [az appservice plan create](/cli/azure/appservice/plan#az-appservice-plan-create&preserve-view=true) to create your app service plans.
+
+```azurecli
+az appservice plan create \
+ --name myAppServicePlanCentralUS \
+ --resource-group myRGFD
+
+az appservice plan create \
+ --name myAppServicePlanEastUS \
+ --resource-group myRGFD
+```
+
+### Create web apps
+
+Run [az webapp create](/cli/azure/webapp#az-webapp-create&preserve-view=true) to create a web app in each of the app service plans in the previous step. Web app names have to be globally unique.
+
+```azurecli
+az webapp create \
+ --name WebAppContoso-01 \
+ --resource-group myRGFD \
+ --plan myAppServicePlanCentralUS
+
+az webapp create \
+ --name WebAppContoso-02 \
+ --resource-group myRGFD \
+ --plan myAppServicePlanEastUS
+```
+
+Make note of the default host name of each web app so you can define the backend addresses when you deploy the Front Door in the next step.
+
+## Add an endpoint
+
+Run [az afd endpoint create](/cli/azure/afd/endpoint#az-afd-endpoint-create) to create an endpoint in your profile. You can create multiple endpoints in your profile after finishing the create experience.
+
+```azurecli
+az afd endpoint create \
+ --resource-group myRGFD \
+ --endpoint-name contosofrontend \
+ --profile-name contosoafd \
+ --enabled-state Enabled
+```
+
+## Create an origin group
+
+Run [az afd origin-group create](/cli/azure/afd/origin-group#az-afd-origin-group-create) to create an origin group that contains your two web apps.
+
+```azurecli
+az afd origin-group create \
+ --resource-group myRGFD \
+ --origin-group-name og \
+ --profile-name contosoafd \
+ --probe-request-type GET \
+ --probe-protocol Http \
+ --probe-interval-in-seconds 60 \
+ --probe-path / \
+ --sample-size 4 \
+ --successful-samples-required 3 \
+ --additional-latency-in-milliseconds 50
+```
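The probe settings above define a sliding window: the latest `--sample-size 4` probe results are considered, and at least `--successful-samples-required 3` of them must succeed for the origin to count as healthy. A toy sketch of that sampling rule (an illustration, not Front Door's actual algorithm):

```python
# Toy model of the health-probe sampling rule: of the latest `sample_size`
# probe results, at least `successful_samples_required` must have succeeded.
from collections import deque

def is_healthy(results, sample_size=4, successful_samples_required=3):
    window = deque(results, maxlen=sample_size)  # keep only the latest samples
    return sum(window) >= successful_samples_required

print(is_healthy([1, 1, 0, 1]))     # 3 of the last 4 succeeded -> True
print(is_healthy([1, 0, 0, 1, 0]))  # last 4 samples are 0, 0, 1, 0 -> False
```

The `--additional-latency-in-milliseconds 50` setting is separate: it widens the latency band within which origins are treated as equally close.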
+
+## Add an origin to the group
+
+Run [az afd origin create](/cli/azure/afd/origin#az-afd-origin-create) to add an origin to your origin group.
+
+```azurecli
+az afd origin create \
+ --resource-group myRGFD \
+ --host-name webappcontoso-01.azurewebsites.net \
+ --profile-name contosoafd \
+ --origin-group-name og \
+ --origin-name contoso1 \
+ --origin-host-header webappcontoso-01.azurewebsites.net \
+ --priority 1 \
+ --weight 1000 \
+ --enabled-state Enabled \
+ --http-port 80 \
+ --https-port 443
+```
+
+Repeat this step and add your second origin.
+
+```azurecli
+az afd origin create \
+ --resource-group myRGFD \
+ --host-name webappcontoso-02.azurewebsites.net \
+ --profile-name contosoafd \
+ --origin-group-name og \
+ --origin-name contoso2 \
+ --origin-host-header webappcontoso-02.azurewebsites.net \
+ --priority 1 \
+ --weight 1000 \
+ --enabled-state Enabled \
+ --http-port 80 \
+ --https-port 443
+```
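Both origins above use `--priority 1` and `--weight 1000`, so traffic is spread evenly between them; a lower priority value marks an origin as preferred, with higher values used as backups. A toy sketch of that selection logic (an illustration, not Front Door's implementation):

```python
# Toy origin selection: pick among the lowest-priority-value origins,
# weighted by their relative weights.
import random

origins = [
    {"name": "contoso1", "priority": 1, "weight": 1000},
    {"name": "contoso2", "priority": 1, "weight": 1000},
]

def pick_origin(origins, rng=random.random):
    best = min(o["priority"] for o in origins)        # preferred tier
    pool = [o for o in origins if o["priority"] == best]
    point = rng() * sum(o["weight"] for o in pool)    # weighted choice
    for origin in pool:
        point -= origin["weight"]
        if point <= 0:
            return origin["name"]
    return pool[-1]["name"]

print(pick_origin(origins))  # either origin; both are weighted equally
```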
+
+## Add a route
+
+Run [az afd route create](/cli/azure/afd/route#az-afd-route-create) to map your endpoint to the origin group. This route forwards requests from the endpoint to your origin group.
+
+```azurecli
+az afd route create \
+ --resource-group myRGFD \
+ --profile-name contosoafd \
+ --endpoint-name contosofrontend \
+ --forwarding-protocol MatchRequest \
+ --route-name route \
+ --https-redirect Enabled \
+ --origin-group og \
+ --supported-protocols Http Https \
+ --link-to-default-domain Enabled
+```
+
+## Create a new security policy
+
+### Create a WAF policy
+
+Run [az network front-door waf-policy create](/cli/azure/network/front-door/waf-policy#az-network-front-door-waf-policy-create) to create a new WAF policy for your Front Door. This example creates a policy that is enabled and in prevention mode.
+
+> [!NOTE]
+> Managed rules work only with the Front Door Premium SKU. You can opt for the Standard SKU below to use custom rules only.
+
+```azurecli
+az network front-door waf-policy create \
+ --name contosoWAF \
+ --resource-group myRGFD \
+ --sku Premium_AzureFrontDoor \
+ --disabled false \
+ --mode Prevention
+```
+
+> [!NOTE]
+> If you select `Detection` mode, your WAF doesn't block any requests.
+
+### Assign managed rules to the WAF policy
+Run [az network front-door waf-policy managed-rules add](/cli/azure/network/front-door/waf-policy/managed-rules#az-network-front-door-waf-policy-managed-rules-add) to add managed rules to your WAF Policy. This example adds Microsoft_DefaultRuleSet_1.2 and Microsoft_BotManagerRuleSet_1.0 to your policy.
++
+```azurecli
+az network front-door waf-policy managed-rules add \
+ --policy-name contosoWAF \
+ --resource-group myRGFD \
+ --type Microsoft_DefaultRuleSet \
+ --version 1.2
+```
+
+```azurecli
+az network front-door waf-policy managed-rules add \
+ --policy-name contosoWAF \
+ --resource-group myRGFD \
+ --type Microsoft_BotManagerRuleSet \
+ --version 1.0
+```
+### Create the security policy
+
+Run [az afd security-policy create](/cli/azure/afd/security-policy#az-afd-security-policy-create) to apply your WAF policy to the endpoint's default domain.
+
+> [!NOTE]
+> Replace 'mysubscription' with your Azure subscription ID in the domains and waf-policy parameters below. Run [az account subscription list](/cli/azure/account/subscription#az-account-subscription-list) to get your subscription ID.
++
+```azurecli
+az afd security-policy create \
+ --resource-group myRGFD \
+ --profile-name contosoafd \
+ --security-policy-name contososecurity \
+ --domains /subscriptions/mysubscription/resourcegroups/myRGFD/providers/Microsoft.Cdn/profiles/contosoafd/afdEndpoints/contosofrontend \
+ --waf-policy /subscriptions/mysubscription/resourcegroups/myRGFD/providers/Microsoft.Network/frontdoorwebapplicationfirewallpolicies/contosoWAF
+```
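The two resource IDs passed to `--domains` and `--waf-policy` follow a predictable pattern, which makes them easy to build from variables; a sketch (the subscription ID is a placeholder):

```python
# Build the endpoint and WAF-policy resource IDs from their parts.
sub = "00000000-0000-0000-0000-000000000000"  # placeholder subscription ID
rg, profile, endpoint, waf = "myRGFD", "contosoafd", "contosofrontend", "contosoWAF"

domain_id = (f"/subscriptions/{sub}/resourcegroups/{rg}"
             f"/providers/Microsoft.Cdn/profiles/{profile}/afdEndpoints/{endpoint}")
waf_id = (f"/subscriptions/{sub}/resourcegroups/{rg}"
          f"/providers/Microsoft.Network/frontdoorwebapplicationfirewallpolicies/{waf}")

print(domain_id)
print(waf_id)
```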
+
+## Verify Azure Front Door
+
+When you create the Azure Front Door Standard/Premium profile, it takes a few minutes for the configuration to be deployed globally. Once completed, you can access the frontend host you created.
+
+Run [az afd endpoint show](/cli/azure/afd/endpoint#az-afd-endpoint-show) to get the hostname of the Front Door endpoint.
+
+```azurecli
+az afd endpoint show --resource-group myRGFD --profile-name contosoafd --endpoint-name contosofrontend
+```
+In a browser, go to the endpoint hostname: `contosofrontend-<hash>.z01.azurefd.net`. Your request will automatically get routed to the web app with the lowest latency in the origin group.
++
+To test instant global failover, we'll use the following steps:
+
+1. Open a browser, as described above, and go to the endpoint hostname: `contosofrontend-<hash>.z01.azurefd.net`.
+
+2. Stop one of the Web Apps by running [az webapp stop](/cli/azure/webapp#az-webapp-stop&preserve-view=true)
+
+ ```azurecli
+ az webapp stop --name WebAppContoso-01 --resource-group myRGFD
+ ```
+
+3. Refresh your browser. You should see the same information page.
+
+> [!TIP]
+> There is a little bit of delay for these actions. You might need to refresh again.
+
+4. Find the other web app, and stop it as well.
+
+ ```azurecli
+ az webapp stop --name WebAppContoso-02 --resource-group myRGFD
+ ```
+
+5. Refresh your browser. This time, you should see an error message.
+
+ :::image type="content" source="./media/create-front-door-portal/web-app-stopped-message.png" alt-text="Screenshot of the message: Both instances of the web app stopped":::
++
+6. Restart one of the Web Apps by running [az webapp start](/cli/azure/webapp#az-webapp-start&preserve-view=true). Refresh your browser and the page will go back to normal.
+
+ ```azurecli
+ az webapp start --name WebAppContoso-01 --resource-group myRGFD
+ ```
+
+## Clean up resources
+
+When you no longer need the resources for the Front Door, delete the resource group. Deleting the resource group also deletes the Front Door and all its related resources.
+
+Run [az group delete](/cli/azure/group#az-group-delete&preserve-view=true):
+
+```azurecli
+az group delete --name myRGFD
+```
+
+## Next steps
+
+Advance to the next article to learn how to add a custom domain to your Front Door.
+> [!div class="nextstepaction"]
+> [Add a custom domain](standard-premium/how-to-add-custom-domain.md)
frontdoor Origin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/origin.md
zone_pivot_groups: front-door-tiers
::: zone pivot="front-door-classic" > [!NOTE]
-> An *Origin* and a *origin group* in this article refers to the backend and backend pool of the Azure Front Door (classic) configuration.
+> *Origin* and *origin group* in this article refers to the backend and backend pool of the Azure Front Door (classic) configuration.
> ::: zone-end
For more information, see [Least latency based routing method](routing-methods.m
- Learn how to [create an Azure Front Door (classic) profile](quickstart-create-front-door.md). - Learn about [Azure Front Door (classic) routing architecture](front-door-routing-architecture.md?pivots=front-door-classic).
frontdoor Quickstart Create Front Door https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door.md
documentationcenter: na
Previously updated : 04/19/2021 Last updated : 06/08/2022
If you don't already have a web app, use the following steps to set up example w
1. Sign in to the Azure portal at https://portal.azure.com.
-1. On the top left-hand side of the screen, select **Create a resource** > **WebApp**.
+1. On the top left-hand side of the screen, select **Create a resource** > **Web App**.
- :::image type="content" source="media/quickstart-create-front-door/front-door-create-web-app.png" alt-text="Create a web app in the Azure portal":::
+ :::image type="content" source="media/quickstart-create-front-door/front-door-create-web-app.png" alt-text="Create a web app in the Azure portal." lightbox="./media/quickstart-create-front-door/front-door-create-web-app.png":::
1. In the **Basics** tab of **Create Web App** page, enter or select the following information.
| **Resource group** | Select **Create new** and enter *FrontDoorQS_rg1* in the text box.| | **Name** | Enter a unique **Name** for your web app. This example uses *WebAppContoso-1*. | | **Publish** | Select **Code**. |
- | **Runtime stack** | Select **.NET Core 2.1 (LTS)**. |
+ | **Runtime stack** | Select **.NET Core 3.1 (LTS)**. |
| **Operating System** | Select **Windows**. | | **Region** | Select **Central US**. | | **Windows Plan** | Select **Create new** and enter *myAppServicePlanCentralUS* in the text box. |
1. Select **Review + create**, review the **Summary**, and then select **Create**. It might take several minutes for the deployment to complete.
- :::image type="content" source="media/quickstart-create-front-door/create-web-app.png" alt-text="Review summary for web app":::
+ :::image type="content" source="media/quickstart-create-front-door/create-web-app.png" alt-text="Review summary for web app." lightbox="./media/quickstart-create-front-door/create-web-app.png":::
After your deployment is complete, create a second web app. Use the same procedure with the same values, except for the following values:
Configure Azure Front Door to direct user traffic based on the lowest latency between the two web app servers. To begin, add a frontend host for Azure Front Door.
-1. From the home page or the Azure menu, select **Create a resource**. Select **Networking** > **See All** > **Front Door**.
-
+1. From the home page or the Azure menu, select **Create a resource**. Select **Networking** > **See All** > **Front Door and CDN profiles**.
+1. On the Compare offerings page, select **Explore other offerings**. Then select **Azure Front Door (classic)**. Then select **Continue**.
1. In the **Basics** tab of **Create a Front Door** page, enter or select the following information, and then select **Next: Configuration**. | Setting | Value |
1. For **Host name**, enter a globally unique hostname. This example uses *contoso-frontend*. Select **Add**.
- :::image type="content" source="media/quickstart-create-front-door/add-frontend-host-azure-front-door.png" alt-text="Add a frontend host for Azure Front Door":::
+ :::image type="content" source="media/quickstart-create-front-door/add-frontend-host-azure-front-door.png" alt-text="Add a frontend host for Azure Front Door." lightbox="./media/quickstart-create-front-door/add-frontend-host-azure-front-door.png":::
Next, create a backend pool that contains your two web apps.
1. For **Name**, enter *myBackendPool*, then select **Add a backend**.
- :::image type="content" source="media/quickstart-create-front-door/front-door-add-backend-pool.png" alt-text="Add a backend pool":::
+ :::image type="content" source="media/quickstart-create-front-door/front-door-add-backend-pool.png" alt-text="Add a backend pool." lightbox="./media/quickstart-create-front-door/front-door-add-backend-pool.png":::
-1. In the **Add a backend** blade, select the following information and select **Add**.
+1. In the **Add a backend** pane, select the following information and select **Add**.
| Setting | Value | | | |
| **Subscription** | Select your subscription. | | **Backend host name** | Select the first web app you created. In this example, the web app was *WebAppContoso-1*. |
- **Leave all other fields default.*
+ **Leave all other fields default.**
- :::image type="content" source="media/quickstart-create-front-door/front-door-add-a-backend.png" alt-text="Add a backend host to your Front Door":::
+ :::image type="content" source="media/quickstart-create-front-door/front-door-add-a-backend.png" alt-text="Add a backend host to your Front Door." lightbox="./media/quickstart-create-front-door/front-door-add-a-backend.png":::
1. Select **Add a backend** again. Select the following information and select **Add**.
| **Subscription** | Select your subscription. | | **Backend host name** | Select the second web app you created. In this example, the web app was *WebAppContoso-2*. |
- **Leave all other fields default.*
+ **Leave all other fields default.**
-1. Select **Add** on the **Add a backend pool** blade to complete the configuration of the backend pool.
+1. Select **Add** on the **Add a backend pool** pane to complete the configuration of the backend pool.
- :::image type="content" source="media/quickstart-create-front-door/front-door-add-backend-pool-complete.png" alt-text="Add a backend pool for Azure Front Door":::
+ :::image type="content" source="media/quickstart-create-front-door/front-door-add-backend-pool-complete.png" alt-text="Add a backend pool for Azure Front Door." lightbox="./media/quickstart-create-front-door/front-door-add-backend-pool-complete.png":::
Finally, add a routing rule. A routing rule maps your frontend host to the backend pool. The rule forwards a request for `contoso-frontend.azurefd.net` to **myBackendPool**.
1. In **Add a rule**, for **Name**, enter *LocationRule*. Accept all the default values, then select **Add** to add the routing rule.
- :::image type="content" source="media/quickstart-create-front-door/front-door-add-a-rule.png" alt-text="Add a rule to your Front Door":::
+ :::image type="content" source="media/quickstart-create-front-door/front-door-add-a-rule.png" alt-text="Add a rule to your Front Door." lightbox="./media/quickstart-create-front-door/front-door-add-a-rule.png":::
>[!WARNING] > You **must** ensure that each of the frontend hosts in your Front Door has a routing rule with a default path (`/*`) associated with it. That is, across all of your routing rules there must be at least one routing rule for each of your frontend hosts defined at the default path (`/*`). Failing to do so may result in your end-user traffic not getting routed correctly. 1. Select **Review + Create**, and then **Create**.
- :::image type="content" source="media/quickstart-create-front-door/configuration-azure-front-door.png" alt-text="Configured Azure Front Door":::
+ :::image type="content" source="media/quickstart-create-front-door/configuration-azure-front-door.png" alt-text="Configured Azure Front Door." lightbox="./media/quickstart-create-front-door/configuration-azure-front-door.png":::
+ ## View Azure Front Door in action
-Once you create a Front Door, it takes a few minutes for the configuration to be deployed globally. Once complete, access the frontend host you created. In a browser, go to `contoso-frontend.azurefd.net`. Your request will automatically get routed to the nearest server to you from the specified servers in the backend pool.
+Once you create a Front Door, it takes a few minutes for the configuration to be deployed globally. Once complete, access the frontend host you created. In a browser, go to your frontend host address. Your request will automatically get routed to the nearest server to you from the specified servers in the backend pool.
If you created these apps in this quickstart, you'll see an information page. To test instant global failover in action, try the following steps:
-1. Open a browser, as described above, and go to the frontend address: `contoso-frontend.azurefd.net`.
+1. Open the resource group **FrontDoorQS_rg0** and select the frontend service.
+
+ :::image type="content" source="./media/quickstart-create-front-door/front-door-view-frontend-service.png" alt-text="Screenshot of frontend service." lightbox="./media/quickstart-create-front-door/front-door-view-frontend-service.png":::
+
+1. From the **Overview** page, copy the **Frontend host** address.
+
+ :::image type="content" source="./media/quickstart-create-front-door/front-door-view-frontend-host-address.png" alt-text="Screenshot of frontend host address." lightbox="./media/quickstart-create-front-door/front-door-view-frontend-host-address.png":::
+
+1. Open a browser, as described above, and go to your frontend address.
1. In the Azure portal, search for and select *App services*. Scroll down to find one of your web apps, **WebAppContoso-1** in this example.
1. Refresh your browser. This time, you should see an error message.
- :::image type="content" source="media/quickstart-create-front-door/web-app-stopped-message.png" alt-text="Both instances of the web app stopped":::
+ :::image type="content" source="media/quickstart-create-front-door/web-app-stopped-message.png" alt-text="Both instances of the web app stopped." lightbox="./media/quickstart-create-front-door/web-app-stopped-message.png":::
## Clean up resources
After you're done, you can remove all the items you created. Deleting a resource
1. Select the resource group, then select **Delete resource group**. >[!WARNING]
- >This action is irreversable.
+ >This action is irreversible.
1. Type the resource group name to verify, and then select **Delete**.
frontdoor Routing Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/routing-methods.md
The weighted method enables some useful scenarios:
By default, without session affinity, Azure Front Door forwards requests originating from the same client to different origins. The cookie-based session affinity feature is useful for stateful applications, and for scenarios where subsequent requests from the same user should be processed by the origin that handled the initial request, because it keeps a user session on the same origin. When you use managed cookies with a SHA256 hash of the origin URL as the identifier in the cookie, Azure Front Door can direct subsequent traffic from a user session to the same origin for processing.
-Session affinity can be enabled the origin group level in Azure Front Door Standard and Premium tier and front end host level in Azure Front Door (classic) for each of your configured domains (or subdomains). Once enabled, Azure Front Door adds a cookie to the user's session. Cookie-based session affinity allows Front Door to identify different users even if behind the same IP address, which in turn allows a more even distribution of traffic between your different origins.
+Session affinity can be enabled at the origin group level in the Azure Front Door Standard and Premium tiers, and at the frontend host level in Azure Front Door (classic), for each of your configured domains (or subdomains). Once enabled, Azure Front Door adds a cookie to the user's session. The cookies are called ASLBSA and ASLBSACORS. Cookie-based session affinity allows Front Door to identify different users even if they're behind the same IP address, which in turn allows a more even distribution of traffic between your different origins.
The lifetime of the cookie is the same as the user's session, because Front Door currently supports only session cookies.
> > Public proxies may interfere with session affinity. This is because establishing a session requires Front Door to add a session affinity cookie to the response, which cannot be done if the response is cacheable as it would disrupt the cookies of other clients requesting the same resource. To protect against this, session affinity will **not** be established if the origin sends a cacheable response when this is attempted. If the session has already been established, it does not matter if the response from the origin is cacheable. >
-> Session affinity will be established in the following circumstances, **unless** the response has an HTTP 304 status code:
-> - The response has specific values set for the `Cache-Control` header that prevents caching, such as *private* or *no-store*.
-> - The response contains an `Authorization` header that has not expired.
-> - The response has an HTTP 302 status code.
+> Session affinity will be established in the following circumstances:
+> - The response includes a `Cache-Control` header of *no-store*.
+> - If the response contains an `Authorization` header, the header must not be expired.
+> - The response has an HTTP 302 status code.
## Next steps
frontdoor Rules Match Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/rules-match-conditions.md
Previously updated : 03/22/2022 Last updated : 06/09/2022 zone_pivot_groups: front-door-tiers
The **request path** match condition identifies requests that include the specif
### Properties
+| Property | Supported values |
+|-|-|
+| Operator | <ul><li>Any operator from the [standard operator list](#operator-list).</li><li>**Wildcard**: Matches when the request path matches a wildcard expression. A wildcard expression can include the `*` character to match zero or more characters within the path. For example, the wildcard expression `files/customer*/file.pdf` matches the paths `files/customer1/file.pdf`, `files/customer109/file.pdf`, and `files/customer/file.pdf`, but does not match `files/customer2/anotherfile.pdf`.<ul><li>In the Azure portal: `Wildcards`, `Not Wildcards`</li><li>In ARM templates: `Wildcard`; use the `negateCondition` property to specify _Not Wildcards_</li></ul></li></ul> |
+| Value | One or more string or integer values representing the value of the request path to match. Don't include the leading slash. If multiple values are specified, they're evaluated using OR logic. |
+| Case transform | Any transform from the [standard string transforms list](#string-transform-list). |
| Property | Supported values |
|-|-|
| Operator | Any operator from the [standard operator list](#operator-list). |
| Value | One or more string or integer values representing the value of the request path to match. Don't include the leading slash. If multiple values are specified, they're evaluated using OR logic. |
| Case transform | Any transform from the [standard string transforms list](#string-transform-list). |

### Example

In this example, we match all requests where the request file path begins with `files/secure/`. We transform the request file extension to lowercase before evaluating the match, so requests to `files/SECURE/` and other case variations will also trigger this match condition.
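The wildcard examples in the table above can be approximated with Python's `fnmatch`, whose `*` likewise matches zero or more characters. This is an approximation for illustration and may not reproduce Front Door's wildcard semantics exactly:

```python
# Check the wildcard examples from the table with fnmatch (approximation).
from fnmatch import fnmatchcase

pattern = "files/customer*/file.pdf"

print(fnmatchcase("files/customer1/file.pdf", pattern))         # True
print(fnmatchcase("files/customer109/file.pdf", pattern))       # True
print(fnmatchcase("files/customer/file.pdf", pattern))          # True
print(fnmatchcase("files/customer2/anotherfile.pdf", pattern))  # False
```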
governance Samples By Category https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-category.md
Title: List of sample Azure Resource Graph queries by category description: List sample queries for Azure Resource-Graph. Categories include Tags, Azure Advisor, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 03/08/2022 Last updated : 06/09/2022 ++ # Azure Resource Graph sample queries by category
hdinsight Control Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/control-network-traffic.md
If you plan on using **network security groups** to control network traffic, per
1. Identify the Azure region that you plan to use for HDInsight. 2. Identify the service tags required by HDInsight for your region. There are multiple ways to obtain these service tags:
- 1. Consult the list of published service tags in [Network security group (NSG) service tags for Azure HDInsight](hdinsight-service-tags.md).
+ 1. Consult the list of published service tags in [Network security group (NSG) service tags for Azure HDInsight](hdinsight-service-tags.md).
2. If your region is not present in the list, use the [Service Tag Discovery API](../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api) to find a service tag for your region. 3. If you are unable to use the API, download the [service tag JSON file](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) and search for your desired region.
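As a local alternative for step 3 above, the downloaded service tag JSON file can be filtered with a short script. This is a sketch only; the `values`, `name`, `properties`, and `region` field names are assumed from the published service tag JSON schema:

```python
import json

def hdinsight_tags_for_region(path: str, region: str) -> list[str]:
    """Return HDInsight service tag names for a region, plus global tags.

    Assumes the downloadable service tag JSON shape:
    {"values": [{"name": "...", "properties": {"region": "...", ...}}, ...]}
    """
    with open(path) as f:
        doc = json.load(f)
    return [
        entry["name"]
        for entry in doc.get("values", [])
        if entry["name"].startswith("HDInsight")
        # An empty region marks a global (region-independent) tag.
        and entry.get("properties", {}).get("region", "") in ("", region)
    ]
```

For example, calling `hdinsight_tags_for_region("ServiceTags_Public.json", "eastus")` would list the HDInsight tags applicable to East US.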
For more information on controlling outbound traffic from HDInsight clusters, se
### Forced tunneling to on-premises
-Forced tunneling is a user-defined routing configuration where all traffic from a subnet is forced to a specific network or location, such as your on-premises network or Firewall. Forced tunneling of all data transfer back to on-premise is _not_ recommended due to large volumes of data transfer and potential performance impact.
+Forced tunneling is a user-defined routing configuration where all traffic from a subnet is forced to a specific network or location, such as your on-premises network or Firewall. Forced tunneling of all data transfer back to on-premises is _not_ recommended due to large volumes of data transfer and potential performance impact.
-Customers who are interested to setup forced tunneling, should use [custom metastores](./hdinsight-use-external-metadata-stores.md) and setup the appropriate connectivity from the cluster subnet or on-premise network to these custom metastores.
+Customers who want to set up forced tunneling should use [custom metastores](./hdinsight-use-external-metadata-stores.md) and set up the appropriate connectivity from the cluster subnet or on-premises network to these custom metastores.
To see an example of the UDR setup with Azure Firewall, see [Configure outbound network traffic restriction for Azure HDInsight clusters](hdinsight-restrict-outbound-traffic.md).
hdinsight General Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/general-guidelines.md
Use a new resource group for each cluster so that you can distinguish between cl
[Azure Active Directory Domain Services](../../active-directory-domain-services/overview.md) (Azure AD DS) provides managed domain services such as domain join, group policy, lightweight directory access protocol (LDAP), and Kerberos / NTLM authentication that is fully compatible with Windows Server Active Directory. Azure AD DS is required for secure clusters to join a domain.
-HDInsight can't depend on on-premise domain controllers or custom domain controllers, as it introduces too many fault points, credential sharing, DNS permissions, and so on. For more information, see [Azure AD DS FAQs](../../active-directory-domain-services/faqs.yml).
+HDInsight can't depend on on-premises domain controllers or custom domain controllers, as it introduces too many fault points, credential sharing, DNS permissions, and so on. For more information, see [Azure AD DS FAQs](../../active-directory-domain-services/faqs.yml).
### Azure AD DS instance
For more information, see [Azure AD UserPrincipalName population](../../active-d
### Password hash sync * Passwords are synced differently from other object types. Only non-reversible password hashes are synced in Azure AD and Azure AD DS
-* On-premise to Azure AD has to be enabled through AD Connect
+* On-premises to Azure AD has to be enabled through AD Connect
* Azure AD to Azure AD DS sync is automatic (latencies are under 20 minutes). * Password hashes are synced only when there's a changed password. When you enable password hash sync, all existing passwords don't get synced automatically as they're stored irreversibly. When you change the password, password hashes get synced.
healthcare-apis Iot Connector Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-power-bi.md
Title: MedTech service Microsoft Power BI - Azure Health Data Services
-description: In this article, you'll learn how to use the MedTech service and Power BI
+description: In this article, you'll learn how to use the MedTech service and Power BI
In this article, we'll explore using the MedTech service and Microsoft Power Bus
## MedTech service and Power BI reference architecture
-The reference architecture below shows the basic components of using Microsoft cloud services to enable Power BI on top of Internet of Medical Things (IoMT) and Fast Healthcare Interoperability Resources (FHIR&#174;) data.
+The reference architecture below shows the basic components of using Microsoft cloud services to enable Power BI on top of Internet of Medical Things (IoMT) and Fast Healthcare Interoperability Resources (FHIR&#174;) data.
You can even embed Power BI dashboards inside the Microsoft Teams client to further enhance care team coordination. For more information on embedding Power BI in Teams, visit [here](/power-bi/collaborate-share/service-embed-report-microsoft-teams). :::image type="content" source="media/iot-concepts/iot-connector-power-bi.png" alt-text="Screenshot of the MedTech service and Power BI." lightbox="media/iot-concepts/iot-connector-power-bi.png":::
-MedTech service can ingest IoT data from most IoT devices or gateways whatever the location, data center, or cloud.
+MedTech service can ingest IoT data from most IoT devices or gateways regardless of location, data center, or cloud.
We do encourage the use of Azure IoT services to assist with device/gateway connectivity.
We do encourage the use of Azure IoT services to assist with device/gateway conn
For some solutions, Azure IoT Central can be used in place of Azure IoT Hub.
-Azure IoT Edge can be used in with IoT Hub to create an on-premise endpoint for devices and/or in-device connectivity.
+Azure IoT Edge can be used with IoT Hub to create an on-premises endpoint for devices and/or in-device connectivity.
:::image type="content" source="media/iot-concepts/iot-connector-iot-edge-power-bi.png" alt-text="Screenshot of the MedTech service, IoT Hub, IoT Edge, and Power BI." lightbox="media/iot-concepts/iot-connector-iot-edge-power-bi.png":::
healthcare-apis Iot Connector Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-teams.md
Title: MedTech service and Teams notifications - Azure Health Data Services
-description: In this article, you'll learn how to use the MedTech service and Teams notifications
+description: In this article, you'll learn how to use the MedTech service and Teams notifications
In this article, we'll explore using the MedTech service and Microsoft Teams for
## MedTech service and Teams notifications reference architecture
-When combining MedTech service, a Fast Healthcare Interoperability Resources (FHIR&#174;) service, and Teams, you can enable multiple care solutions.
+When combining MedTech service, a Fast Healthcare Interoperability Resources (FHIR&#174;) service, and Teams, you can enable multiple care solutions.
-Below is the MedTech service to Teams notifications conceptual architecture for enabling the MedTech service, FHIR, and Teams Patient App.
+Below is the MedTech service to Teams notifications conceptual architecture for enabling the MedTech service, FHIR, and Teams Patient App.
You can even embed Power BI Dashboards inside the Microsoft Teams client. For more information on embedding Power BI in Microsoft Teams, visit [here](/power-bi/collaborate-share/service-embed-report-microsoft-teams). :::image type="content" source="media/iot-concepts/iot-connector-teams.png" alt-text="Screenshot of the MedTech service and Teams." lightbox="media/iot-concepts/iot-connector-teams.png":::
-The MedTech service for can ingest IoT data from most IoT devices or gateways regardless of location, data center, or cloud.
+The MedTech service can ingest IoT data from most IoT devices or gateways regardless of location, data center, or cloud.
We do encourage the use of Azure IoT services to assist with device/gateway connectivity.
We do encourage the use of Azure IoT services to assist with device/gateway conn
For some solutions, Azure IoT Central can be used in place of Azure IoT Hub.
-Azure IoT Edge can be used in with IoT Hub to create an on-premise end point for devices and/or in-device connectivity.
+Azure IoT Edge can be used with IoT Hub to create an on-premises endpoint for devices and/or in-device connectivity.
:::image type="content" source="media/iot-concepts/iot-connector-iot-edge-teams.png" alt-text="Screenshot of the MedTech service and IoT Edge." lightbox="media/iot-concepts/iot-connector-iot-edge-teams.png":::
iot-central Concepts Faq Apaas Paas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-apaas-paas.md
Title: Move from IoT Central to a PaaS solution | Microsoft Docs description: How do I move between aPaaS and PaaS solution approaches?--++ Last updated 06/09/2022
iot-central Overview Iot Central Api Tour https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-api-tour.md
Title: Take a tour of the Azure IoT Central API | Microsoft Docs
description: Become familiar with the key areas of the Azure IoT Central REST API. Use the API to create, manage, and use your IoT solution from client applications. Previously updated : 01/25/2022 Last updated : 06/10/2022
iot-central Overview Iot Central Tour https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-tour.md
Title: Take a tour of the Azure IoT Central UI | Microsoft Docs description: Become familiar with the key areas of the Azure IoT Central UI that you use to create, manage, and use your IoT solution.-- Previously updated : 12/21/2021++ Last updated : 06/10/2022
iot-central Overview Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central.md
Title: What is Azure IoT Central | Microsoft Docs
description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. It helps to reduce the burden and cost of IoT management operations, and development. This article provides an overview of the features of Azure IoT Central. Previously updated : 12/22/2021 Last updated : 06/09/2022
iot-central Quick Configure Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-configure-rules.md
Title: Quickstart - Configure rules and actions in Azure IoT Central
description: This quickstart sho